
Future of Work: AI

2024/4/3

Pivot


Transcript

Support for this show comes from Constant Contact. Constant Contact's award-winning marketing platform has powerful tools that make it easy to grow your audience, engage your customers, and sell more to boost your business. In just a few clicks, you can launch a marketing campaign that's tailored to your business and goals. That includes email, social, SMS, and more.

So you can sell more, raise more, and fast track your business growth. So get going and start growing your business today with a free trial at ConstantContact.com. Constant Contact, helping the small stand tall.

Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network. I'm Kara Swisher. And I'm Scott Galloway. And this is our special three-part series on the future of work, where we look at the business and technology trends that will shape the workforce, employment, and the very nature of work. Today, we're going to do a deep dive on AI and how it will impact work as we know it.

Some numbers to get us started. 56% of U.S. workers report using generative AI to complete work tasks, according to a survey from The Conference Board. 22% of U.S. workers worry that technology will make their jobs obsolete, according to Gallup.

Only 26% of companies have established AI policies. We're going to talk through all this with our guest, Susan Athey. Susan is a professor of the economics of technology at Stanford Graduate School of Business. She's also the chief economist of the antitrust division at the U.S. Department of Justice, but is talking to us today about her work at Stanford because there's a lot going on at the Justice Department. Welcome, Susan. Hi, it's great to be here. All right. First of all,

There's a lot of doom and gloom talk out there when it comes to how the workforce will transform due to AI. I think it's the single biggest question I get. I'm on a book tour right now, where people ask about it, and they're worried about it. They don't have a lot of information, but there is some merit to it. A recent survey found that 44% of companies expect some layoffs to occur in 2024 due to new AI capabilities.

As Scott always talks about, CEOs are always looking for efficiencies, and it makes sense if they can find them. Where do you fall? Are we too worried or not worried enough? I happen to be in the middle. I think Scott is probably in the middle, too. But where do you fall? Yeah, I think I'm also in the middle. I think there's a lot of hot takes that are pretty extreme. So at one end,

There's utopia, where our biggest problem is how to find meaning when we don't need to work anymore. And, you know, drones are dropping things at our doorstep and, you know, we're reprinting our food. But at the other end, you know, there's some kind of dystopia. And there's a lot of different versions of that dystopia, actually, you know, robot wars or other things. But

You know, even if we imagine sort of peacetime, you know, you may have a highly capital-intensive world, an economy where a few get rich and the rest become irrelevant. If you don't need workers, they lack political power. And, you know, we don't do a good job with redistribution. And there's like mass unemployment, which then, of course, can lead to unrest. So,

You know, those are pretty extreme, although science fiction kind of gives you some ideas of what to imagine. But my own view is much more in the middle. One thing, the utopia is a bit unrealistic because it leaves out the economics and politics of how everything gets done, like how do resources get allocated and who actually governs us.

But the dystopia doesn't seem imminent either because there's so many bottlenecks and constraints on the path to universal adoption of a new technology. We still have to fax stuff. And so there's just a lot of frictions on the way to being able to achieve mass adoption.

So I'm really more focused on short-term worries, like how do we help people make the transition as certain jobs are likely to be displaced, and how to include more people in prosperity. Nice to meet you, Susan. So when you think about technology and its impact, or new technologies, whether it's automation or different agri-farming technologies, it generally follows the following curve: there's some short-term job destruction.

And then those efficiencies, and that capital or those profits, are redeployed, and you typically end up with net job growth. And I don't see why this technology would be any different. What's different here? Shouldn't this result in net job growth eventually? So, I mean, it's certainly possible that you can end up in a more capital intensive economy. And so, you know, there's no

necessary reason that it has to go one way or the other. But in the end, you know, there's a lot of things where humans can be productive and they even can be productive in a world of robots. So if you think about taking care of older people or taking care of younger people,

You know, that's a nice example because it tends to sort of scale with the size of the population, and we're going to have young people and we're going to have old people. And the ratio of humans looking after other humans can really only go so low. So that just suggests to me that the marginal product of humans isn't going to go down. And in fact,

You know, we may find ways to augment humans so that they can be productive longer or be productive with less knowledge. So, for example, for elder care, more focused on the well-being side of things rather than just, you know, the physical care aspect.

So I think, you know, I agree with you that we just can't necessarily imagine what the new equilibrium looks like, but it seems hard for me to imagine that there aren't things for humans to do. And also, until the AI solves electricity and, you know,

figures out how to make a lot more chips, we will still have constraints on the resources that are used for computing. Leading into Scott's question, what are, say, the positive features it would bring to the workforce that we're overlooking? There's been a lot of AI professionals who are doing the doom scrolling. So one thing is that it can be stressful in a job if you're worried about making a mistake.

Also, certain physical aspects of a job can be challenging. And a lot of jobs require a lot of training to avoid making mistakes.

One thing that AI can do is it can help you be good at a job with a little bit less upfront training, and it can also help you avoid mistakes. Yeah, one reason that seniors want to stop doing their jobs is that if they start having some memory problems or worry about overlooking things, that's very stressful for them. They don't want to do a bad job or hurt someone in their job.

Like they want to, say, work. They might want to do child care, but then they're worried they might, you know, forget something or forget the kid in the car. Like the child, which would be a problem. Exactly. But it's the same with elder care, which requires the same kind of attention to detail. But in the end, if drugs can be dispensed automatically and if you have some kind of, you know, safety monitoring going on, then it can be possible to really reduce or eliminate some of those risks and then give people the chance to

not just do something that's a fun hobby that keeps them busy, but something that really contributes to society and helps keep them engaged. So, an aide. You're talking about it like an aide. Now, you run a lab at Stanford with the thesis that technology can benefit humans. Tell us about that work as it relates to AI in the workforce. First of all, AI is a general purpose technology. So what that means, really broadly, is that the same innovations can be used for lots of different purposes.

And it could be distributed around the world and shared at relatively low cost. So the premise of our lab is that we will collaborate with social impact organizations. The organizations have a relationship with some end customers. They could be patients in a hospital. They could be people who need counseling. They could be workers who need transitioning or students learning.

The organization has that relationship with the end consumers. Then we try to take advantage of all the students we have at Stanford and all of the great technological capabilities that we have to build things for those social impact applications. The idea is once you build them, they can be shared and spread. More broadly, when you just think about it, someone who understands the capabilities of AI,

People have outlined so many potential threats. It becomes sentient, income inequality, job destruction. There's just so many different threats that people have outlined. Is there one threat, Susan, that you think is the most ominous that we should be focusing on from a regulatory standpoint and that academics should be modeling out? What should we be doing to potentially prevent a tragedy of the commons if there's a threat here? So I think there's a number of

individual tactical threats, and then there's some that are maybe a bit more systemic. So starting with the jobs, we have never been good as a society at following through with the redistribution that can make everybody better off. So international trade's a great example. Econ textbooks tell us why that's good, why that can make everybody better off. But often if people are attached to a location,

Individual humans are made much worse off. And there's a lot of evidence about the negative impacts of job displacement. I've done some of this research myself, and often the people in the worst locations and the worst industries, if they lose a factory, they can be very badly off even 10 years later, while the people who are more educated and mobile are able to move locations or find new jobs.

So what we see here is that over the last 10 or 15 years, we haven't seen as much productivity benefit of all of the computing advances in the numbers, but we have seen a lot of firms lay the infrastructure. So they are in the cloud now, they're using software as a service, and that makes it much faster for them to adopt a certain technology if it comes quickly. So think about human customer service agents.

If there's software as a service and firms are already kind of plugged into that software as a service for their current humans, you could see many firms all at once kind of replacing their humans. And so those people, especially if they are in

areas of the country where the call centers were put because labor was cheap, because there weren't a lot of other jobs, or in certain countries that specialize in this, they could get hit hard all at once. And that can be very disruptive. And in the political environments we have, not just in the US but around the world, it can be very difficult

to do the redistribution that might help everybody share in the benefits, because somewhere else there may be lots of benefits, although they might be accruing to a smaller number of people if it's a capital-intensive replacement. It might be the software engineers who are making the improvements. So that's like an economy-wide thing. And then there's a bunch of tactical issues as well.

I think one, you know, is disinformation and misinformation; that's a really big issue. We need people to be invested in democracy in order to, you know, go through any transition. We need people to think about hard problems. And if we have hard problems with trade-offs,

you know, and then people are just kind of being polarized in the process, we won't be able to have the kinds of societal discussions we need.

And then, of course, there's all the security threats. Those are also, you know, a bit scary. And we've never been very good at investing to prevent problems. We're good at reacting. But some of these problems may come so fast that we are kind of left flat-footed. So we're going to go on a quick break. And when we come back, we'll talk more about AI's impact on the workforce, including which industries will be most transformed in the next five years due to AI.

Support for this show comes from Constant Contact. The internet is a funny tool. If you run a small business, it brings countless new ways for you to get your name out there. So many, in fact, that actually leveraging those channels of communication can get overwhelming fast. It might even feel like you need a marketing degree and an extra day of the week to get any movement at all. That's why Constant Contact does the heavy lifting for you.

Constant Contact's award-winning marketing platform has powerful tools that make it easy to grow your audience, engage your customers, and sell more to boost your business. In just a few clicks, you can launch a marketing campaign that's tailored to your business and goals. That includes email, social, SMS, and more. So you can sell more, raise more, and fast-track your business growth. And you can count on Constant Contact's award-winning customer support for guidance along the way.

So get going and start growing your business today with a free trial at ConstantContact.com. Constant Contact, helping the small stand tall.

Scott, we're back with our special series on the future of work. We're talking to Stanford professor Susan Athey. So we're going to talk about healthcare in a second, because I think it's probably the one that's going to be most transformed. But what industries do you think off the top will be most transformed? So one thing to look at, first of all, is just the industry of AI. And we have a lot up in the air right now about how that's going to shake out. So we do need to be aware of that.

how concentrated that industry is going to be and whether there's going to be a good environment for startups to be able to create services. And we even see it in my lab at Stanford. We're building services that can be used for social impact, but that requires tools. AI is a business. AI is a business, but AI, of course, transforms everything around it.

We are seeing some of the earliest adopters being, say, software as a service firms that serve a lot of customers.

And so the ones that get ahead in AI can have a higher market share. And all of the things, all the infrastructure around it, will be transformed as well. Right. So there's a business. So healthcare is a hot topic, obviously, when it comes to AI disruption. The market for AI in healthcare is projected to reach over $170 billion by 2029. But 60% of Americans say they would be uncomfortable with a provider relying on AI. Right.

Speaking of hot topics, you did a trial using digital counseling to help patients choose contraceptive methods. Talk about the overall picture, and then what you did there and how it went.

So in the developing country context, it can be very difficult to recruit enough nurses that have a lot of education and experience. So this was a digital assistant for the nurses to guide patients through a counseling session. It made sure that the patients were able to express their concerns about side effects as well as their desires.

And then it provided a ranking of options. And we compared a method where the app provided a ranking versus one where the patients just led the discussion. And we found that when the app provided a ranking that was responsive to what the patients wanted, the patients spent more time

evaluating the options and had higher satisfaction and were more educated about their options. But interestingly, the nurses also liked it, and they felt that they learned from using the application,

because it's not very satisfying to counsel people when you're not sure you're giving them the very best information for them. And it can be hard to hold in your head all the different combinations of side effects and concerns people have. And so the application sort of helped them do a better job helping their patients. And so I think that's a general trend. And again, in the developing context or in places where resources are tight, you can potentially get better information to people.

But crucially, there was still a human in the loop who could interpret all of it and could answer questions and help people feel comfortable with the information that they were getting. And also, the patient was more engaged as a result of this and was participating more in their decision making. And I think that's a much more likely short-term thing, because it's really hard to get technology

to avoid errors. And if you're in a high-stakes environment like healthcare, errors can still be very costly. So having an assistant to the provider to make better choices is something that seems imminently likely. - So somewhere there's gonna be a lot of change.
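
A minimal sketch of what a preference-responsive ranking like the one Athey describes might look like. This is purely illustrative: the method names, attribute profiles, and scoring rule below are assumptions, not the trial's actual model.

```python
# Illustrative sketch of ranking options so that methods matching a patient's
# stated preferences rise and methods with side effects they are concerned
# about fall. All names, attributes, and weights are made up for illustration.

from dataclasses import dataclass, field


@dataclass
class Method:
    name: str
    # Hypothetical attribute profile, 0 (none) to 1 (strong).
    attributes: dict = field(default_factory=dict)


def rank_methods(methods, concerns, preferences):
    """Rank options by preference match minus concern penalty."""
    def score(m):
        penalty = sum(w * m.attributes.get(a, 0.0) for a, w in concerns.items())
        bonus = sum(w * m.attributes.get(a, 0.0) for a, w in preferences.items())
        return bonus - penalty
    return sorted(methods, key=score, reverse=True)


if __name__ == "__main__":
    options = [
        Method("Method A", {"nausea": 0.7, "long_acting": 0.2}),
        Method("Method B", {"nausea": 0.1, "long_acting": 0.9}),
        Method("Method C", {"nausea": 0.3, "long_acting": 0.5}),
    ]
    # Patient says nausea is a big concern and wants something long-acting.
    ranked = rank_methods(options, concerns={"nausea": 1.0},
                          preferences={"long_acting": 1.0})
    for m in ranked:
        print(m.name)
```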

And what about for teachers? The stats are pretty staggering. 60% of educators use AI in their classrooms already. You specifically experimented with teaching children to read using news feeds, no less. Tell us about that. Yeah, so we worked with an educational application that was sort of like a Netflix or a TikTok for stories, for reading.

And when we first started working with them, they had humans curate the newsfeed to pick stories they thought would be interesting. But we built a recommendation system based on the students' past behavior and found 50% increases in the number of stories the students read.
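
For a rough sense of what "a recommendation system based on the students' past behavior" can mean in practice, here is a minimal item-to-item co-occurrence sketch. It is an assumption for illustration, not the lab's actual system, which would be considerably more sophisticated.

```python
# Minimal item-to-item co-occurrence recommender: recommend stories that are
# often read by students who read the same stories you did.
# Illustrative only; not the system used in the study.

from collections import defaultdict
from itertools import combinations


def build_cooccurrence(reading_histories):
    """Count how often two stories are read by the same student."""
    counts = defaultdict(lambda: defaultdict(int))
    for stories in reading_histories:
        for a, b in combinations(set(stories), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts


def recommend(counts, already_read, k=3):
    """Score unseen stories by co-occurrence with what the student has read."""
    scores = defaultdict(int)
    for story in already_read:
        for other, c in counts[story].items():
            if other not in already_read:
                scores[other] += c
    return sorted(scores, key=scores.get, reverse=True)[:k]


if __name__ == "__main__":
    histories = [
        ["dinosaurs", "volcanoes", "space"],
        ["space", "robots", "volcanoes"],
        ["dinosaurs", "space", "robots"],
    ]
    co = build_cooccurrence(histories)
    print(recommend(co, already_read={"space"}))
```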

And then we also used gamification to try to get them excited about it at the beginning, and showed that the students continued reading afterwards. And what I take from that is, like, look, the commercial sector has figured out how to get you hooked, you know, but a lot of that is detrimental. It's doom scrolling. It doesn't really make you happy. But we can potentially use those same kinds of tactics for good, for education, and to help you develop positive habits. So you touched on K through 12, Professor.

I think a bunch of us have been talking about and waiting for the impending disruption of higher ed. I've just been shocked how resilient and static it is. I think it's the same at GSB, but you walk into Stern and you could be walking in in 2000 or 2023, the classroom environment just hasn't changed that much, the curriculum hasn't changed that much.

Do you see any disruption coming? It just feels like so far it's been this fortress where the walls hold on to the business model. Do you think that AI is going to change higher ed? I had a colleague who actually fine-tuned an AI on all of his course materials.

And so it can answer questions about the course materials, but stays, as much as possible, focused on the course materials. And he said that it cut way down on the email questions during that semester of the course. So I do think that there's a lot of ability to get sort of basic repetitive questions answered in a more customized way.
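
The episode doesn't describe how the colleague's assistant was built. One hedged sketch of the general idea, keeping answers grounded in the course materials, is a simple retrieval step rather than fine-tuning, shown below with TF-IDF from scikit-learn. The passages and the final language-model call are assumptions for illustration.

```python
# Sketch of keeping an assistant "focused on the course materials" via simple
# TF-IDF retrieval. Illustrative alternative to the fine-tuning mentioned in
# the transcript, not a description of the colleague's actual system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_passages = [
    "Problem set 2 is due Friday and covers demand estimation.",
    "Office hours are Tuesdays at 3pm in room 101.",
    "The midterm covers lectures 1 through 6, closed book.",
]

vectorizer = TfidfVectorizer()
passage_matrix = vectorizer.fit_transform(course_passages)


def retrieve(question, k=1):
    """Return the k course passages most similar to the question."""
    q = vectorizer.transform([question])
    sims = cosine_similarity(q, passage_matrix)[0]
    ranked = sims.argsort()[::-1][:k]
    return [course_passages[i] for i in ranked]


if __name__ == "__main__":
    context = retrieve("When is the midterm and what does it cover?")
    print(context)
    # In a full assistant, `context` would be handed to a language model
    # (call omitted here) with instructions to answer only from these passages.
```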

I also led a study at GSB of how AI could be used, and we felt like we're generally in the early stages. There's been a lot of experimentation in terms of, you know, how do you integrate it rather than outlaw it, while still getting people to learn the concepts? One example of that is coding syntax. So there are some people who are CS

students, and they need to learn to code. But we need MBA students, we need business people, to be able to think about how coding works, because there's going to be digitization in every single industry going forward. But, you know, MBAs aren't so super excited about learning lots of syntax, and you waste all your time just teaching details. And now the coding assistants can help you with the syntax, and

That can help people move much faster and get to the interesting stuff, the thinking part, and not spend as much time on the syntax part.

But there's a downside because they can also more easily just skip over it without thinking at all. And so I think we're going to be really challenged as professors to really change the way we assess students so that we ensure that they still get the conceptual part, but don't get bogged down with the part that may be less important in the future, like missing commas and where the curly braces go is just not where it's at, you know, basically starting now.

What do you advise students in terms of trying to prepare for an AI future? Outside of taking courses in AI and spending time with different LLMs,

Do you think it's going to impact the way, I don't know, the skills we emphasize in terms of preparing for a more AI-enabled future? I think logic is going to be very important. The AI is good when you break down a task into a part that a robot would be especially good at. It's often like a repetitive task, and it often involves something where you can measure success easily.

In a lot of tasks, though, measuring success is hard. And so it's often going to be the case that we can put AI on something where we can measure success. Thinking about how to measure success, and thinking beyond short-term clicking-type measures about what success really looks like, requires a lot of logical thinking. It also requires thinking about what sometimes people call second-order effects or equilibrium effects. So, you know,

For example, suppose I helped people get jobs by having them make portfolios. If everyone made those same portfolios, maybe they wouldn't be so effective in getting jobs, because part of what was signaling that you were a good worker was that you figured out how to make the portfolio. But if everybody does it, it no longer has the signaling value.

That's the kind of equilibrium thinking that is required to anticipate what happens when you put these things in place. And also just being creative in terms of how you measure success, if clicks and eyeball scrolls and stuff are not enough to understand success. So that kind of thinking becomes more important, not less important. That's a complement to AI. It's not substituted by AI. Yeah.
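
A toy Bayes calculation of that signaling point, with made-up numbers: if only strong applicants used to produce portfolios, a portfolio is a strong signal; as it becomes easy for everyone to produce one, the posterior falls back toward the prior and the signal stops carrying value.

```python
# Toy illustration of signaling value eroding as adoption spreads.
# Assumed numbers: 30% of applicants are strong; all strong applicants have a
# portfolio; a fraction f of weak applicants can now produce one too.

def p_strong_given_portfolio(prior_strong, frac_weak_with_portfolio):
    """P(strong | has portfolio) under the assumptions above."""
    strong = prior_strong * 1.0
    weak = (1 - prior_strong) * frac_weak_with_portfolio
    return strong / (strong + weak)


if __name__ == "__main__":
    prior = 0.3
    for f in (0.0, 0.25, 0.5, 1.0):
        print(f, round(p_strong_given_portfolio(prior, f), 2))
    # As f -> 1, the posterior falls back to the 30% prior:
    # the portfolio no longer distinguishes strong from weak applicants.
```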

All right. So in summary, should the average worker be worried about AI taking their job? I mean, if you were, people must ask you this all the time, right? You just say it depends or what do you say to them? Beyond the students, beyond people who are currently like, I'm getting replaced. I mean, I think if your job is to, you know,

create images and sell them or write ad copy or send repetitive emails to your customers and handwrite them, that doesn't seem like that's going to last very long. Now, what takes its place

Maybe managing systems that do those things, or measuring systems that do those things, but there may be fewer of those jobs. The people who are in those jobs may be more productive. But then, as I mentioned earlier, jobs may open up that previously had big barriers to entry, big training requirements, while those jobs might become

more possible for people to transition into. But as I mentioned before, the big concern is we are terrible at transitions. We are terrible at helping people through transitions, especially at the lower end of the income distribution. So what is possible

is not the same as what we're gonna actually choose to do. - And it could in fact affect people on the higher end, lawyers. - Absolutely. I mean, you see this already, just in research, document searches. We used to have lots of paralegals,

you know, go through stacks of documents. Bates stamping. Yeah. But now paralegals do keyword searches. That's going to get changed. But actually, we still use paralegals. It's interesting, though, we may use people in different ways. And some new lawyers are not getting the experience of, you know, reading the documents, just reading documents by hand. Like you can get by by only doing keyword searching and never just...

picking up the documents one by one and sort of seeing where your creativity takes you. Yep, that's a really good point. You still have to read it. Oh, not really. As long as you can get a summary of it by AI. Anyway, Susan, thank you so much. We really appreciate it. Okay, Scott, that's it for the final part of our three-part series on the future of work.

Read us out. Today's show is produced by Lara Naaman, Zoe Marcus, and Taylor Griffin. Ernie Indradat engineered this episode. Thanks also to Drew Burrows and Mia Silverio. Nishat Kurwa is Vox Media's executive producer of audio. Make sure you subscribe to the show wherever you listen to podcasts. Thanks for listening to Pivot from New York Magazine and Vox Media. You can subscribe to the magazine at nymag.com slash pod. We'll be back next week for another breakdown of all things tech and business.