AI is being used to speed up drug discovery, predict severe weather events, improve vegan food taste, and enhance digital pathology. It's also helping in areas like material science and biology, potentially solving fundamental human problems.
Near-term risks include the potential for AI to be used in creating bioweapons, geopolitical tensions due to AI chip supply chains, and the displacement of human expertise in various fields, leading to an 'obsolescence regime' where AI systems outperform humans in most tasks.
Public skepticism stems from a lack of understanding of AI's capabilities and how to use it effectively. Economic insecurity also plays a role, as people fear AI will take their jobs or exacerbate existing inequalities.
The obsolescence regime refers to a future where AI systems make human expertise obsolete. Companies, militaries, and governments may rely on AI workers, executives, and decision-makers, leaving humans as figureheads or unable to compete without AI assistance.
AI companions could lead to increased loneliness and mental health issues if people become overly reliant on them, potentially reducing human interaction and emotional growth. There's also a risk of these technologies being misused without proper societal safeguards.
AI slop refers to the overwhelming amount of AI-generated content on the internet, which could make it hard to find human-generated content. Solutions include increasing the value of human-created content and developing better curation and taste-based systems to filter and highlight authentic human work.
The challenge lies in creating a fair compensation system for creators whose work is used to train AI models. Mechanisms like compulsory licenses and attribution as a service could help, but the political and legal systems must be functional enough to implement these solutions.
AI progress continues due to advancements in data curation, synthetic data, and the increasing number of researchers working on the problem. While data and compute costs may rise, new methods like test-time scaling could enhance model capabilities without hitting a wall.
By 2030, AGI could lead to AI systems that are indispensable in daily life, similar to how people rely on smartphones today. These systems could handle complex tasks, potentially transforming industries and making human expertise less critical in many areas.
The government could facilitate immigration policies to attract AI talent, fund academic research to bridge the gap between industry and academia, and implement targeted regulations to mitigate risks like bioweapons without stifling innovation.
How can a microchip manufacturer keep track of 250 million control points at once? How can technology behind animated movies help enterprises reimagine their future? Built for Change listeners know those answers and more. I'm Elise Hu. And I'm Josh Klein. We're the hosts of Built for Change, a podcast from Accenture. We talk to leaders of the world's biggest companies to hear how they've reinvented their business to create industry-shifting impact.
And how you can too. New episodes are coming soon, so check out Built for Change wherever you get your podcasts. This is Andrew Ross Sorkin with The New York Times. You're about to listen to some fascinating breakout conversations from our annual DealBook Summit live event, which was recorded on December 4th in New York City. You're going to be hearing experts, stakeholders, and leaders discussing some pretty vital topics that are shaping the business world and the world at large. ♪
Hello, everyone, and welcome to the AI Revolution Task Force. This is, and I don't think I'm exaggerating, a historic gathering. Here with us are 10 of the leading voices on AI today, a carefully selected group of all-stars and experts with deep experience working on AI capabilities, AI safety, AI investing, and AI policy and governance at some of the most influential organizations in the world.
As anyone who writes about AI for a living, like I do, will tell you, AI is a controversial technology with high stakes that provokes stronger reactions than, say, writing about relational databases.
In San Francisco, where I live and I think where most of us live, discussions about AI have taken on a quasi-religious tone recently, with some groups warning of imminent doom and urging the government to step in to regulate, and other groups urging us to stop worrying so much and just embrace our new robot overlords.
My hope with today's discussion is to tease out some of those differences and debates and also try to maybe find some common ground as we look for a new path forward for AI, trying to do what we all want, which is to harness its potential while minimizing its risks. Okay, let's start with some introductions. Starting from my right over here, Dan Hendrycks, who is the director of the Center for AI Safety, is here.
Jack Clark, co-founder and head of policy at Anthropic. Dr. Rana el Kaliouby, co-founder and general partner at Blue Tulip Ventures. We have Eugenia Kuyda, founder and CEO of Replika. We have Peter Lee, president of Microsoft Research at Microsoft. I'm Kevin Roose, a tech columnist and co-host of Hard Fork at The New York Times. We have Josh Woodward, vice president of Google Labs. Sarah Guo, founder and managing partner at Conviction.
We have Ajeya Cotra, Senior Program Officer, Potential Risks from Advanced AI at Open Philanthropy. Next to her, we have Marc Raibert, Executive Director of The AI Institute and founder of Boston Dynamics.
And last but not least, we have Tim Wu, the Julius Silver Professor of Law, Science, and Technology at Columbia Law School and the former Special Assistant to the President for Technology and Competition Policy. I hope I got everyone's titles correct. Before we dive in, I want to give our audience here today a sense of the range of views that we have represented in this room about AI. And I want to do so by conducting a few live polls. So,
Please raise your hand if you agree with the following statements. Number one, there is a 50% or greater chance that AGI, or artificial general intelligence, which we can define for now as an AI system that outperforms human experts at virtually all cognitive tasks, will be built before 2030. Raise your hand if you agree with that statement. Okay, so we've got, I think, seven out of ten here agree with that statement. Okay.
We can get more into that later. Number two, raise your hand if you agree with this statement. Over the next decade, AI will create more jobs than it eliminates. Raise your hand if you agree with that statement. Okay, so about half and half. And then last but not least, number three, raise your hand if you agree with the following statement. If I had a magic button I could press to slow down AI progress to half of its current speed, I would press it. One, two, three.
Looks like you're maybe thinking over there, Dan. Okay, so only two button pressers out of the group. That's very helpful and illuminating. Thank you. Now let's get to some specific topics and questions. I want to start with the positive, more optimistic side of the AI ledger. So I'm hoping we can start on my immediate sides here with Peter and Josh. What are the
Maybe we'll start with you, Peter. What specific things are you seeing in AI today that strike you as beneficial or exciting real-world use cases of AI? Well, I think the thing that's really amazing to me is how the new AI architectures, things like transformers, diffusion models, they've been
so amazing at learning our way of talking, our way of thinking. You know, I call that learning from the digital exhaust of human thought and expression. It's pretty incredible. But what has really kind of excited me more recently is that those same architectures seem to be just as adept at learning from nature. And when I say nature, I mean things like
protein structures or chemical compounds, weather and wind patterns, big multicellular images for digital pathology. It isn't obvious that these AI architectures should do that, but they're doing that right now. That has just gotten not only us at Microsoft Research, but also great labs around the world, excited.
There's a level of intensity around the possibilities that we might be able to do things like drastically speed up drug discovery or find targets for drugs that are currently considered undruggable. Or we could predict severe weather events, days or even a couple of weeks in advance. Even mundane things like, I don't know, making your vegan food taste better.
or your skin tone, fresher looking. Things like that, I think we're just on the verge of seeing these generative AI things that have proven so great at chatting with you, at solving data problems, also learning from nature.
Josh, what about you? What are you seeing at Google that makes you hopeful or excited? Yeah, I think there's a couple of things. One is there's really an interesting moment, I think, where the ways to kind of express yourself or kind of drive creative early stage brainstorming in any modality feels really exciting, whether it's
getting to a first draft faster or even creating things in different mediums. That's one. I think the other one is the ability to transfer or almost transform content from one format to another. Really excited about that, what that might mean for education and learning and sort of the future of that. And then maybe a final thing I would say is I think we're just starting to get to a stage where
AI is starting to feel more personal where you can kind of guide it and direct it to things that you're interested in. And so this feels like a whole new chapter in terms of the types of applications and use cases you can go after. And your team is responsible for one of my favorite AI products of the year, Notebook LM, which is the way that you can upload your own documents
create your own notebooks. And then actually the feature that really took off was the AI generated podcast, which will probably put me out of a job, but I do enjoy it anyway. No, well, that's a good example, actually, with Notebook LM. So you bring your own sources into it. The AI becomes an expert on that.
So you really feel like you have kind of this AI research assistant focused on what you care about. And then you're able to hit a button and sort of turn it into different things. Yeah, thanks for using it. Yeah, no, it's great. Sarah, what about you? You're investing in a lot of AI startups. You're seeing the things that people are building using AI. What are you excited about these days?
Well, we were just talking before this about some of the advances that are happening in material science and biology. And I think there are very fundamental problems for humanity that these architectures apply to. And that is very exciting. Things that are not mentioned that we're investing in...
I think AI is a very democratizing technology. And so there are many skills today that are very high-skill, human-labor intensive, that people cannot access, from expensive legal counsel to the ability to generate videos, to access to specialist doctors, to teachers, right? And maybe we'll take the last one as an example. A lot of people will be familiar with, like, the Bloom research that says you can take
you know, a median student and improve their performance, like more than a standard deviation with one-on-one tutoring. We've just never been able to achieve that in the educational system. And so I think it's a pretty good example of, well, like, what if you can, right? What if you can give everybody a personalized tutor? What if you can give everybody personalized medicine? And I think that the ability of AI to automate much of those tasks is something we're really inspired by. Has anyone here used AI to teach themselves something recently? Jack? Yeah.
So at Anthropic this year, we made some really good coding models and they seem to have taken the fear and difficulty out of interacting with software itself. So I'd let my development environment at Anthropic bit rot and become unusable. And usually in that situation, you have to ask an infinitely patient colleague to help you debug
your development environment, which you've horribly messed up. But in this case, I just debugged it by sending screenshots to Claude and saying, help me, Claude, I have horribly messed up my development environment and need to fix it. And in about 15 minutes, I was able to get back to programming and building software again. And that was really remarkable to me, because a year or two ago, you'd need to pull a colleague over to do this stuff. And now I just see
all of my colleagues using this technology and others to make it easier for them to get back to work. And I think that's a total change. Yeah, that's really great. Now let's turn to the negative side, the risks, some of the things that people are worried about with AI. Dan, you have spent years working on AI safety and warning about potential risks of very powerful AI systems.
What specific near-term risk or scenario are you most worried about? Well, if you asked me a year ago, I would have said bioweapons. Using AI to create a bioweapon. Yeah, yeah, yeah. So you could imagine like an expert level virologist AI walking somebody through the steps, helping them troubleshoot, things like that.
That risk source maybe isn't as concerning to me because we have better defenses against jailbreaking. So people can't coax the model into giving them those instructions nearly as easily anymore. So consequently, I think more of the risks I'm concerned about are on a longer time horizon.
like maybe a few years from now, as opposed... So the bio capabilities might be there next year, but so long as these are behind an API and have all these filters and all these basic guardrails in place, then this may not be too much of a hazard. But the geopolitics of this in a few years is very concerning to me. Like, for instance...
If China invades Taiwan later this decade — that's where we get all of our AI chips from — the West could very plausibly lose access to them. China is building up their own capacities on chips and the U.S. not really so much. So that would be a fairly plausible world where the U.S. would summarily fall behind. So that's been on my mind more recently. Mm-hmm.
I'll get to you, Peter, in just a second. But I want to ask Ajeya, too, because you're also someone who's spent a long time thinking and writing and talking about risks from AI. And you had a phrase recently that has been sticking in my brain, which is the obsolescence regime. Explain what that means.
So I spend a lot of time thinking about a regime in which AI systems have made human expertise obsolete. So a regime in which if you want your company to be competitive, you have to have AI workers and AI executives. And maybe you have a human CEO, but they're a figurehead and they have to basically listen to their AI advisor that is able to keep up with what's going on better than they can.
If you want to win a war, you need not only AI-powered drones, you also need AI tacticians and AI generals. Or if you have a human general, they're just a figurehead and they have to listen to the AI general that knows what's going on better. If you want to enforce the law in a world full of AIs or even make law in a world full of AIs, you need AI policymakers and AI experts.
lawyers and AI policemen, essentially, to keep up with the world we're creating in which all the businesses and militaries and everyone else is also using AI. So I'm kind of imagining this regime in which to refuse to use AI or even to spend too much human time double checking its decisions would be like insisting on using pen and paper or not using electricity today. Like you just can't do it.
And in that world, I think about what could that mean systemically if everyone is forced to deploy AI to the fullest extent, to the fastest extent possible. We would be living in a world where these AI systems are our agents running around doing things over extremely long timescales, very autonomously on our behalf, faster and perhaps smarter than we can comprehend and keep up with.
And I think there are — well, I just think that world is, you know, before getting into any of the specifics, a kind of unsettling, potentially unstable world. And I think there are a lot of risks that come with that that are more specific, such as, for example,
I think a lot of today's world relies on human judgment and human leeway in a bunch of important little ways. So Dan was talking about the risks of bioweapons earlier. I think a big reason that we don't have nasty engineered pandemics every year is that there are some tens of thousands of human experts that could create such pandemics, but they don't want to. They choose not to because they're not sociopathic.
But in this future world that I'm imagining, you would have expertise at the push of a button that could potentially be perfectly loyal to any individual who has the power, the money to buy computers to run those experts on. And I think about that in the context of, say, democracy. President Trump, in his previous term,
tried to push the limits in a bunch of different ways, tried to tell people underneath him to do things that were norm-violating or illegal, and they pushed back. If all those people under him were instead, you know, 10 times as brilliant, but perfectly loyal, programmed to be perfectly loyal, that could be a destabilizing situation, because our society
is sort of designed around some give and some autonomy from humans. And then on the other side, if those systems aren't loyal, that's its own can of worms. You know, if they're running around pursuing goals that we think we understand but we don't actually understand, then we can end up in a situation where all these AI systems collectively
control the means of production, essentially. They're running the economy. They're running our militaries, our governments. And if they were to kind of go awry, we wouldn't have very many options with which to stop that transition.
So, okay, so we have the risk of novel bioweapons being created by AI. We have the risk of sort of the disempowering of humans because AI is just so good at so many things. Jack, are we missing any big risks here that people are worried about when you talk to them on Capitol Hill or elsewhere? I think the lesson from social media is that many of the most pernicious risks are really hard to predict and are going to be broader
and less specific than what we're seeing here. I mean, I recently became a father and I was suddenly aware of how strange social media is and how I'm really glad social media didn't exist when I was an adolescent, a nervous one at that. And so I think that AI is going to massively change the kind of
ecosystem in which people grow up. And I expect many of the things that we're going to find most destabilizing will stem from those kinds of broad and hard to anticipate changes.
I totally agree. I think the social side of AI is widely under-discussed. Rana, do you have thoughts on that? I was going to build on the social kind of implications of some of these technologies. I'm very passionate and excited about the use of AI companions and AI co-pilots and agents to help people become healthier and more productive and more knowledgeable.
And I can see the opportunity where some of these technologies can help with mental health or loneliness. But I also worry, and I'm so glad you're sitting next to me because I feel like you are doing it the right way. But I am worried about companies who are building AI friends and AI companions without really thinking about the societal end. You know, like this can go really wrong. And I have a 15-year-old.
Does anyone else who has children think that you would not want your child having an AI friend?
Explain, Eugenia, what Replika is and what you're building and maybe address some of the concern that Rana just shared. Replika is the original AI friend. We pretty much created the industry of AI companionship, starting Replika eight years ago now. It's basically a platform that allows you to come and create a personal AI friend that you can talk to and build a relationship with anytime you want.
And who is using Replika? Are they young? Are they adults? What are they using it for? You actually don't allow young people on your platform, right? Yes, we actually don't allow anyone under the age of 18. And we're very strict about it. We've implemented a lot of age gating over the years. It comes from my personal belief that we're just not ready. We shouldn't be experimenting on minors.
I have two young daughters, and I know I don't want them to have one today yet, just because there isn't the research. We don't know how to do it well for adults on any of the AI companion platforms. And our power users are generally 30, 35 plus. And these are people who, for some reason, are either alone
a lot of the time or they experience some sort of loneliness. A lot of them are in relationships, a lot of them have families and friends; they just feel lonely. And so they're yearning for a connection, for a relationship. Does anyone here have an AI friend? Ever spent time with this? I set one up for research purposes, but I wouldn't call it a close friend.
Your AI friend is very offended. Peter, you were going to say something earlier. Yeah, you know, Dan mentioned the bioweapons risk. I agree with him that there's a lot of deep thought going into that and mitigations. But I still would predict that, say, in the next five years, that
more than one, maybe dozens of countries will put in very severe restrictions on the use of AI in biology research generally because of that risk. And, you know, I think it wouldn't surprise me if, you know,
three dozen countries even banned the use of generative AI tools in biology research because of that risk. And the other risk, getting back to the social aspect, I really agree with that very strongly because in that same timeframe, it wouldn't surprise me if a generative AI system made some new discovery in some science field like biology, wrote the paper on it, submitted it to, I don't know, Cell or Nature, some journal,
and it got accepted. And I think that even for the science, the human scientist community, that's going to be a moment that will test people.
Jack? I'd just like to build on what Peter said, where my colleague and CEO of Anthropic, Dario, wrote an essay recently called Machines of Loving Grace, where he talked about a lot of the good things we're going to get from this. But I think he made a useful point in that, which is he thinks as these systems get more advanced, you could end up in a world where you have, he said, a century of scientific progress in 10 years. And I think we're actually going to see that effect
play out in almost every part of the world — not just science, but in our societal experience of talking to AI systems like the ones that Eugenia's building, in our experience interacting with the sorts of AI corporations Ajeya talks about. And I think for this discussion, what we're grappling with — and good luck to us all — is how do you solve the problem of, like, an everything machine, which is the thing that we're, uh, you, Kevin, are going to guide us to the answers on.
Well, I'll do my best. Marc? Yeah, I wanted to go back to what Peter said about countries banning the use of AI for bio development. I don't know. I guess I'm more on the side of worrying about too much regulation now
killing off opportunities. I think technology can solve a lot of problems, ones that we can't foresee, and that I think there's a big risk of that happening. You know, presumably some of those countries' scientists would be developing cures for disease or improvements like you were talking about at the outset. And I think in almost every area you could imagine...
AI helping develop technology that's useful. In my own technical area, I'm more of an experimentalist, and what we believe is that until you do the experiment, you don't know what the answer is. It's very hard to predict. I think us predicting today where a technology could go, especially as it's emerging at such a fast pace, is really a risk.
Tim? I had a question for Peter, which is, where does your intuition come from that you think countries will be interested in banning research? And I say that because it's pretty rare, having worked with the federal government, that we ban research. Nuclear fission is probably the strongest example, and even that was carefully limited rather than eliminated. So it's pretty rare. And I'm
curious where that intuition comes from. Yeah, it is exceptionally rare, but already you see hints of this, for example, in things like the EU AI Act, which I think broadly is a very important thing that has been put together, and it kicked off a thought process — a regulatory and policy thought process — which I think is crucial. So what I'm about to say is not in any way to say that that's a bad idea. It's a very good idea.
But there you already see hints, for example, in limits on the number of computing flops, you know, floating-point operations. You know, at 10 to the 23, you can do certain things in biology research. At 10 to the 24, you can't, and so on. So you're starting to see hints of an emerging regulatory sort of framing that I think there's a reasonable chance that,
I agree with Marc. I hope it doesn't happen, but I think there's a reasonable chance that these could be codified into law. If what you're saying is that Europe's going to ban a lot of things, I think that's a pretty safe bet. Maybe more specific on that, but I... I'm really into that. Okay. Yeah, I wouldn't have the same bet for the United States. That's all I'm saying. Dan? Yeah, I think you could capture many of the benefits and still reduce those risks by targeting them. So you could...
add restrictions on some specific, more esoteric topics, like in virology — in many other aspects of biology, there aren't guardrails there. And then if people are wanting to access those virology capabilities, then it's like, okay, are you a researcher who has a reason to access it? So a know-your-customer regime could do that. So I think there could be some ways that you can get rid of a lot of those bioterrorism risks
and still let researchers do what they want. So that makes me optimistic that the guardrails that we're placing on these systems won't be overly onerous or have much economic impact. I have
two questions, one sort of aimed at the more optimistic members of this task force and one maybe at some of the more worried or anxious members. For the optimists, I think despite a lot of the reasons for optimism that we've shared today, the general public, at least in America, is quite skeptical of AI. Pew data from a recent survey suggests that 52% of Americans are more concerned than excited about AI. That's up from 38% just two years ago.
If AI is as great as we all might think it is, why do you think there's so much fear out there? And what can the industry do, if anything, to convince people that this technology will improve their lives?
I can take a stab at that. I think a lot of the times people don't really understand what this technology is and they don't understand how to use it and harness it properly in their everyday lives, both professionally and personally, but they also don't understand how it works. Basic AI literacy is so important — going behind the scenes
and just understanding the basics of how data and algorithms come together to create this technology. So I'm really committed to that. As you know, Kevin, you've been on my podcast, Pioneers of AI. I'm really committed to simplifying and making AI accessible to a much broader audience and making it more inclusive. So that's something I'm very passionate about.
You know, I probably make two points here. I would posit that your average American, like if you ask them, do you want us to limit your access to AI capabilities? They would say no. Like, I want to be able to be more intelligent and to be more capable and to be more creative.
And in many cases, like I want that capability to exist for scientists with the right guardrails and such. And so you might ask, well, like why do you want other people to be restricted from this? And the answer is because I am afraid of replacement or because I do not trust them or do not trust that the appropriate safeguards are there. Right. So I'd just separate those — I don't know that the fear of AI is so personal.
And I would also say like it's quite typical when you look at technology historically, right? Everything from trains to electricity like was considered the end of the world at some point. And so I'd say I do think this is the largest technical revolution we've ever seen. And I'd also say that sort of end of the world fear has existed before.
And so I think the thing to address is actually the very specific concerns. And I would add, like, cybersecurity to this as well. But to me, it might be cyber, it might be bio, it might be national security. And, like, how quickly does this transition happen so that there is opportunity for Americans and for others? But I don't know that it's like an individual rejection, really. Maybe one other
concern I'd have here is mostly, and to Marc's point as well, regulation mostly happens at a national level. There's very little agreement amongst countries as to how to restrict this technology. And so I think a pretty important consideration is where there is regulation, it is often at risk of creating capture problems —
regulatory capture of an industry by larger organizations versus allowing creative exploration by individual experimentalists or smaller companies. And I think the likelihood that everyone, you know, all 200 nations in the world, restricts use of AI for bio over the next X years is very low. And so the question is, if you really are concerned about
safety, there's some balance of capability and America being ahead versus, like, what are you really controlling at a global level? I want to say a couple of things. First of all, about people being scared. Recently, there were some stats that less than 10% of Americans actually use ChatGPT somewhat regularly. So I think people just don't really know yet, quite yet, what they're talking about. I think there's a big gap —
the interface, or some sort of consumer interface so people can start using these models and understand how to use them, is lagging behind. The models already have a lot of capabilities. People are not using them in daily lives. And another thought is on the existential risks. I think people talk a lot about
bio and AI taking all of our jobs and some Terminator AGI. But oftentimes the existential threat of AI is not what we see in sci-fi movies. In my view, the AI companions are potentially one of the most dangerous technologies that we've ever created,
almost posing an existential risk to humans. Because imagine we do build these AIs that make us all really productive and help us work and so on — we
thrive at work, but then slowly die inside, because now we have these perfect companions and we don't have any willpower to interact with each other. I think that is quite possible. It does remind me a lot of the beginning of social media, when a lot of focus was on what this tech can do for us and very little on what this tech can do to us.
And, you know, now we're dealing with unintended consequences. Well, now I have to skip ahead to the social AI part of this, because you just said, if I heard you correctly, that companionship AI could be one of the most dangerous technologies we've ever invented. You are the head of one of the leading AI companionship companies. If you are so worried about this technology, how are you building it to be safe and why do it at all?
Great question. Thank you. But just like any technology, it's always a double-edged sword. If it's really powerful, it can do both. Nuclear energy — you know, my dad worked in Chernobyl — but it can also be the one way for us to, you know, save our planet. Here, the same thing. I do believe we're past the point. We're going through a very, very bad mental health and loneliness crisis that is killing us already, yet it's not
maybe always something we talk about. It's not always on the front page of every newspaper, yet it's really, really happening. And I believe we need a more powerful technology that can bring us back together. And I do believe that AI Companions built with a goal that's fully aligned with humans, in my view, that is something along the lines of human flourishing, with that goal in mind, can bring us back together.
I really like the way you think about this and frame things because to my mind, the safety risks and potential downstream harms only make sense as a discussion
in the context of specific use cases like this. And so I really like what you say, probably better than your publicist does for your company. And so talking about AI itself is sort of like talking about copper wire. And so we know copper wire carries electric current really efficiently from point A to point B, and that makes so much possible.
Yes, it can shock people, so we need some standards and insulation regulations and so on. But talking about the risks of copper wire just doesn't make sense. It's what you do with it. And that leads to, well, what does it mean for putting copper wire in your house? What does it mean for putting it underground or on telephone poles? And all of that kind of stuff.
And so the thing that really resonated with what Eugenia was just saying is she's thinking about those risks in the specific use case of a companion for people. And it's that type of discussion. That means we need thousands of these types of discussions. Ajeya, and then I'll ask another question.
Yeah, I've heard this claim a lot that we have to go kind of application by application. And, you know, the analogy to other technologies makes some sense. But I think that it is actually a mistake to think about AI in this way, at least a certain class of AI systems.
Because I think over the next few years, we'll see people have generalist AI agents that are working on their behalf in a bunch of different ways. So I don't know if you agree with this, but I think in the next five years, we'll see people and companies make use of AI agents that have access to emails, have access to bank accounts, can make phone calls, can move money around in the world.
can write and execute code on computers and remote computers, can act like an executive at a company could. And that is a more dynamic system with a broader range of autonomous motion than copper wire or any other technology that we've invented before. So in some sense, you could say we have to think about the risks on an application basis. But then I would say the applications that I am concerned with
are AI agents with extremely broad executive action. Could I ask about, you asked why people were afraid, that the public seems to be generally afraid. And I wanted to see if I could drag the media into that. I was wondering when someone was going to blame the media. I was going to do it myself if no one did it. But yes, go on. I was waiting, Pooh. Thank you, Mark. I had two examples to use. One was
You know, my company makes robots. And if you look at our YouTube channels, we get overwhelmingly positive likes versus dislikes. But if you read the stories that the media writes about them, over half, depending upon which video it is, put the word terrifying in the title of their story.
And, you know, that troubles me as an influence on the attitude of the public, that they're being encouraged to see it as terrifying, even though I don't think that there's anything in the video inherently terrifying. To be clear, these are the famous Boston Dynamics videos of robots and robot dogs, you know, kicking soccer balls and doing parkour and dancing and getting back up after they're pushed over and stuff. And those are the kinds of videos you're talking about? Yeah, those are the ones I'm talking about.
I have a burning question for Marc. Yes, go for it. Okay, because I'm excited about this embodied AI world where we're going to see these large language models go into robotics. But I spent my career building artificial emotional intelligence and just
machines that have empathy and emotion. And none of the Boston Dynamics robots have empathy or EQ. So have you thought about that? So my new org, which we're calling the AI Institute, even though it's going to go through a name change shortly, is specifically aimed at combining more cognitive and emotional function with the physicality and the athletic stuff, although we're doing the athletic stuff too. So yeah, we've been thinking about that. And of course,
The degree to which the last few years of AI development has made the cognitive side easier to have, that's a real boon. I think there's still a gap, though, that getting the physicality to accelerate is still somewhat of a puzzle and a lot of the current AI technology
is optimistic about that, but not quite as... I don't think the proof of the pudding is there yet. Why is that? This is something many people have noticed, that all of the progress or most of the progress we've seen over the past couple years in AI has been in software AI. Robots like the kind you all were building at Boston Dynamics have not advanced nearly as much. Why is that? I think they're still advancing at a good clip. I don't think the
public and the media are talking about it as intensely as they talk about the AI, quote, revolution. I've been studying AI for a long time, and AI has gone like this over the years. There's sort of an up and down hype cycle
every 10 years or so. So we'll see, you know, we'll see how long this cycle lasts, even though there's clearly a huge set of advances. But I wanted to pick on the media one more time before I let it go. You know, you at the beginning asked us the question about AGI. And I think there was hidden in your question the assumption that there might be something wrong or terrifying or intimidating about there being AI smarter than people.
Is that right? I didn't say any of those words, but I can see how you would get that. So I think that there's been a lot of discussion about AGI. And I think behind the scenes, the suggestion is that that would be a problem if AI got smarter than people. But I don't necessarily think that. So maybe I'm asking the group: is the prospect of AGI a threat?
Does anyone here think the prospect of AGI is a threat? I take a different spin on it, which is it starts to explain the answer to your question of why many people are suspicious or concerned about AI. I think in some ways Karl Marx was right that economic insecurity explains a lot of history.
And, you know, whether people are explaining it one way or another or whatever it is, people are — you know, the last 40 years have not been that great for the average American worker. You know, their income is basically flat. Whole regions have been sort of destroyed. It seems all the money goes to the coasts and so forth. And they look at AI and they, you know, look at an AI that's better than a human and they say, how's this going to work out for me?
And, you know, I think like we're actually under discussing the macroeconomic questions and they're not like safety risks. It's very different than, you know, the question of whether you're going to invent a virus that takes over the world. It's more of the sort of macroeconomic trend.
And, you know, things can go in different directions. You know, the plow was a big invention — it made a lot of farmers able to be self-sustaining. But something like the cotton gin, you know, reinforced the plantation model and led to terrible life conditions for a lot of enslaved workers. So I think people, they may not always express it that well, but I think the economic insecurity that a lot of people feel is driving a lot of the resistance to AI. We'll be right back.
I have a question that I want to aim at the more worried folks in the room. A lot of you have been warning about AI risks for quite a number of years. Jack, you were part of the team at OpenAI that had concerns about whether GPT-2 was safe. That was several versions ago.
Do you think that, and Dan, your organization was part of a group that organized an open letter calling for a six-month pause on the, no, that was not the- Wrong open letter. Wrong open letter. Oh, this was the pandemics and nuclear war. The one sentence. The one sentence letter, my mistake.
Is there a kind of crying wolf problem among AI safety advocates, where people are starting to discount these predictions because these systems keep being released and keep not ending the world? So I'll jump in on this. I think there's a good adage here from, you know, the world of finance, which is,
the market can stay irrational longer than you can stay solvent. Like you could predict a risk, but timing the market is always incredibly difficult. And at the launch of GPT-2, we were like, wow, this could lead to phishing scams or impersonation or, you know, text that is used in misinformation.
We were right, but completely wrong on the timeline. And I've since updated to be much more wary of making those kinds of specific predictions, because I think that you're never going to time it quite right, and it might lead to skepticism. But maybe to build on what Tim said earlier, what we do know is that increasingly powerful AI systems are
are going to be drivers of major socioeconomic change. And I think if you said to world leaders, hey, in five years or so, there's going to be multiple net new trillion dollar corporations on the planet that have never existed before and will suddenly arrive and have an economic effect, or even more, there's going to be a new highly productive, economically relevant country full of geniuses that just teleports onto the planet overnight.
They would take that extraordinarily seriously, not because they think those corporations or country are going to do really bad things, but because it's going to be a source of vast, sudden economic dislocation and change, which is a source of instability. And I think lots of what I have come to worry about is less these specific risks where you may be totally wrong on timing and specifics.
But I think we're underestimating how large this change is going to be and how prepared we may not be for it. I mean, yeah, at the time last year when that statement came out, I wasn't concerned about the models at the time, and they're roughly at the same capabilities level. Remind us briefly what that statement said. The statement was — I should have it memorized, but I don't — I think it was something like mitigating the risk of extinction from AI. Yeah.
should be a global priority alongside pandemics and nuclear war, or something like that. Basically, it analogizes it to other dual-use technologies like nuclear and bio that we've coordinated around as an international community. But so that was just to put it on the radar, get the discussion going, consider what's the option set for policy, and start seeing what's feasible. But the current systems don't keep me up at night.
I mean, I don't know what the systems in 2025 will be like. Maybe they'll first start to have some of those hazards, though. Maybe not. But I do think generally people are thinking that AI is the villain of the week — it's just another challenge, it's social media, or it's just like electricity. And I think it's a different kind of challenge.
I mean, just if you have the automation of all labor, eventually, that seems like that totally changes the human condition in a way that other technologies don't, just by definition. So I think it's, like Jack was saying, this is underrated as a disruptor to society. I just want to say I cringe at the idea of crying wolf because...
The idea that such a powerful new technology should not have people thinking really hard about the risks and potential downstream harms, about the dual-edged nature of it — it's so important.
You know, we were talking earlier. Actually, I have felt that OpenAI never got enough credit from the very beginning in trying to express that idea, express specific measures like audits and red teaming and need for government regulators to think about things. In fact,
I think in 2019, you were ridiculed for saying those things. Quite loudly, yes. Yeah, and it really took later work and open letters and so on to finally get people to wake up and really take that very seriously. Any powerful new disruptive technology absolutely requires that kind of care and thought.
Josh, I want to turn to you. One thing that we haven't talked about as a risk that I think a lot of people are starting to become quite concerned about is what is sometimes called AI slop, this sort of surfeit of AI-generated content that is starting to appear on social media platforms, on sites like YouTube. I've seen actually people starting to put up AI-generated podcasts generated through Notebook LM on Spotify and YouTube. And there's this fear, I think,
among some folks who worry specifically about Google, that the internet is just sort of being overwhelmed by AI-generated content and that it will soon be very hard to find content that is generated by humans. And it'll be hard for those humans to be paid to make that content because the economic model of the internet may start to break down. So do you worry about AI slop?
I think about it a lot. I try to avoid it. I would say a couple of thoughts here. One, I think the value of sort of human-created things will actually go up. And I think, just like we've seen in other industries, even labeling it, marking it human-created, or other things don't seem far-fetched to me. And so I think that value will increase,
partially because I think it will become scarcer when you have models that can generate things 24-7. Someone that's put their kind of human touch into it, I think, is one aspect. I think another thing I think a lot about is the value of taste, I think, will go up.
So, taste from a user perspective, but also how companies like Google and others rank content and surface, discover, retrieve it. So, I think those are two factors or things that are maybe even undervalued right now that I think will go up. Maybe the final thing I'd say is I think that
We're in the early days of trying to find ways to generate content with AI models. And I actually can see a shift kind of on the horizon where there, I think, will still be creators in the sense that we know them today on YouTube that you referenced or other platforms. But I think there will be a rise of almost curators, right?
that curate types of content and work with the models to sort of create new things. And I think that's actually a whole new area of like, what are the norms? What does good look like in that sort of environment? And I think that's one of the things we're seeing firsthand with Notebook LM is that actually there's something to people being able to curate some sources and sort of work with the AI to create something that previously they never could have created on their own. And so I think that's maybe a third area that's kind of interesting to me. Hmm.
I want to talk about another debate that's been going on in a lot of places, which is the sort of debate about creative production and what happens to people whose content is used to train AI systems, whether they are being fairly compensated for that. Obviously, my employer, The New York Times, is in active litigation with OpenAI and Microsoft about this.
But I think there's a lot of fear out there. We saw this in the Hollywood writers' strike and some of the other unrest in creative industries about the use of AI on copyrighted data. Is this a problem with a solution? Does anyone here know how we resolve this conflict in a way that makes people feel like there's a fair resolution, Tim? I mean, I think it should, if we had a rationally working sort of political system, be subject to relatively easy solutions.
There are mechanisms, for example, ASCAP or compulsory licenses over the history of the creative industries. There was a time when radio was a great threat to composers. There was a time when the cable industry was a great threat to television. And we worked these things out in government through things called compulsory licenses or some kind of way of paying creators.
I just am worried about the American political system maybe being too broken to do it. But, like, who really disagrees with the idea that creators should just get money in some way, the way that, you know, the composer gets money,
or anyone else who's creative gets some money for their work. It should be doable. Curious about that, and stop me if this is too much of a nerdy sidebar, but there's kind of two points at which the creators could maybe be paid, right? One, when the image that they worked on gets put into the training data, and then another when...
potentially when someone is like, make an image in the style of blah, blah, blah person. I feel like that second thing at least is something that creators seem interested in, and it's at least novel compared to, like, radio, because it's not actually exactly reproducing the work that they created. Yeah, I mean, one thing I think is an advantage in the AI world is everything is quantified. You know more. You know, radio has always been a rough guess
of how their music is getting played, and they've never really known. They sort of send out checks based on some sampling. I mean, you could really know. And then, you know, I think it's a negotiation. I think you could figure it out and have something for — this would be a little technical — using a derivative work versus the original training, you know, basing something on everything. In a rational system, it's, you know... I hope the New York Times lawsuit results in a big settlement.
I'm a copyright lawyer also, and it seems like they've got OpenAI on the regurgitation stuff. So I hope that is sort of the beginnings of a negotiation, because I hope it just doesn't get held up.
The worst thing that could happen would be something like Google Books, where everybody just fought so long and now there's this library out there somewhere that no one can really get to. People like me want to read those old books and it's almost impossible to get to them. So, you know, we're betting on the American sort of political and legal apparatus to be able to solve it.
I love the idea of attribution as a service, as a business model for how we do this. And yeah, you could kind of think of these micropayments that happen when your data is used for training or when it's used for inference or it's kind of part of the inference. There's a company called Prorata.ai that's starting to do this. And some of the large language model companies are kind of embracing this idea of attribution. So I wonder if that will take off. Yeah.
Sarah, I want to ask you about another big debate happening in the world of AI, which is about whether AI progress is hitting a wall. You talk to a lot of people building AI systems. You have a great podcast as well. And Sam Altman was just asked by Andrew Ross Sorkin on stage whether AI is hitting a wall. And he reaffirmed his position that there is no wall. The current architectures are
sufficient to get us all the way to AGI. We are not running out of data or compute. The bottlenecks are just not there, and any rumors to the contrary are just false. So what's your take on the question of whether AI progress is hitting a wall?
I'm very leery of making predictions in AI that go beyond, like, as far as I can see. And let's call that, like, six months and the data center agreements people have already signed. And I would say, like, we are going to see model progress, right? The way I would think of it is—
You know, there's an internet data set and others that is used in pre-training today, and getting data for model progress is going to become more expensive.
There are a lot of people working on ways to make additional data sources, curation, synthetic data very useful. And the number of interesting scientific minds working on this problem, and the opportunity it has, is orders of magnitude larger than it was a few years ago. So I'm quite confident that there will be continued progress.
I think like the way I might frame the question is like how expensive is it? Because, you know, data centers and power at a certain size, you have to justify it somehow. And if you are collecting human expert data in a bunch of specialized fields,
in very sophisticated ways, that's much more expensive than using scraped internet data, right? And so I think, as far as I can see, I'm very confident we will see continued progress in model capabilities. And I think people are also coming up with new ways to improve the experienced output, right? And so everybody here knows about the idea of test-time scaling. So, like, you know, as a shorthand, thinking harder about harder problems.
And I think that's a different paradigm where we're going to get like much more interesting powers and agents out of even the existing base models.
Anyone else want to share their thoughts on whether we're approaching a wall? Just from our research, we don't see a wall. That doesn't mean there isn't one. But there's no hard scientific evidence that we see at this point that there is a wall. I think the bigger challenges are more economic. So one framing that Kevin Scott, who I know you know well,
has shared with me and others is if you think about the current state of generative AI systems as being able to fairly reliably do the cognitive work that might take a really good human being five seconds of deep thought and then do that work. You know, we're going to get now to five minutes of deep thought, but at maybe 100x higher cost, compute cost.
The economics of that might not be very workable very broadly. But if we get to five hours, five days, five weeks, even continued 10 or 100x jumps in costs start to make economic sense. And so I think if there are- You're telling me there might be a point at which we ask ChatGPT or a Copilot or another system a question and it takes five weeks to come back with an answer? Yes.
I'm not sticking around for that. I'm sorry. That tab is getting closed. It's like managing an employee. It depends on the question, right? That's true. That actually is a good jumping off point for a question I had for the people who raised their hands for AGI by 2030. Because I personally, for what I think of as AGI, I think it will take longer than that. But I also think it would be the most
explosive, consequential thing to have ever happened. And I'm noticing that many of the people who said AGI by 2030 are talking about implications that, to me, seem small bore if you think that AGI is going to happen in 2030. So I'm curious, like, what does AGI mean to you? And what do you think are going to be the impacts of it? And how quickly will it
I don't think you and I are so far apart in how we view things. For me, AGI is kind of a difficult concept because there's no technical or scientific definition. So, you know, for all I know, we have it today. We just don't know it. It might take years. I guess for me, I would say it's like you were saying, you know, there's the five second AI, the five minute AI, the 10 hour AI. For me, I would say maybe AGI is the 10 year version of that. Do you think that would happen by 2030? Yeah.
What I think will happen, the way I think about it is, will there be AI that... So you have a mobile phone, I'm assuming, with you right now. And there are several triumphs of computer science research in that mobile phone. There's a software-defined radio. There's a Linux-based operating system. There's all sorts of cool stuff in there. And put together, you can't live without... Like, if you didn't have it today, you would probably be feeling fairly vulnerable and uncomfortable. Right.
You depend on it all day, every day. I actually left it in the other room. I'm very brave. I see. Wow. I'm impressed. Unlike Kevin over here. So the question I wonder about is by 2030, will we have generative AI or AI systems where we feel similarly, where we can't bear the thought of being without it even for a minute?
There are already people who feel that way about their AI companions. 100%. You know, people get married to them. That's their most important relationship in life. So we're seeing that already with many people. So...
Yeah, I've been working on AI since 2012, maybe not as long as some people here, but even in that decade, those 12 years, the goalposts keep moving. Even when people ask, what do you think about AGI? First of all, I'm probably not the best person to ask, but even then, I don't know if we know what AGI is. It just feels like such a strange thing.
The definition has changed 15 times over 12 years. I can't even track it anymore. What do we call AGI, Jack? I might just say my definition is that it's like an economically productive full nation-state that will scale in accordance with the amount of compute you can give it. And the implications of that are completely
economy-dislocating and changing. If you think about it, if I was running a country, even a midsize country, and I wanted the country to do something, you could accomplish things of vast, world-changing consequence, especially if the people are, to Ajay's point earlier, extremely loyal and rule-following and all extremely smart and diligent. And I think that that
level of change is wild. And maybe the important thing to remember is, we're not just talking about a single system; we're going to be talking about thousands to millions of copies of very, very smart systems working as a team. The New York Times is a really, really smart media publication, not just because of Kevin Roose, but because of you and all of your colleagues working as a team. And you're able to get amazing stuff done by working together.
these systems will be like that. And I think wrapping your head around that leads to some pretty wild consequences. I think it's possible to hold this belief in the progress of groups of AI systems that can achieve extraordinary things in your head at the same time as the belief that the last mile into society, in every way, is very, very long, right? The example I like to use here, and it's different technically, different architecturally, is that we have, for several decades, been able to do much of radiology algorithmically. And yet...
We have a dearth of radiologists in the United States, and that problem doesn't seem to be improving. And what you find out when you investigate is, oh, well, the radiology machines only work so quickly, and we're concerned about quality in these ways, and they have to generate reports. So I'd just say the complexity of what we think of as integration into society, organizations, companies, and people's processes is just very slow and deep. And actually, I think that buys us quite a bit of time, right? Even if we think general capabilities are
progressing very quickly. And I'm trying to think as aggressively as I can about that. So the period of time before an average American or a nation-state experiences that sort of explosive change might be a little further out than we think.
And maybe just to come in very quickly on that, you know, the economist Tyler Cowen, I think, has also made this point just as you did, which is that it takes, you know, 20 to 30 years for societies to adapt to new technologies. And he actually said this to me and Dario once along the lines of,
Well, your timelines may be optimistic, but you know what also needs to happen? Ten to 30 years of human societal change before these changes come through. And that may be what happens here. My favorite example of this historically is the elevator, which existed for several decades after it was invented before it became widespread in use, because people just didn't trust it.
And so there was actually a safety feature that needed to be invented to make it safer, so that people would actually get on one and you could start installing them in buildings. So there may be a link between safety and adoption here that's interesting to think about. I want to turn to politics.
That's an area where there's been a lot of change recently. Obviously, we have a new incoming administration. There's been some reporting that Donald Trump is considering appointing an AI czar to lead his administration's efforts on AI. So is that a good idea? And would anyone in the room like to throw their hat into the ring for that?
I do not want to throw my hat in the ring for that. However, I recently met the UAE's AI minister, and I believe he was the first AI minister in the world; he got appointed about seven years ago. And I think that's a great thing. I think it's great to have a country be on the forefront, not just of regulation, but actually of innovation: how can you use AI in really creative ways to accelerate scientific discovery and progress and democratize access to
all sorts of cool things. I don't know if this is the job description for the AI czar that Trump has in mind, but I think it could be really awesome. Only one way to find out. You've got to get on a flight down to Palm Beach and go take the meeting. Meet Elon Musk. My prediction is that you would quit after two years, frustrated, having...
Dan, you've been described...
at least once as Elon Musk's safety whisperer. You've been a safety advisor to xAI, his AI startup. And some people have speculated that you might have a role in any kind of Trump administration effort around AI safety and regulation. Do you have anything to share there?
I don't know whether there will be an AI czar. I think it would make sense to have some person driving it, because it's very siloed inside the White House. You have the NSC and the OSTP and all these different bodies, and they'll all be focusing on different aspects of the problem. So one person driving it with a specific vision, so that it's not a lot of high-powered individuals all moving in different directions, would make a lot of sense on AI. But I think we'll just see what happens. Do you have a guess, educated or otherwise, on what the Trump administration's posture toward AI will be and how it might be different from the Biden White House's executive order? Well, I mean, for both administrations it seems plausible that they'll recognize that this has national security relevance.
I mean, Trump repopularized export controls in some ways, and the Biden administration carried that further with export controls on AI chips. So that could potentially be a thing they would focus on, so that strategic competitors don't have access to those sorts of chips unless they agree to some sort of standard. I think that would make a lot of sense. But yeah.
We'll see what happens. They haven't gotten to that stage yet, really, because it's still mostly at the cabinet level. So I think we'll have a sense of whether there will be substantial efforts on AI safety and national competitiveness and national security in the next few months. But I think nobody knows. Yeah.
One idea that has taken root in some policy circles recently is the idea that we need a Manhattan Project for AI, basically a coordinated federal effort to bring the big AI labs together to race toward AGI before China gets it. Does anyone have thoughts on whether we need a Manhattan Project for AI? It seems to me we have about three or four or five of them already, if you look at the big companies who are investing in
tens or twenties or thirties of billions of dollars in it. Should the government be giving them more money to have them go faster? I think the one role the government can play is to fund new ideas, to keep the embers alive until they have commercial potential. Certainly in robotics they did that; they did a great job of that with us, and in other areas. I think they did that with the internet broadly. I see that as more of the role. I think at least the current versions of AI are getting plenty of juice, and
One thing I worry about in the US, UK, and Europe is that there is a kind of gap between the graduates, at both the graduate and undergraduate level in computer science, and what is going on in these three or four or five Manhattan Projects in industry. And when I look at the leading academic institutions in China, for example, the Chinese government has worked very hard to close those gaps.
And so I think one thing I would like to see is dramatically increased productive funding for academic research in AI.
I actually have a little bit of a hot take here. I think the easiest way to win AI for the US would be to just make it extremely easy to come here on a visa for any computer scientist or mathematician or physicist who is working in AI or has any inclination to work in AI. Because ultimately... I came here on the...
extraordinary ability visa and then got the green card that way and the US passport that way. It was hard, but at least I'm not from India, where it's impossible. For Chinese people it's also really hard. But that should just be automatic. You should just come here, show your degree, show your affiliation with some institution or university, and you should be able to get in immediately and get a job. Just because, as a person coming from...
I grew up in Russia. I'm half Ukrainian, half Russian. We all want to live here. Most people want to live here. Most researchers in China want to live here. And that is the competitive advantage: you can get all these people just like that. And the fact is, it's almost impossible to move people here. Moving my PhDs from Russia is impossibly hard. They can't even get a visa to visit. So they end up just staying there, even though they could be here overnight, and
the AI war would be over in one day if that immigration problem were just solved. And I think this is actually a really simple thing to do. Unfortunately, I'm not seeing much progress. Well, if I can pile on on that, it's a real problem. I felt this in the White House, that
immigration policy has come to be understood as only about the southern border. As soon as it came up, it was about asylum and big caravans or whatever; the southern border just became immigration policy, and it's impossible to have a discussion about anything else. When I was in the White House, we tried to
absorb a lot of Russians who were fleeing Russia, and Ukrainians too, and tried to do some stuff. And we did some. But because it's so centered on the southern border, it's just one of the ways in which U.S. policy is insane. I mean, we've resorted to opening an office in Zurich because we can hire anybody there, and then we work remotely with them.
Same with OpenAI just opening the Zurich office. It's relatively easy there; it's actually quite hard to move people here. And this absurd H-1B lottery: how can there be a lottery for AI scientists? They should just get a visa stamp right when they
buy a plane ticket. Make it an ESTA-type thing just for that profession. And in fact, it's the opposite today: if you're coming with an AI degree, you get requests for evidence, RFEs, to oblivion. They will ask you 10,000 questions, and it's actually much harder to get a visa if you're coming with one of those degrees. Well, and Microsoft Research has continued, even now, to accelerate opening new labs around the world. It's probably worth noting that
economic security is national security, and things like a Manhattan Project are at the end of a very long list of interventions government can do to gain a lead in a technology. There are interventions you can do earlier, like sorting out immigration or, as Peter said, building the basic experimental infrastructure that would let scientists access the kind of computers that the companies assembled here have. And so when I hear terms like a Manhattan Project, I'm like,
That's where our imagination has got us. There are many other tools which are less expensive and scary that you can be using way earlier. And I think we need to try to focus on those.
One thing that governments around the world have done during previous waves of technological disruption is to try to cushion the impact for workers whose jobs are displaced. In some countries in Scandinavia, they have job councils where people who are laid off as a result of automation can get retrained for other jobs. Are there things that the government can or should be doing to help people whose jobs may be threatened by AI in the coming years?
Well, in the future I see, there aren't really going to be other jobs to retrain people into pretty soon. So I think it would have to be a pretty different approach. Maybe something like UBI is, I think, going to have to be on the table at some point in the future. But I also think that it is, unfortunately, maybe number four or five on the list of monumental problems to be sorted out. And it's on a log scale. So the ones that are higher on the list are...
Well, let's talk about that because jobs, I think, gets short shrift in this conversation sometimes where we talk about immediate near-term risks and long-term existential risks. There's sort of this middle layer, which is the fear that AI could take away all the jobs. The question I get asked most when I go around the country talking about AI with groups of people, as I'm sure you all do, is,
what is the job that I can get or what is the job that I can tell my kid to train for that will be safe from AI, AI-proof? For a long time, people were told to learn to code. That was the way to protect yourself from being displaced. But that is no longer self-evidently the best option for people. But are there AI-proof jobs or at least jobs that AI will find harder to displace? Yeah.
Rana? I don't know what the job is, but I have a 21-year-old daughter and a 15-year-old son. And I don't know what they're going to do, but I know that critical thinking, collaboration, communication, creative thinking, all of these things are going to continue to be super important. And I want them to be really skilled in those areas.
Yeah, more soft skills. So I don't know what they're going to end up doing. It almost doesn't matter.
Well, and tell your kids, whatever they choose to do, if they want to go into software engineering, to get into it and learn how to be very good at harnessing AI to do it. Actually, I've looked at this very carefully in modern medicine. It's very rare for technology to have eliminated humans from a medical specialty. It has happened twice. One is that the rise of brain scan technologies led to neurology, which completely eliminated phrenologists.
And the other is that microbiology advances and therapies eliminated the use of barbers to do bloodletting as a way to treat flu and things like that. Leeches have been underemployed in the medical field for decades. What has happened over and over again is that the prized skills of advanced knowledge workers like medical specialists
have been altered dramatically. And my favorite example is in obstetrics, where doctors used to be trained and had the prize skill of manual palpation as a way to examine the unborn fetus and the pregnant woman. And that has been completely displaced by ultrasound.
And there was tremendous resistance. There were safety concerns early on. But things eventually flipped to the point where, now, if a patient went to an obstetrician who said, I don't believe in that ultrasound thing, I'm going to put my hands on your belly, that could be considered a reportable event. And so I think there's the potential that we might need far fewer obstetricians
of a certain type in the future. But even that tends not to happen as often as people fear. I mean, if you look at the history of technological developments over the long term, obviously we still all have jobs, right? So no technology is completely... The problem, and I think it's something Jack kept bringing up, is the disruption in the middle.
And, you know, we've had a conversation about risks. I think it's important to think about two kinds of risks. In some ways, some of the safety risks are almost easier to describe and name, because they are things like inventing a bioweapon or something like that; they're a little more tangible. Then there's the broader sort of macroeconomic disruption. I mean, there's a strong argument that the Industrial Revolution and the invention of a lot of technologies led to World War I, World War II, all these various killings. And that stuff is really much harder to sort of
counter. And I don't think we're going to arrive at an answer right now. But I could imagine that 10 years from now we'll have a conversation and say: we didn't realize that the massive disruption of so many people having to shift jobs led to the rise of an authoritarian leader, and we ended up in some kind of war. It would be hard to link it directly to AI, but it feeds this
existing economic insecurity. So to come back to your point, I think anything we do to try and soften the disruption, I don't have an exact program, but to make it so that gangs of angry people don't take to the streets and support strongmen taking over their countries. I mean, we're close to that in a lot of the world.
That is the kind of stuff I would think is the most important for civilization to survive. And to that point, a lens that I use to think about the AI revolution is that it will play out like the Industrial Revolution, but around 10 times faster.
So, you know, over the period of hundreds of years, a couple hundred years, we went from a world where 90 plus percent of human beings were employed in farming to a world where maybe like one or two percent now in the modern industrialized economies are employed in farming. Right.
But that change occurred over a long enough period of time that most of it happened generationally: the father was a farmer, but then the son went off to the city and took a factory job instead, and then his son went off and took an information technology job as a podcaster or whatever. But if that transition happens over... I should know about my grandfather. Yeah.
over like 10 years instead of 200 years, then it doesn't happen over the course of generations. It has to be individuals who are losing the relevance of all of their skills and then having nothing necessarily to replace that with. What about the fact that...
the US, Mexico, and Canada all have declining birth rates? I think that... There are several in Europe and several in Asia, too. South Korea is at 0.7, where 2.1 is break-even. So there's an argument that productivity is going to depend on AI and robots, because populations are aging and declining,
separately from the risk that there's going to be competition between workers and AI. Well, get those AI researchers in. Population crisis solved. Well, maybe no one will have children because they're all, you know, shacked up with their AI partners and lovers, and that's what will cause the next population crisis. Josh and Sarah, I want to ask you about what's next in AI. You've both spent a lot of time
building and funding products that are coming up in this space. There's been a lot of talk recently about agents. That's one thing. Anthropic just released a demo or, I guess, a real live version of a tool that can use the computer, basically AI that can click around and order you plane tickets or something. What else should we be expecting in the next, call it, 12 months?
I'll talk about a few specific applications that I'm excited about. But I will say, I feel like a case should be made for some future of abundance as well, and that's not been part of this conversation at all. I'm not saying that trivially happens without mechanisms for distribution of abundance. But I don't think anybody here, if anybody's ever read The Better Angels of Our Nature, or just thought about the quality of life
pre-Industrial Revolution. We're not saying, let's go back. I'm not taking my kids there. I'd rather say, oh, you'll find a job, right? Or maybe jobs will be less important. And I am a sort of pure capitalist. I think we're going to continue to have a market economy, and people will see a significant part of their value to society and their identity be determined by their work. But historically, and this is different from all other things,
demand for many different things has been elastic and emergent after technical revolutions. And so I think having some faith or some picture of what improved speed of scientific discovery and cheaper healthcare and education means in the world should make us a little bit more positive. I'm glad you brought that up, because I would love to have you just sketch the vision. Like, what does life under the post-AGI abundance
economy look like for the average person? What do they do all day? How do they make money? Do they need to make money? What does life look like for someone who is quite young today? I'll put our digital twins to work. Yeah, it's possible. But when you talk about the sort of more optimistic vision, what are you picturing? Yeah.
Yeah, I don't feel prepared to picture that, but in a future where you have a high quality of life, where there is enough productivity that you can do less work and the work you do is what you choose, I think people learn and entertain and create, right?
Right. And I think that is actually quite interesting, because our level of understanding of the world is still quite limited. And so I think there is more focus on discovery and entertainment and socialization. Actually, I have one thing to add. I think there's so much focus,
rightfully so, on jobs and whether we're going to have a job or not, and whether we're going to have money and an abundance of money. I think we need to focus on mental health, because, and maybe it's a little bit of a contrarian take, I do believe most wars happen out of mental health problems. Most of the horrible things in the world happen because of the
emotional weaknesses of different people, of them not being loved in a certain way when they were little, and so on and so on. And all of these emotional problems... There's a great movie, Don't Look Up. We all know what's going to kill us, and we are not doing anything about it, or not doing enough about it. And it's not because we can't do it; it's not because we don't know how to do it. It's because we can't do it emotionally. We're just weak as humans. So unless we figure out
how AI is going to fix us inside, no matter how abundant our lives and how much entertainment we have on the outside, I'm not seeing a great future. I'll give you a positive vision, which is that I think most of us spend most of our time working, and if work actually got better, that would be a positive vision. I think that could...
You know, these are my own beliefs, but I think if it were very easy to run your own small or medium-sized business, if you just had a sense that you had the economic autonomy to start your own business and be the way you wanted to be more easily, I think that would be good. And I think AI could really help with that.
And the negative version of that is if all the market power is controlled by a small number of companies who extract everything, and you feel like an Amazon seller who's just trying to keep even. I think that's bad. So in other words, I think a lot will depend,
frankly, on market power. And that's sort of an economic perspective again. But I do think there is a very positive vision of the future in which people feel a lot of economic autonomy and where their work life is a lot better. But I don't know if we're going in that direction or not. That's my concern. There was just one tiny little thing. There was a concept of below the API and above the API a few years ago, and I still feel it's kind of relevant that at some point people are going in two directions.
Some are working above the API and some are working below the API, meaning one person, for example, is writing code at Uber and another person is driving the car where the machine tells him to go. Just listening to you, there's always something that resonates for me, because I do think AI has the power to amplify both the best things we do and our worst weaknesses. And it's that tension, I think...
I didn't answer before about the existential dread, but that's where for me, the existential dread comes from. The knowledge that it can scratch at our worst open wounds while also really enabling us to do things we couldn't do before.
In terms of positive visions, I would really recommend the essay that Jack mentioned earlier by Dario, Machines of Loving Grace. I think it is probably the best positive vision out there. There's really not much literature in this space. But there's this vision that if we center AI around humans and build AI that is in service of humans, AI enhances who we are as humans: we become healthier and more productive and
Back to your concept of human flourishing, like better on many aspects of what it means to be human. And I think there's a world where we can do that. I'm optimistic. We only have a few minutes left, but I want to make sure that I get to my favorite question that was suggested for this group, which is, what is the thing that you have been wrong about when it comes to AI? Yeah.
No epistemic humility in this room. No, nobody has been wrong. I'm wrong all the time. Which one were you wrong about? Geoff Hinton and his team visited Microsoft Research for a summer back in 2009, and they were suggesting using this layered cake of neural nets instead of Gaussian mixture models for speech recognition. And I knew it was ridiculous.
In fact, I even felt sorry for Geoff, because I'd been a professor with him back in the late '80s at Carnegie Mellon, and he was doing the same stuff back then. And I just thought, wow, he hasn't let go of this. And of course, he and Ilya and the team just changed the world. Deep learning became incredibly important, and that kind of thing just happens all the time.
I think I was wrong about just how many bad people there are in the world, about how relatively few there are. I think a lot of people who care about AI safety or risk,
including me, had a wrong, cartoonish view of how many malicious actors there are out there. You're sitting there in the AI lab and you're imagining a shadow Anthropic, which is just like Anthropic except all it does is bad stuff, and your model of risk becomes based on that. I think people are just much, much more well-intentioned than that. And that's an area where I've had to change my views.
I'd say I have been surprised by how few consumer-facing products there have been, and how little of the uptake of useful applications that seems possible has actually been realized over the last three years. I think every year I kind of pictured this cornucopia. I pictured myself using AI every day.
And that has not happened. I have not figured out how to integrate it into my workflow. And I think it's been hard for the vast majority of people, except for some power-user programmers. I think it's coming in terms of consumer applications. I mean, the capabilities are obviously increasing rapidly, and there are great product people, like the NotebookLM people and many others. The ability to just talk into a blank chat box is the very beginning. And it just
takes entrepreneurs. We need the iPhone moment, right? We need the iPhone moment for AI. I actually went back and looked at this; I kind of think we're in year two of ten in a platform shift that's happening. And this kind of longer-term view, thinking about it in those chunks, at least helps me. I think I've been wrong about
how fast things are moving, which is true, but also about how fast it gets adopted and integrated. But I went back and looked. The iPhone moment was 2007, when Steve announced it. Two years later, I don't know if any of you have looked at this, but the App Store was essentially websites that had been shrunken down,
flashlights, and fart apps. That was the App Store. And the app that looked like you were drinking a beer, but you weren't. Let's not forget that one. So I do agree with Sarah. If there's something I've been wrong about so far, it's that I thought the adoption would be faster too. But I do think we're kind of in this phase where there's been a first wave of apps
that have largely been chatbot-based, and we're now entering a second phase where there are a lot of new applications that are really built AI-first in a different way. Give us a sense of what's coming. I mean, you're at Google Labs. What should we expect in the next year as far as AI products? Well, we think a lot about what the future of knowledge is going to look like and what it means to have, say, a tutor on demand, which was talked about earlier. We think a lot about what the future of software development is going to look like,
where you have 10x engineers become 100x, or 0x become 10x. Then the creative tools: there are loads of different ways that creativity is going to be unlocked when you have, again, that mix of human and AI working together. And then maybe a final one we've talked a little bit about is the future of work. I think we're getting into a phase where we're moving from point products to rethinking entire workflows, and that's where it gets really interesting. And I assume for
the VCs on the panel here, that's where it also gets interesting just from an upside-potential perspective, because then you're starting to talk about new ways people are going to work, and not just bolting AI into an existing workflow. With the time we have left, I want to do a lightning round and go around the table to answer the following question as concisely as possible, in as few words as possible. You fall asleep and wake up in 2030.
AGI has arrived. What's the first thing you have it do for you? Dan. What happened? I'll ask it that. So that's a punt. Sorry. You would ask it what happened? Yeah. Did we win? I'd ask it what I least understand about myself. Hmm.
Introspection. Laundry is the thing that came up for me. Very uninspiring, but I would have it do all the things that I hate doing. I agree with that. Mark, if you can get the Boston Dynamics people on the laundry-folding robot, that would be great for me personally. This is also coming. Yeah, it is. Really? Physical intelligence, right? Maybe a year. Can I pre-order?
But at what cost, right? But at what price? At what price could I get my laundry done? Wow. Eugenia? For me, it's really easy. I've been working on it for 12 years. I want it to make me flourish in life. That's all it is. Help me live a happy life.
Translate into English what my dog is trying to tell me in the morning. That's a good one. Get up, Peter. I'm hungry. Josh? Something around chatting and sort of reconnecting with people, both like past or present, across location and distance would be interesting to me.
Sarah? In 2030? I have a 7-year-old, a 5-year-old, and a 1-year-old. And I think at that point it would be, what can I teach them today? I would ask it to help solve the massive governance problems posed by the existence of AGI. On brand, yes. Mine's similar to that one. I would have each of my grandchildren get, and then you could plug in, an MIT or Stanford or Harvard education.
Tim, I'm going to borrow from the movie Her and have it read all my unread emails and respond to them. And respond to them very politely. Yes, and I'll add mine, which is that as soon as AI can file my expenses for me, I will consider it AGI. That will be the greatest thing we've invented so far.
What a wonderful beginning of a conversation. I hope that this sparks many more, and I'm so grateful for you all sharing your views and your thoughts and your insight here today. I think we've accomplished a lot, and while we haven't solved the problems posed by AI, I'm confident that the people in this room will be part of the solution. So thank you all so much for coming, and have a great rest of your day.
Dealbook Summit is a production of The New York Times. This episode was produced by Evan Roberts and edited by Sarah Kessler. Mixing by Kelly Piclo. Original music by Daniel Powell. The rest of the Dealbook events team includes Julie Zahn, Hillary Kuhn, Angela Austin, Haley Hess, Dana Prakowski, Matt Kaiser, and Yanwei Liu.
Special thanks to Sam Dolnick, Nita Lassam, Ravi Mattu, Beth Weinstein, Kate Carrington, and Melissa Tripoli. Thanks for listening. Talk to you next time.