Wondery Plus subscribers can listen to How I Built This early and ad-free right now. Join Wondery Plus in the Wondery app or on Apple Podcasts. This is it. We've got an Amex Platinum Pro on our hands, ladies and gentlemen. We haven't seen anyone relax like this before in the Centurion Lounge. Is he connecting to complimentary Wi-Fi? Oh my, look at that. He is.
and you will not believe where he's going next. The Amex dedicated card member entrance for the win. Unbelievable. When you get travel perks with Amex Platinum, you're part of the action. That's the powerful backing of American Express. Terms apply. Learn more at americanexpress.com slash with Amex. Take your business further with the smart and flexible American Express Business Gold Card.
You can earn four times points on your top two eligible spending categories every month, like transit, U.S. restaurants, and gas stations. That's the powerful backing of American Express. Four times points on up to $150,000 in purchases per year. Terms apply. Learn more at americanexpress.com slash businessgoldcard.
So when I travel frequently, I will need the drugs prior to usual. And I have ultimate confidence that I will get them due to this service. That's David, a CVS Caremark member. Experiencing how CVS Caremark makes access to medications part of his story.
The same way they help every member get the prescriptions they need, when and how they need them. Just like David's pharmacy care representative who helps schedule refills around travel. So David can see the world with less worry. Go to cvs.co slash stories to learn more and see all the ways CVS Caremark can help create an even better member experience.
That's cvs.co slash s-t-o-r-i-e-s. Hey, it's Guy here, and before we start the show...
I want to tell you about a super exciting thing we are launching on How I Built This. So if you own your own business or are trying to get one off the ground, we might put you on the show. Yes, on the show. And when you come on, you won't just be joining me, but you'll be speaking with some of our favorite former guests who also happen to be some of the greatest entrepreneurs on earth. And together, we'll answer your most pressing questions about launching and growing your business.
Imagine getting real-time branding advice from Sun Bum's Tom Rinks or marketing tips from Fawn Weaver of Uncle Nearest Whiskey.
If you'd like to be considered, send us a one-minute message that tells us about your business and the issues or questions that you'd like help with. And make sure to tell us how to reach you. Each week, we'll pick a few callers to join us on this show. You can send us a voice memo at hibt at id.wondery.com or hibt.com.
You can call 1-800-433-1298 and leave a message there. That's 1-800-433-1298. And that's it. Hope to hear from you soon. And we are so excited to have you come on the show. And now, on to the show. Hello and welcome to How I Built This Lab. I'm Guy Raz.
So artificial intelligence is already changing the way we live and work, believe it or not, even if you don't realize it. And so far, most of the AI applications we've seen have been focused on specific tasks like generating images or video from a text prompt or identifying chemical compounds for new drug treatments or using satellite imagery to spot forest fires before they spread or even producing a podcast like this one.
But what if there was a single AI program that could do just about anything a human could do, but even better? Well, my guest today, Shane Legg, has devoted his entire career to creating a general purpose AI that can be effective for, well, just about anything. In 2010, Shane co-founded DeepMind with the mission of bringing this artificial general intelligence to life.
Today, the company is part of Google's AI division and Shane is on the front lines of a technological revolution. But his journey started back in the 1980s when he was just a kid in New Zealand with a new birthday present. So for my 10th birthday, my parents bought a small computer. It was called a VZ200, and it had an 8-bit microprocessor, a whole 8 kilobytes of memory,
and a built-in BASIC programming language. And as a 10-year-old boy, I'd heard about computers, you know, in movies or read about them in books or things like that. But I had a real computer of my own now. And this was prior to the internet and all these sorts of things. So
There was really only one thing to do on the computer, and that was to program. It was a place where you could make things. You could write your own programs. You could start learning the language and so on, and you could just dream. You could invent things. You could invent little spaces with little pixels or little graphics that would chase each other around the screen or do all sorts of things like this. And it was just sort of a playground for my imagination.
I could bring these little worlds to life. I could bring these little characters to life. And that just absolutely captivated me when I was 10, 11, and so on. You would go on to study mathematics and statistics in New Zealand. This is in the late 90s when you kind of start your career. Yeah.
the notion of artificial intelligence had been around. It had been written about in science fiction novels, and it had already been depicted in films. Was that on your radar as a young man when you were starting out your career? When I first started, it wasn't. I started doing mathematics. I liked the subject because it was very challenging. I did some computer science as well. I found computer science very easy because I'd spent my
you know, adolescence programming. But it was only at the end of my second year when in my spare time I wrote some software that did calculus and I showed that to some of the professors and then they said, well, actually, we've got a summer job
on a machine learning project, would you be interested in coming to work for us for the summer? What did machine learning mean in 1993? It's like the equivalent of, I don't know, a mainframe computer compared to a laptop today, right? Yeah, it was early days then. It was a relatively new field then that had sort of split off from artificial intelligence not that many years earlier.
And the difference in flavor was the emphasis on instead of engineering like reasoning systems or something like that, the emphasis was on using data and learning a model of something. And so that's where machine learning comes from. And presumably at that time, by the way, you had to feed the data into the machine because it's really – I mean there really wasn't – I mean the internet was around, but it wasn't –
populated with endless reams of information. Yeah, it wasn't. And also, even if you had endless reams of information, the algorithms and the computers couldn't handle it. So what you would typically do is you would have specific small data sets. And by small, I mean small enough that you could print them out on a few sheets of paper. And you would have algorithms, particularly classification algorithms. So what they would do is you'd have a list of
I don't know, let's say people, and you'd have different measurements about their blood results and then whether or not they have the disease. And you would learn a function, which is given this person's blood results, do they or do they not have this disease? And then in a new instance, you'd get the blood results and you wouldn't know if they have the disease, but you'd learn to predict it to classify that individual as, say, having that disease or not. So that's a classification problem. Mm-hmm.
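To make that setup concrete, here is a tiny, hypothetical sketch of a supervised classifier of the kind Shane is describing. The numbers are invented and scikit-learn is used purely for convenience; nothing here comes from the interview itself.

```python
# A toy version of the blood-test classification problem: learn a function from
# measurements to a disease label, then classify a new, unlabeled patient.
# All values below are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is one patient: [measurement_a, measurement_b]; labels: 1 = disease, 0 = healthy.
X_train = [[5.1, 120.0], [6.3, 180.0], [4.8, 110.0], [7.0, 200.0]]
y_train = [0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# A new patient's blood results: the learned function predicts their label.
print(model.predict([[6.8, 190.0]]))  # e.g. [1], i.e. classified as having the disease
```

The data set here really is "a few sheets of paper" in size, which is the point: the early systems Shane worked with learned from exactly this kind of small, hand-fed table.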
So that was your first contact with a kind of AI, machine learning. But in the early 2000s, right, most people who were going into the field of computing were really focused on the Internet. But you went right into artificial intelligence at a time when it was really nascent. It wasn't really much of an industry yet.
Yeah, there wasn't much of an industry. It was certainly an active area in academia. And so, yeah, after doing my undergraduate and then master's degree, I ended up working on things like document classification for newspapers, like determining whether this document here that uses the word bank is really about rivers or about finance, right? Yeah. So what you're talking about is
basically creating a specific task for a machine that can get, you know, a bit more sophisticated at doing that single task. And this was, compared to what these machines can do today, relatively crude.
But this is what you pursued. I mean, you pursued a PhD in artificial intelligence already starting in 2003 and really began to think about…
machine superintelligence. This was the subject of your thesis, Machine Super Intelligence. I mean, when you talk about machine superintelligence at that time, were you already using the term artificial general intelligence? Yeah, so maybe it's worth going back a bit, to the company where I was working before my PhD.
And they were doing various things, including this sort of document classification and so on. But one of the things they were interested in is building artificial intelligence systems that were very general and capable, building a thinking machine or something, if you like.
And they didn't, I'd say they didn't really get very far in that, but it did get me thinking about that subject. And then when that company collapsed, I spent a lot more time thinking about the subject. And I actually became convinced, particularly after reading The Age of Spiritual Machines by Ray Kurzweil, that very powerful and general AI systems were going to come, but a few decades into the future.
So I actually came up with an estimate then that it was about a 50% chance of what I call AGI now by about 2028. And that's also about the time that I proposed the term artificial general intelligence. And to be clear, the way I define artificial general intelligence is it's an artificial agent that can do all the kinds of cognitive tasks that people can typically do and possibly more.
And I like that definition because I think it's very intuitive. We have a good understanding of the sorts of cognitive things that people can typically do.
And we can look around at existing systems and see that they can do some of the sorts of cognitive tasks people can do, but they can't do others. And so we have some sort of intuitive sense of the kind of breadth and capability that you would need to be classified as an artificial general intelligence. Okay. So we're about four years away from seeing whether your prediction is right. We're going to come back to that later in this conversation, but I want to talk about what you would go on to do because
while you were in graduate school, you met Demis Hassabis, still at Google today.
And Mustafa Suleyman, in the news now because he was just brought on by Microsoft to head up their AI program. And together you founded a company called DeepMind in the UK. Yeah. What was the goal? Presumably it was to develop an AGI? Yes. Our business plan from September 2010 had our logo and the name DeepMind on the front, and it had one sentence,
which was to build the world's first artificial general intelligence. And eventually you would develop a system that could defeat the greatest Go players in the world. And for people who don't know this game, and I don't know it very well, this is a much more complex game than chess. This was a huge challenge to try and figure out how to actually...
build that. Can you explain why you guys were focused on Go? Yeah. To be clear, we did work on many projects. We had a broad range of things, but AlphaGo ended up being, of course, one of the big famous ones. Yeah, it was a long-standing problem in artificial intelligence. In a game like chess, the number of possible moves is not so great that you can actually use brute force computation to
more or less, to actually search through all the space of possibilities of which moves, which counter moves and so on and find very good combinations. In the game of Go, that's much more difficult because the board is much, much larger and there's a much larger number of possibilities at each point. So when you start looking through the trees of different possible moves, it exponentially grows at a much, much faster rate.
And so this basically tripped up all the more brute force approaches to search and planning that would try to play this game well. So we had to come up with something new.
And what we did is we actually blended together search techniques, something called MCTS, Monte Carlo tree search, with deep learning. And the deep learning would actually learn two things. It would learn which moves were likely to be good moves just by looking at the patterns on the board using a deep neural network. And it would also learn, given a certain state of the game, who is more likely to win or lose.
And then what we would do is we'd get our AI system to play against itself. And as it would win or lose games, it would take that signal and then it would apply these learning algorithms to improve these deep learning networks so they become better and better at anticipating which are the likely good moves. And at any given point in time,
which of the two players was most likely to win the game. And if you combine that with a search, then you start having a very, very powerful algorithm that both has this sort of classical search, but this sort of slightly more intuitive, deep learning aspect where it's kind of picking up subtle patterns in the game and trying to figure out, you know, which way it's going based on sort of these more subtle structures and so on. And it was that beautiful combination of the two that led through to the breakthrough in performance.
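As a rough illustration of that combination, and definitely not DeepMind's actual code, here is a minimal sketch of Monte Carlo tree search guided by a policy and a value function. The neural networks are stubbed out as a uniform prior and a neutral value estimate, and a toy counting game (players alternately add 1 or 2; whoever reaches 10 wins) stands in for Go.

```python
# A heavily simplified sketch of the recipe Shane describes: Monte Carlo tree
# search (MCTS) guided by a "policy" (which moves look promising) and a "value"
# (who is likely to win). Real AlphaGo learned both with deep networks trained
# by self-play; here they are stubbed out so the example stays self-contained,
# and a toy counting game (add 1 or 2, first to reach 10 wins) stands in for Go.
import math

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= 10]

def policy_value(total):
    """Stand-in for the neural network: uniform move priors, neutral value."""
    moves = legal_moves(total)
    return {m: 1.0 / len(moves) for m in moves}, 0.0

class Node:
    def __init__(self, total):
        self.total, self.children = total, {}   # children: move -> Node
        self.N, self.W, self.P = {}, {}, {}     # visit counts, value sums, priors

def search(node, c_puct=1.4):
    """Run one simulation from `node`; return the result from its player's view."""
    if node.total == 10:                # the previous player reached 10 and won
        return -1.0
    if not node.P:                      # first visit: expand using the "network"
        node.P, value = policy_value(node.total)
        for m in node.P:
            node.N[m], node.W[m] = 0, 0.0
        return value
    total_n = sum(node.N.values()) + 1
    def puct(m):                        # exploit high value, explore high prior / low visits
        q = node.W[m] / node.N[m] if node.N[m] else 0.0
        return q + c_puct * node.P[m] * math.sqrt(total_n) / (1 + node.N[m])
    move = max(node.P, key=puct)
    child = node.children.setdefault(move, Node(node.total + move))
    value = -search(child)              # the opponent's result, negated
    node.N[move] += 1                   # back up the outcome along the path
    node.W[move] += value
    return value

root = Node(0)
for _ in range(500):                    # more simulations -> stronger play
    search(root)
print("most-visited first move:", max(root.N, key=root.N.get))
```

In the real system, the policy and value come from deep networks improved by self-play, with each win or loss fed back as a training signal; here the search alone is usually enough to settle on the stronger opening move in the toy game.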
We're going to take a quick break, but when we come back, how DeepMind went from mastering Go to working on some of the biggest challenges of our time. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab. I love a good deal as much as the next guy, but it has to be easy. No hoops, no tricks. So when Mint Mobile said you could get wireless for $15 a month with the purchase of a three-month plan, I called them on it, but it turns out it really is that easy. I was able to set it up in minutes. The website is super simple, and the whole process was seamless. And you get amazing service for a fraction of the price the other guys charge. To get started, go to mintmobile.com slash built. Right now, all three-month plans are only $15 a month, including the unlimited plan.
All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network. You can use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts.
To get this new customer offer and your new 3-month premium wireless plan for just $15 a month, go to mintmobile.com slash built. That's mintmobile.com slash built. Cut your wireless bill to $15 a month at mintmobile.com slash built. $45 upfront payment required, equivalent to $15 per month. New customers on first 3-month plan only. Speed slower above 40GB on unlimited plan. Additional taxes, fees, and restrictions apply.
See Mint Mobile for details. Finding the right candidates to hire can be like trying to find a needle in a haystack.
There's just too many resumes, but not enough with the right skills or experience you're looking for. But not with ZipRecruiter. ZipRecruiter finds amazing candidates for you fast. And right now, you can try it for free at ZipRecruiter.com slash built. The second you post your job, ZipRecruiter's powerful matching technology works to show you qualified people for it.
So there's no need to sift through all those resumes. Ditch the other hiring sites and let ZipRecruiter find what you're looking for, the needle in the haystack. Four out of five employers who post on ZipRecruiter get a quality candidate within the first day. Try it for free at this exclusive web address, ziprecruiter.com slash built. Again, that's ziprecruiter.com slash built. ZipRecruiter, the smartest way to hire.
When it comes to ensuring your company has top-notch security practices, things can get complicated fast. Vanta automates compliance for SOC 2, ISO 27001, HIPAA, and more, saving you time and money. With Vanta, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center.
Over 7,000 global companies like Atlassian, Flow Health, and Quora use Vanta to build trust and prove security in real time. Our listeners can claim a special offer of $1,000 off Vanta at vanta.com slash built. That's V-A-N-T-A dot com slash B-U-I-L-T for $1,000 off.
Welcome back to How I Built This Lab. I'm Guy Raz. So in 2014, only a few years after its founding, DeepMind was acquired by Google. This is, from a business perspective, an amazing story. But you had only around 75 employees, I think, at DeepMind. And Google acquired the company reportedly for between $500 and $650 million. And then you became...
for about nine years, sort of an independent part of Google. And we'll talk about why that changed recently. But essentially working on different projects, including projects around drug development and trying to model proteins that could potentially offer life-saving cures. Yeah, yeah. So that was another big success, which is protein folding.
So your body is largely built out of proteins. These are the molecules that are the building blocks of, you know, most of biology. And it's not very difficult to know what the atoms are that make up the molecule. It's a certain chain of atoms. But what happens is the different atoms in that chain attract or repel each other in different ways. And the result of that is that chain actually folds up into a three-dimensional shape and
And that three-dimensional shape can be all sorts of things. It can be an axle. It can be a spiral. It can be sort of a sheet. It can be sort of a sphere that contains something. You can build all sorts of interesting structures out of these shapes, sort of like 3D Lego blocks, if you like. And these shapes are very important if you want to understand what that protein does. Now, the problem is that while it's easy to find out the molecules that make up the protein, it's very difficult to find out the shape.
And the techniques that people used to use would often require several years of research to find out what the shape was and cost maybe $200,000. Finding out the shape of one protein is maybe something that somebody would do as a PhD thesis. And there are hundreds of millions of proteins.
So there was this computational challenge that had been around for many decades, which is, is it possible to take the molecule, just the knowledge of the atoms, and compute what
what the shape is directly rather than go through this laborious process of several years of experiments and all these sorts of things. And so that was called the protein folding problem. How does a protein fold into the three-dimensional shape? And can you predict this computationally? And so people have been trying to do that for a long time. And we thought using advanced machine learning, deep learning techniques and so on, that we had a shot at solving this problem.
And yeah, long story short, we spent a few years working on it and we basically solved the problem. Yeah.
And just for some clarification around it, I mean, what does it mean in practical terms? I mean, is it ready to go in terms of enabling treatments and drug development? Yeah, it's not quite that direct. What we did is we folded all the proteins known to science and we released them all to the public for free. So you can find them all online on the internet and you can look at the shapes and so on. And we've had about 1.7 million researchers use that resource.
It doesn't mean you suddenly know how to build a drug or something. But it does mean that you can now see what the different shapes of the proteins are, the proteins that you might target with drugs, the proteins that might be, I don't know, part of your liver or some other part of your biology. And you can see how they maybe interact with each other and all sorts of things like this. So it's not like you can suddenly solve the problem.
But before, when you were operating in the dark and didn't even have any idea what any of these things looked like, let alone how they might connect together and other things like that, now you can see a whole lot of this information. And so that's incredibly useful when you want to go and then develop drugs and so on. So you might see, I don't know, there's a particular problem taking place and it's to do with a particular protein. You might be able to then go, okay, what proteins are going to connect into this other protein and act on it in certain ways? And you can then target specific proteins,
things that look like they're going to be very interesting and so on. So it's not a solution to drug discovery and everything, but it's a great enabler, an accelerator of this kind of process.
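For readers who want to poke at the released predictions Shane mentions, here is a hedged sketch of querying the public AlphaFold Protein Structure Database. The endpoint path, the list-shaped response, and the example UniProt accession are assumptions based on the database's public API documentation, not anything stated in the interview, and they may change over time.

```python
# Hypothetical sketch: look up one of the publicly released AlphaFold predictions
# via the AlphaFold Protein Structure Database (alphafold.ebi.ac.uk).
# Endpoint path and accession are assumptions and may not match the current API.
import json
import urllib.request

uniprot_accession = "P69905"  # assumed example: human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"

with urllib.request.urlopen(url) as response:
    entries = json.load(response)  # assumed: a JSON list of prediction entries

# Print the available metadata fields (links to structure files, confidence data, etc.)
# rather than assuming exact field names.
for entry in entries:
    print(sorted(entry.keys()))
```

If the call succeeds, the returned metadata includes links to the predicted 3D structure files, which is the "shape" Shane is talking about.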
Can you explain what happened around 2017 in the field of AI research? My understanding is that large language models were essentially introduced. And I guess that was like a turning point in the acceleration of the development towards artificial general intelligence. I think it was. Google had invented an extremely powerful algorithm called Transformer.
And we'd been experimenting with it for things like translation in and out of multiple languages and all sorts of things like that. And we'd found it to be very, very scalable. And then what happened was that another company, OpenAI, latched onto the idea that this is, in fact, extremely scalable, more scalable than anybody had appreciated.
And they just basically scaled it up and scaled it up and scaled it up. And, you know, it just kept on scaling, basically. And so it was a bit of a surprise in just seeing how far these language models could go when you made them extremely big and you started feeding in, you know, a significant chunk of text from the internet. And so that came as a surprise to many people. They didn't think it would go quite so far, you know.
So last year in 2023, DeepMind, which had been essentially an independent part of Google for a decade, was merged with Google's AI division. And I think that was in response to what was happening with OpenAI and maybe even some other platforms.
competitors in the space to really kind of ramp up what Google could do around artificial intelligence. Did part of that feel like you were joining an arms race? Did it feel like, okay, all hands on deck, we've got to compete against these other companies? Yeah, so what happened basically was that it was becoming clear that extremely scaled-up models were going to become a really important thing in the future. And
And for Google, it was important that we had the biggest and best models. And it doesn't make sense to have two different groups both developing big models. We needed to come together. We both had a lot of expertise in this sort of thing. Come together and use all the resources in terms of all the people together and then all the compute and everything to make the best models we possibly could.
Around that time, Shane, you signed a letter that was signed by many other people in the AI industry, warning of extinction risk. This is one of the quotes. It says, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Everyone signed this letter. I mean, Sam Altman signed this letter, Geoffrey Hinton, sort of the godfather of AI, the heads of AI at Microsoft and Anthropic and everyone signed it. And again, I...
This technology is going to happen, right? But part of me is like, okay, all these people who are creating this technology and warning that there's a risk of extinction from the technology they're creating are signing this letter to sort of say, no,
we should be figuring this out. But at the same time, we're just going to keep moving forward and marching forward. And so to me, there's sort of a disconnect. Again, I'm not criticizing or attacking you. I'm just trying to understand your thinking around this, because on the one hand, you're signing a letter that's sort of ringing the alarm bells. And then, you know, a couple hours later, you're going back to your desk and doing your job.
Yeah, and doing my job is often working on safety, to be clear. I mean, why am I doing what I do? Well, one, I think that something deeply transformational is about to happen in the coming years. The world that we live in is being shaped by human intelligence.
the clothes I wear, the headphones I have on, the internet we're using to talk to each other, the words we're using, the concepts we're using. Even the atmosphere we're breathing at the moment has been affected by human intelligence and the combustion engines that we've invented and all these sorts of things, right? So human intelligence is a profoundly powerful thing that's affected the world very, very deeply in many, many ways. Now what's about to happen is that machine intelligence is going to arrive.
and it is going to be potentially very, very powerful. So this is going to be a deeply transformative event. Now, this could be amazing. This could be unbelievable. This could be a new golden age for humanity, opening up all sorts of possibilities, solving all kinds of problems, and just being really a mind-bendingly fantastical thing. But like any very, very powerful technology, there are things that we don't understand going into this.
There could be unintended consequences. We could possibly get some things wrong, right? And so we need to take it very, very seriously. We need to take seriously the possibility that things could possibly go wrong when we're going into such a transition. And then the other point is that I don't see any way to stop this. Can't put the genie back in the bottle. You can't put the genie back in the bottle. Intelligence is profoundly valuable for many, many reasons.
And I don't know of any way to globally get everybody to stop using and developing this very, very valuable technology. And so as far as I can see, this will be developed. And so the important thing is that we understand that this is indeed extremely transformative and valuable.
and we approach it with an appropriate level of care.
so that we can understand where the risks lie, understand where the challenges are, and we can navigate that wisely. So we end up in a future where this powerful intelligence is giving humanity many wonderful blessings and gifts, and we avoid all kinds of people misusing it or different types of problems where it's been misfiring in some way and causing some sort of problem or something like that.
I get nervous when I hear a future of maximum human flourishing. It really feels in some ways like a false promise. Not to say that parts of that won't happen. But
a version of that quote is almost exactly what Eric Schmidt said when your model won at Go in 2016. You know, this is going to usher in an era where humanity is the winner. And I'm not trying to be cynical here. And by the way, I really appreciate that you acknowledge you don't have the answers. I mean, you're not a policy guy. You're a scientist.
I don't believe anybody does. We're going into something which is at least as profound as the Industrial Revolution. Yeah. Now, could you have anticipated all the consequences of the Industrial Revolution before it happened? There's no way you could. It affected everything. It affected...
how cities are built. It affected international trade. It affected health. It affected diet. It affected the structure of families. It affected culture. It enabled mass industrialized murder. It enabled massive warfare too, right? Of course, there were hugely profoundly negative consequences, but
massive benefits too. Yes. All around us, we see the benefits of it. So these deep transformations are subtle and complex and you can't see all the different things come out. So that's why I signed the letter. I signed the letter to say to people, hey, wait a minute. There is enormous potential here, an enormous promise here.
But there can be some bad things here, too. And we need to be really serious about this. We need to understand how big a transition this is. And we need to treat it with the appropriate care so that we can get the benefits and try to avoid the downsides. We're going to take another quick break. But when we come back, AI with a million times the power of a human brain and why we should all be paying attention. Stay with us. I'm Guy Raz and you're listening to How I Built This Lab.
How I Built This is sponsored by ADT. ADT spends all of their seconds helping protect all of yours because a lot can happen in a second. Like one second your baby can't walk and then suddenly they can. One second you're happily single and then the next second you catch a glimpse of someone and well you're no longer single. Maybe one second you have a business idea that seems like a
pipe dream. And the next, you have an LLC and a dream come true. And when it comes to your home, one second you feel safe, and the next, even if something does happen, you still feel safe, thanks to ADT. After all, ADT is America's most trusted name in home security. Because when every second counts, count on ADT. Visit ADT.com today or call 1-800-ADT-ASAP.
Have you ever covered a carpet stain with a rug? Ignored a leaky faucet? Pretended your half-painted living room is supposed to look like that? Well, you're not alone. We've all got unfinished home projects. But there's an easier way. When you download Thumbtack, it's easier to care for your home from top to bottom. Pull out your phone and in just a few steps, you can search, chat, and book highly rated pros right in your neighborhood.
Welcome back to How I Built This Lab. I'm Guy Raz. Here's more from my conversation with Shane Legg, co-founder of Google DeepMind.
I think sometimes we humans take an ahistorical perspective on things because we're looking at what is going on at this point in time. And we're not sort of fully thinking about the sweep and scale of human history. But let me actually try and take a historical perspective for a second because you could argue that our brains, our human brains, are not more intelligent than a human brain 30,000 years ago. Maybe marginally, right? Like…
But could a human 30,000 years ago in the right environment be a member of the Manhattan Project team? I think it's possible. I don't know how much has changed in the brain in 30,000 years. But what we're talking about, especially if your prediction is right and it's four years away…
is a machine that could go from early Homo sapien to Manhattan Project physicist in a matter of weeks, eventually days, and then minutes, and then seconds. I mean...
That's kind of what could happen. A machine that just gets infinitely smarter faster. Yeah, I mean, it's not clear how infinitely smarter faster it can get. There may be limits to how quickly certain things can happen. We don't know what those limits are yet, so I don't want to make too many promises, but there may be certain limitations and certain scaling, certain exponential costs and so on as certain things develop. But
At a high level, I agree. One way I think about it is this. The human brain consumes something like 20 watts. It weighs a few pounds. And it sends signals via axons inside the brain. They're electrochemical wave propagations. They travel at about 30 meters per second. And the cycling, the frequency of the signals is on the order of 100 hertz. Now, if you compare that to just a present-day supercomputer...
Instead of 20 watts, you can have easily 20 megawatts. So you get a million times the energy consumption. Instead of the size of the brain, it can be a million times that size. Instead of sending signals through an axon at 30 meters per second, you can send signals at the speed of light, 300,000 kilometers per second. Instead of a frequency of transmission in the signal of 100 hertz, it can be a billion hertz, 10 billion hertz. So if you look at
energy consumption, physical space, speed of signal transmission, and the frequency of the signal, you're looking at six to seven orders of magnitude in every direction just with present-day technology. Now, how intelligent a system
will it be possible to construct given these sorts of parameters? And I think the answer is probably an extremely intelligent system. Extremely intelligent. Extremely intelligent. So I think the answer is we don't know, but I also think that it's not implausible to imagine a near future, not so distant, where machines are becoming exponentially more intelligent to the point where
The definition of an artificial general intelligence system isn't can it perform all the cognitive functions of a human, but rather –
What can't it do? Yeah. Right. What can't it think? So that definition of AGI to me is the bar. That's the entry level into AGI. But I do not think that's where it stops. I think you start going into what they call ASI or artificial superintelligence. And it may be the case that in certain dimensions the scaling of capability
tops out at a certain rate, and you don't actually go that much beyond humans. But in other dimensions, it may go far beyond humans. And so in some shape or form that's not very well understood, I think we will end up with systems that are, in general, far more intelligent than humans. And so this is why it is so important that people think about what's coming. They think about
how to navigate the possibilities that are opening up so that these very capable systems can solve cancer, that can create clean energy systems like solving fusion, can do all sorts of amazing things to benefit humans.
That is why this is so important. That is why I talk about this. This is why I've been talking about this since back in, you know, 2005 when people would listen to me. You know, AGI is a thing. It is coming. And when it comes, it's going to be so important that this is handled very, very carefully so that humanity can get the benefits of this.
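Before moving on, here is a quick back-of-the-envelope check of the hardware comparison Shane laid out a moment ago (20 watts versus roughly 20 megawatts, 30 m/s axon signals versus the speed of light, around 100 Hz firing versus gigahertz rates). The machine-side figures are illustrative assumptions, not measurements.

```python
# Rough ratios for the brain vs. supercomputer comparison; machine-side numbers
# are ballpark assumptions chosen to match the figures quoted in the conversation.
import math

brain   = {"power_watts": 20,   "signal_speed_m_s": 30,  "signal_freq_hz": 100}
machine = {"power_watts": 20e6, "signal_speed_m_s": 3e8, "signal_freq_hz": 1e9}

for key in brain:
    ratio = machine[key] / brain[key]
    print(f"{key}: about 10^{round(math.log10(ratio))}")
# Prints roughly 10^6 for power, 10^7 for signal speed, 10^7 for frequency --
# the "six to seven orders of magnitude" Shane refers to.
```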
I know the Biden administration put together an executive order last year, which is sort of one of the most robust attempts to build guardrails. Still, a lot of people don't think that that's enough. You work at one of the most valuable companies in the world, one of the most powerful companies in the world, right?
So what can you do? I mean, is there an answer? You're talking to me about this. You're trying to shake the public. And hopefully our listeners are like, OK, what do we do? But you're one of the people working on this at a company that has access to the White House and 10 Downing Street and the halls of power around the world.
How do you build in a protective system? Yeah, it's a very complex thing because it's not about a specific thing. It's like the internet is a good example.
It's not like there's a very particular technological solution to get the value out of the internet and avoid the problems. It actually affects all kinds of aspects of society, of markets, of social relations, all kinds of things. So it's really something that a very broad range of people from society have to engage with. That's really the only way to do it. It's not going to be solved by some people in a company somewhere. That's not how it's going to work. It's something that society as a whole needs to engage with.
Because that's the nature of the transformation that is at work here. And it would be inappropriate, I think, if it was like, you know, society was relying on a few people in a company to do the right thing. That's not really a robust way to go about deep transformations.
So we need to, as a society, and as I said, it's not just the machine learning people, but all sorts of aspects of society, all sorts of people from different disciplines need to think very carefully about what's becoming possible and take seriously, because for a long time, a lot of people didn't take this seriously, take seriously the idea that general intelligence in machines is actually coming. What does that mean? What are some of the implications of that? What do we need to start doing to prepare for that? Are there certain things that just shouldn't be allowed?
Are there certain rules or policies or regulations or other stuff like that so we can try to navigate this as wisely as possible? All right. This might be a weird question, Shane, but when you think about you, who you are, your family, are there – I'm going to sound like I'm a survivalist or crazy conspiracy theorist –
Are there things that you ever think about, like, I better do this? I better think about the way I'm doing this in my life to prepare for the potential downside consequences of what this other thing could mean?
No, not really. The way I think about it is that the maximum leverage I have as an individual in all of this is to support things like AGI safety research. I've been a vocal advocate for this for many years, including when it was widely ridiculed and there's a lot of eye rolling going on. I led the AGI safety group here at DeepMind for many years and
and was always advocating publicly for people to take this topic seriously. I personally think about the subject quite a lot and different approaches and the pros and cons of them and discuss this with various people.
And I engage with other companies and with government. I've talked to the UK government here about these subjects and so on, consulting with them about AGI safety and the challenges around that and the different approaches and so on. So as an individual, rather than prepping for some sort of scenario,
I don't know what you have in mind there. You can move to New Zealand. You're a New Zealand citizen. I'm a New Zealand citizen. I can move to New Zealand and have a bunker there. But, you know, that's not it. There's something that's happening and it's big in the world and it's going to affect a lot of people. It's going to affect everyone. So the point of leverage that I have in this whole process has been to stick my neck out
and say, look, you know, this is serious. We have to take this seriously. We have to think very hard about questions like AGI safety and socio-technical risks and all kinds of things like this and policy and governance questions and all this sort of stuff so that we have the best chance possible of navigating this wisely in a way which is broadly beneficial.
to the world. That's the point of leverage I have. There's no use going and trying to build a bunker somewhere. That's not going to get you very far. We're all going to, either it's all just going to work out or we're all going to die. So I hear you. I wouldn't put it in such an extreme spectrum. I think with these technologies, there's actually a wide range of possibilities. There are some very, very good ones. And there's probably many mixed scenarios. So if you look at, again, something like the internet,
You know, the internet is not all blessings, right? It's not all good, but it has its wonderful aspects as well, right? It has its wonderful aspects. And so I think realistically, there will be positive and negative aspects of artificial intelligence. And if we can do this wisely, we can get a lot of the positive benefits. And the positive benefits can be amazing, but we need to take it seriously if we're going to navigate this well. That's Shane Legg, co-founder and chief AGI scientist at Google DeepMind.
Shane Legg, thanks so much. Thank you. Hey, thanks so much for listening to the show this week. Please make sure to click the follow button on your podcast app so you never miss a new episode of the show. And as always, it's free. This episode was produced by Sam Paulson with music composed by Ramtin Arablouei. It was edited by John Isabella with research help from Carla Estevez. Our audio engineer was Sina Lefredo. Our
Our production staff also includes Alex Chung, Casey Herman, Chris Messini, Terry Thompson, J.C. Howard, Malia Agudelo, Neva Grant, and Catherine Seifer. I'm Guy Raz, and you've been listening to How I Built This Lab.
If you like How I Built This, you can listen early and ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts. Prime members can listen ad-free on Amazon Music. Before you go, tell us about yourself by filling out a short survey at wondery.com slash survey.
Hi, I'm Lindsey Graham, the host of Wondery's podcast, American Scandal. We bring to life some of the biggest controversies in U.S. history, events that have shaped who we are as a country and continue to define the American experience. We go behind the scenes looking at devastating financial crimes, like the fraud committed at Enron and Bernie Madoff's Ponzi scheme.
American Scandal also tells marquee stories about American politics. In our latest season, we retrace the greatest corruption scheme in U.S. history as we bring to life the bribes and backroom deals that spawned the Teapot Dome scandal, resulting in the first presidential cabinet member going to prison. Follow American Scandal on the Wondery app or wherever you get your podcasts.
You can binge this season, American Scandal Teapot Dome, early and ad-free right now on Wondery Plus. And after you listen to American Scandal, go deeper and get more to the story with Wondery's other top history podcasts, including American History Tellers, Legacy, and even The Royals.