
A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

2024/6/7

Hard Fork


Transcript

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Kevin, is it true you got fired last week? No, but there was a brief moment where I suspected that I might have been. What happened to you? So I got an email from someone in the tech department at The New York Times basically saying, we're going to need all your equipment back. And then it had a big list of all the equipment that I've sort of gotten from The New York Times over the years. And God knows you have soaked these people for everything they're worth. This man has a Vision Pro and God knows what else. Yeah.

And I wrote back and I said, well, that's going to be hard because I still use this stuff. And the guy wrote back and he said, so wait, you're telling me you're not being off-boarded? And I said, I sure hope not. This would be a weird way to find out.

And then he came back a couple minutes later and he said, oh, sorry, that was a mistake on our end, a clerical error. We don't actually need this stuff back. You know, this is such a great prank and I wish I had pulled it. I should have been in touch with the IT department years ago and just said, I need you to tell Kevin he's being off-boarded.

Because it brought me so much joy. Anyway, very scary 10 minutes. You know, I was a contributor at Vox Media, and they had sent me, you know, a variety of things over the years to appear on podcasts, you know, a microphone, a mic stand, whatever. And at some point, you know, they had sent me a sort of similar thing saying, please send all this back. And I thought,

the amount of time it's going to take me to package all of this stuff up and send it back, like, I absolutely do not want to do this. But lucky for me, my agreement was extended. And then when it finally lapsed, nobody emailed me. And so I still have all the equipment. Wow. And you're admitting this. And I'm admitting this. And let me just say to Vox Media, you can take it from my cold dead hands. Ha ha ha!

I'm Kevin Roose, tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, Canadian Prime Minister Justin Trudeau is here to talk about tech, TikTok, and what he's doing to solve the country's AI brain drain problem. And then, OpenAI whistleblower Daniel Kokotajlo joins us to talk about why he risked almost $2 million in equity to warn us about what he calls the reckless culture inside that company.

Have you ever been to Canada? I have. You have a good experience there? I did. I've had many good experiences in Canada. What do you do in Canada? Well, sometimes I go to Montreal. I've been to jazz festivals there. I've been to Toronto. Niagara Falls is lovely. What about you? Well, I once had a lovely layover in the Montreal airport, and I said, this country's onto something, and I want to learn more. And today, we're actually going to do that.

You know, Kevin, a few weeks ago, we got one of the strangest emails of our entire lives as podcasters. Yes. Where what we assumed was a spam email that purported to come from the office of the prime minister of Canada came in, and it said that the prime minister might want to have a conversation with us. Yes. And this was shocking to us because why would the prime minister of Canada want to come on an American tech podcast?

But it turns out he's been thinking about tech and AI for a very long time. He's a big science fiction nerd. He's also done a lot to try to position Canada as a leader in AI and technology development in general. And we should say like Canada has a very long history with AI. In fact, a lot of the major advances in AI over the past two or three decades have come out of Canada and Canadian universities. So

When we got the offer, we thought about it for a little while, and then we said, yeah, that sounds like actually a really great conversation. I didn't think about it for all that long. I thought it seemed like a good idea. Yeah, so here today, we are going to be talking to Prime Minister Justin Trudeau. He's only the second head of state to ever appear on Hard Fork after Sundar Pichai of Google, which I thought was an interesting statistic. All right, let's bring him in. This is Prime Minister Justin Trudeau.

Prime Minister Trudeau, welcome to Hard Fork. Oh, it's great to be here. If you're wondering why we look like this, it's because I told my friend who is Canadian that we were interviewing you and he sent over a collection of Canadian paraphernalia to put in the studio today to show our respect. So in addition to our hockey jerseys, which we have on, we also have some hats behind us. We have Maple Leafs and Raptors.

I've got a CBC radio belt buckle here. I've got some Canadian money and a bottle of Wayne Gretzky's 99 proof whiskey. Mostly drunk. Yeah. I was going to say, how early did you guys start on that? We've been preparing all night for this. What should I be bracing myself for? So did we miss anything? No.

You missed a little bit of Montreal Canadiens paraphernalia or the Expos, but that's okay. You'll forgive us? I'm enough for that, and I've got stuff in my background on that. We should talk a little bit about how this moment that we're having came about. A few weeks ago, we got an email from a very nice man in your office saying that we might want to have a conversation. Of course, I panicked, and I thought, is this because I stopped watching hockey after the LA Kings lost in the 1993 Stanley Cup Finals? But they tell

us that you've actually been spending a lot of time thinking about tech these days and specifically AI. So maybe we should just start there. Tell us, Prime Minister, how are you thinking about AI right now? Well, it's not that I've been thinking about it right now. I've spent my life thinking about it from reading old Asimov as a kid to studying a few years of engineering to being basically a science nerd and geek most of my life, despite the fact

that I went into English literature and teaching and now politics, I've always been fascinated by the intersection between technology and real life and how we sort of take things for granted, the big transformations from one industrial revolution to the next and how rapidly the pace of change is going. So thinking about that stuff while I'm trying to, you know, build...

fairer, more just, more equal society for everyone is sort of a natural bridge for me. Yeah. One of the reasons we were excited to talk to you about AI today is because I think AI is surprisingly Canadian. Like a lot of the very foundational research in AI from people like Geoffrey Hinton, Yoshua Bengio, sometimes those two get called sort of godfathers of AI, that

came out of Canadian universities. Ilya Sutskever, who just left OpenAI, did a lot of his formative AI research in Toronto. Every time I go to an AI event in San Francisco, I meet a bunch of young Waterloo engineering grads who are working on this stuff. Why do you think Canada has been so excited about AI for so long?

Well, we had an approach, and Jeff Hinton really is at the center of this, where as a country, as a government, we've always done scientific research really, really well. We're not always great at commercializing, at building platforms, at

scaling a great idea up into a world-dominating technology. But so many of the breakthroughs happen because we fund our scientists well. And in AI, through the long AI winter where a whole bunch of countries around the world were chasing different shiny objects, Canadians just kept sort of

powering away, supported by our science ecosystem to be able to get through it in a way that has revolutionized the world. So we're proud of Canada's early role in developing AI. We're right now fighting to make sure we keep our skin in the game, not just because it's so important and competitive,

But because I do feel that the way Canadians approach things is a framework that we need to make sure that is part of AI as it develops. Now, I'm sure that you would love to keep more of the AI talent that you're developing there in the country. Lately, it seems like a lot of the best researchers have come to do their work here in the United States. How do you keep them in Canada?

One of the things we did in our most recent budget is put in about a $2.4 billion compute strategy designed to increase the investments and the accessibility of computing power for small businesses, for researchers, for academics.

Canada does have a few advantages if you're comparing a data center in Canada to one in Texas or in California. Our temperatures being much lower on average means less cooling. Our access to clean, green electricity at lower cost, and reliably, is good. Our data regime, our international connectivity

means that a lot of people are looking at Canada as a smart place and a stable place to invest when certain places around the world are getting a little more uncertain and complicated in their various strategies.

But how do you address the brain drain of the talent specifically? Because a lot of these people, they build some AI stuff or they do some AI research in Canada, and then Google comes to them or Microsoft comes to them or Meta comes to them and offers them a huge pile of money to move to the U.S. or sell your startup to us. And it sort of all ends up kind of enriching the U.S. tech ecosystem instead of the Canadian one. So what about the specific, maybe call it the top couple of hundred people in tech in Canada? How do you keep them in Canada?

Well, a lot of what we're trying to do is make sure that there is better access to capital. We've created opportunities so people can scale up a $10 million company to being a hundred-million or even a billion-dollar company without having to sell out to an existing billion-dollar company. Like, those are things that lots of different places are struggling with.

But I think one of the things that is Canada's advantage comes down to our access to talent from around the world. The fact that it's easier to immigrate to Canada when you have, you know, tech abilities than to just about any other country. I remember talking with some folks in Silicon Valley saying, wow, if you can guarantee me, you know, quick visas for engineers,

I'll keep opening engineering centers in Canada. And we took them up on that. We created a three-week global skills immigration program that has a lot of people coming in and setting up shop. Ultimately, if people are just driven by money, there are places you can make a lot more money than in Canada. But if the

schools your kids are going to, if the community you live in, the quality of life, the opportunity to have so many of the advantages of a developed, advanced nation without some of the inconveniences that come with some other places. Canada still has a very, very strong pitch to the world.

So you mentioned that you had announced a package of $2.4 billion Canadian dollars to fund Canadian AI research labs and get people access to AI compute there. This is a world where the Saudi government might be spending $40 billion on AI soon. Sam Altman's out there saying he might need $7 trillion to build AI chips.

I'm very curious if you're worried at all that some of the liberal democracies are going to get outmatched when it comes to trying to keep a handle on this stuff, if they have enough money to really compete here.

If it's only about money, then I think we have a larger problem. I think one of the things we've seen as different countries are trying to make advances in AI, I mean, take China, for example, where their access to data because of lack of privacy protections for civilians, for individuals and citizens, means that they have massive amounts of data that they can scrub through that is usually pretty good data.

but that's a part of the ingredient. A lot of the other part is, you know, the brilliant minds that can think around corners, that you bring together diverse, creative, well-educated folks who are just pursuing this for the love of creativity and to find out where they're going. That's harder for authoritarian states or money to buy. And even if you purchase the best

academics and researchers around the world and, you know, set them up in a suburb of Dubai, you don't have the kind of dynamism around you that is going to deliver the same kinds of things. That's what I believe. I believe the ecosystem that you're actually living in is as important as the ecosystem that you work in. So we need to make sure we're investing to be

competitive so that someone's not saying, well, either I can starve and stay in Canada or I can go make gazillions by going somewhere else. It should be a little trickier decision than that of, okay, I would make more money, but I

I don't know if I want to be part of that. I don't know if I'd have the same quality of life. And I think that's the pitch that we have to keep making. And quite frankly, it's the kind of thing I'm hearing from a lot of young people these days for whom everything around their life matters as much as the dollar figures they're making in success. That value in life is different, and that's certainly something we're building on in Canada.

I'm curious, you laid out this $2.4 billion package to do what you called securing Canada's AI advantage. I'm very curious what it means to you to have an advantage in AI. Well, I think that catchphrase is partially the fact that we had and have an advantage in that, as you said earlier, we were part of the creation of the foundation of modern AI.

I think the advantages we come at this with are a series of things. First of all, the incredible diversity that's built into Canada, the ability to make sure, I mean, so much talk is about the black box of AI that is going to take on the biases of whoever actually programs a given algorithm, making sure that you do have

a greater range of experiences and diversity, as Canada has, in building those algorithms. And I don't think Canada wants to become the only AI powerhouse in the world, but Canada has been able to develop real expertise in terms of cyber expertise. Our Communications Security Establishment is one of the best in the world. Some of the things we do are of such value to our partners that even though we're not the biggest,

We find these niches where we can be really, really impactful in positive ways. I think that's part of Canada's advantage. But like, what do you want AI to do in Canada? There's such a wide range of predictions about what AI is going to be in five years, 15 years, 20 years. Like, what is your vision for it? Yeah.

I think one of my biggest preoccupations, because we know technology can be positive or negative, is how can we maximize the chance that it actually leads to better outcomes and better lives for everyone? We always see a concentration of new technologies in the pockets of those with the deepest pockets.

And that can go through a period of accentuating wealth inequality and a feeling of being left out or left behind by large swaths of the population. If we can be part of democratizing it, of making sure that people are comfortable with the opportunities and advantages it gives and that it's helping everyone move forward and not just those who own the algorithms or the computing power –

That's what I'd most like. What it's actually going to do, I think it's going to be doing bits of everything in the coming years. But the impacts on society, if it

creates opportunities for people to succeed, if it lifts people out of poverty instead of exacerbating differences between haves, have-nots, and have-yachts, then we're on the right track. I love have-yachts. I've never heard that one before. What's a have-yacht? Is that Canadian? That's if you have a yacht. There's haves and have-nots, and then there's have-yachts. That's good. I like that.

Obviously, one of the biggest questions surrounding AI right now is about jobs and the effect on the labor market of all this new technology. I noticed in your big AI package that you are investing 50 million Canadian dollars into a bunch of efforts to support workers who are impacted by AI. So I'm just curious, how big an effect do you think AI will ultimately have on

on labor markets? Do you think we're going to see millions of jobs disappear in the next few years because AI can do them better? Or do you think people will just kind of shift the mix of tasks that they do at their jobs and that there won't be kind of mass unemployment?

I think ideally it'll be the latter. Obviously, anytime any new technology comes in, it'll be disruptive. People have to learn how to do new things. Some things that people did, you know, won't really be, you know, doable or worth doing anymore or able to do as a job. But

And what AI and technology in general is so good at is replacing some of the things where we're not adding our full value as an individual to a particular task. And I mean, I sit down with my kids. I have two teenagers who are, like every teenager right now, using ChatGPT in their homework assignments. And the conversations that I have with them are amazing.

You know, my daughter explains to me, no, no, I use it to help me with the first draft. And then that frees me up to be able to really make sure that every sentence is mine and that it's saying exactly what I want. And it's adding to my ability there. And that idea in the workplace, I mean, in the federal public service, everyone's starting to use it in different ways, but there are big concerns about how that happens. And I know you guys have had conversations about that

imbalance between a manager and an employee, and that productivity gain and who's going to benefit from it, and are we all going to overload people? These are the places I hope the conversation goes. I have to say, Prime Minister, that your daughter said to you exactly what I would say to my dad if he asked me why I was using ChatGPT, which is, oh, no, no, this is just the first draft.

Trust me. You know, so I think we have a very savvy operator in the household here. Yeah. But that's also reassuring a little as well, that she knows exactly what to say to me as a former teacher to reassure me. It means at least she knows what is right and what is wrong, which is still a step forward. But you are a former teacher. Like, if you were in a classroom right now and you found out that all your students are using ChatGPT to write the first drafts, like, what emotions does that bring up for you? Yeah.

Well, first of all, I was a teacher in exactly the first generation to grapple with Google in the classroom, right? And I was, you know, one of the quick ones to say, okay, let's use this tool. And I used to do projects like have the kids, you know, find a copy of a famous poem, Desiderata, on the internet, and everyone find a different copy, from Mike's homepage to the poetry archive to whatever. And then we'd compare it and see that

there was a comma or a word different in each and every single one. And understanding that, and being thoughtful about the critical thinking skills that go into it. I mean, I regularly laugh because I was a math teacher telling kids, well, don't imagine that you're going to be able to wander around the rest of your life with a calculator in your pocket. And of course, now we all do, right? So embracing technology instead of fighting it and trying to find the human strengths within it,

And what are your advantages? What is it that you bring that is unique and that is your strength to whatever task you're doing? And yeah, if you can sort of automate some of it, you know, typewriters meant that people with great penmanship no longer had a huge advantage over those of us who are messy writers, right? Like there's always been those technologies, but instead you can focus on the content. And that's my hope for that as a teacher and as a leader. Yeah.

Prime Minister, what do you make of the discussion around existential risk in AI? There are some people who are very worried that this technology is going to get more powerful and more intelligent than humans and kind of take over and either disempower or maybe even harm us. And there are other people who think, well, that's all science fiction and we should focus on the risks in the here and now. So what do you make of that debate? Um...

First of all, I think it's an interesting philosophical debate, but I don't think there's much we're going to be able to do now to prevent what's going to happen down the line. I mean, obviously, we need to be setting up the right kinds of parameters and maximizing our chance that good and thoughtful people are engaged in creating with the right kinds of parameters around it.

But there's no question that this technology is going to get more and more powerful. But the idea that we might end up with a sentient computer that decides, if its job is to protect human beings, that the greatest threat to human beings is other human beings, and suddenly makes drastic social change. Yeah, that is sort of dystopian science fiction.

I think there are ways we need to be responsible about how we manage and how we build in those expectations from now on.

But I also think that AI for good is going to be one of the most powerful tools we have to counter the AI for bad that is going to be created by bad guys out there and therefore easing off or saying, okay, well, let's slow this down because we don't understand all the consequences is probably not the right track, even though I can understand how appealing that would be to people. No, let's just figure it out and keep making sure that

that those of us who have what we consider to be positive values and thoughtful approaches that are benevolent or at least looking for positive outcomes,

are fully in the game with every new step of technology. I mean, one of the things that's so different about AI is that this time around, the people who are building it are the ones going around saying, hey, look, this stuff could be really dangerous. I'm curious, have people like Sam Altman, Sundar Pichai, have they been through your office or in meetings with you? And what are they telling you? Are they ringing alarm bells?

I had a great sit down with Geoff Hinton just a few months ago, where absolutely he was standing up, because he likes to stand up during conversations, around exactly how we, you know, where we're going and how we can try and control things. Yoshua Bengio and I were talking recently about

the national security implications of AI and how we can move forward as only governments can on some of these angles. So, yes, I've spoken with folks in the tech industry in the U.S. as well, Alexander, and others about these issues and others. And the intersection between democracy and our incredibly successful, you know,

Western capitalist democracies in creating so many of these things, but are we putting them at risk by undercutting democracy when Meta decides it doesn't want to be in the news business anymore, for example? These are things that I think we need to have continual conversations on. Canada's in a fight right now with Meta because they don't want to support local independent journalism. And that's something that worries me as a foundational building block of democracy. Yeah.

Well, since you brought it up, I would love to ask you about this. You know, I'm all for countries taxing tech platforms. But, you know, in this case, in my view, you sort of broke the open web because you're taxing them just for the right to show a link. And I wonder, do you ever think about just like taxing them on their digital ad revenue or something? First of all, yeah, I know that's the frame that the tech companies would love to say. This is not about a link tax. That's not it at all.

And that's not actually the approach we're taking. I'll point out that Meta has been for many years engaged in conversations with folks about making arrangements with big media companies. What our focus is

on is making sure those are transparent. Make sure those are available to small local journalists as well, because breaking local news really, really matters. And people who can tell local stories, and not just on national platforms, is a core building block of democracy. And the fact that Meta would rather get out of the news business, something they've been trying to do since 2020, by the way, and the fight they're having with Canada right now, they're having once again with Australia. There's this argument

by Meta that it's too complicated to be in the business of supporting rigorous journalism because that's, you know, not as lucrative as cat videos or whatever it is. This is a problem, because

The reason that Facebook and Meta and Google and these companies exist is because countries like ours built systems in which they could benefit from incredible freedoms and incredible innovation. And that is actually at risk as we move down towards a weakening of democracy because of a weakening of journalism. So we're saying, look, if Meta's making billions of dollars in profits,

Some of that needs to help fund local journalism. Well, I suspect we won't come to agreement on this one in the time that we have. But I do want to ask an adjacent question, which is that now that we're in this world of generative AI, you have companies like OpenAI, Google, and some upstarts.

And they're all reading those same Canadian news sites and they're scraping them and they're sort of repurposing them. I would argue plagiarizing them without compensating those publishers. Can you see expanding the approach that you've taken to some of these AI companies and say, hey, if you want to plagiarize journalists with AI, we're going to charge you for that too? I think the larger conversation that that is part of

has to do with putting the onus and the responsibility on some of these mega platforms that wield so much power in people's lives. Governments used to be able to pretty much

protect and control every aspect of people's lives 50 years ago. And it wasn't necessarily for the better. It was just the way it was. We controlled our borders. We controlled what came in, including content and books and things like that. It was horribly abused in some situations, but we at least had the tools.

Governments no longer have the tools to even protect people in what is a huge part of their lives because, you know, they're connecting with some server in Romania or dealing with misinformation from Iran or whatever. We just don't have those tools. So there has to be a conversation about how we build complete

societies that do have responsibilities and that there are people with power, even if it's not political power, power over those spaces to think positively

and responsibly about what it is, whether it's on protecting kids. We just moved forward with online harms legislation that is putting the onus on platforms to say, are you doing things that have the well-being of kids in mind? Now, what platforms decide to do and how they do it and how they justify it, that's an ongoing conversation. We're not dictating to them. But we are saying, yeah, you have to think about

protection of kids. What I want is not for government to legislate what platforms should do or not do, because that's a recipe for disaster. We all know how slow governments end up working. We're always playing catch-up. And, you know, the political debates are not necessarily these days as deep and thoughtful as we would sometimes like them to be. But

Can we put the onus of leadership and responsibility that goes with it increasingly on platforms around journalism, around protection of free speech, but also protection against hate speech? Can we find those balances? Can we make sure that there is a common grounding in facts and not...

not just deep fakes, but the level of misinformation and disinformation that we've seen over the while. These are the things that I think need to be shifted. And I think platforms need to be a positive part of that conversation. One more social media question while we're on the topic. Should Canada ban TikTok? Um...

So many months ago, the Canadian government made the decision that all government phones could not have the TikTok app because it was too much of a security risk in terms of that. As a byproduct of that, the prime minister's family is on government phones. So my

teenagers don't have TikTok anymore, which is lovely for me. They must love that. They are grumbling about my job on this and on a few other things. But I think there's two questions around TikTok. One is the data security issue is real.

And the second more nebulous one is, is this good for kids? Is this good for people to be on social media? And I think those two issues are being conflated a little bit. People are sort of leaping on TikTok as, oh my God, it's connected to China. So, you know, we should ban it and then we should get on to banning, you know, Instagram reels and YouTube shorts and all those things that are so, so, you know, popular for very powerful reasons right now. So, yeah,

I think on the data security side, I mean, the head of the spy agency just came out and said, yeah, it's a real risk that China is accessing so much of Canadians' data through TikTok. And we are looking carefully at how to counter for that.

But to use those tools to sort of say, well, TikTok's also bad for people, so let's ban that. I think that, as satisfying as it might be for some parents and some people and some teachers, quite frankly, that's not necessarily the path we want to take. You mentioned deep fakes as something that exists. I understand you're going to be up for election here in a bit. Have you been deep faked already? And are any of them good?

I know I have been deepfaked many times. One of the ones that made me laugh the hardest was someone did a series of pictures of all the 23 Canadian prime ministers going back 150 years as classic rock, 80s glam rock characters. And I found that very, very funny. Which one were you?

Oh, I know. I had very, very big hair. I see. I see. Got it. Bulging pecs and stuff. It was very nice to see, but it was sort of silly. I have a level of protection around that because I'm prime minister, people sort of expect it, although I did have a yoga instructor friend of mine who admitted, hey, I just spent 300 bucks on that Bitcoin you were promoting online. I'm like,

I don't promote Bitcoin online. You got scammed. How could you fall for that? It was, you know, so that happens. But I think much more about the fact that deep fakes are now impacting, you know, high school students, are impacting people in their work. The accessibility of that technology is something that I don't just worry about in terms of a political context,

it's much larger than that. Yeah. I mean, we're always so sad on the show when anyone buys a Bitcoin, but to know that your own likeness had been used to promote it, that must be especially upsetting. Devastating. Can I ask one follow-up? Yeah. So I had a follow-up on deepfakes, which is that, so far, you know, there have been some deepfakes of you. There have been some funny ones. You're not particularly worried, you know, as it pertains to, you know, maybe this election. And of course, you're the most famous politician in Canada. So people are sort of already broadly aware of you. Um,

One concern that you hear sometimes about deepfakes, though, is that not all politicians are as widely known and that in maybe a local race, a provincial race, one of these things might get a little bit further. So I just wonder, have you seen anything like that? And do you worry about what is sometimes called synthetic media sort of just upsetting politics in general in Canada? Yeah.

Well, I mean, let's be very clear. Even before the arrival of deep fakes and increasingly sophisticated synthetic media, we have seen misinformation, disinformation, polarization, foreign interference by countries like Russia and China and India that are trying to destabilize

our democracy in various ways, and be very, very impactful. Like, some of the fake tweets or the bots that are out there to drum up support or criticize have become more and more sophisticated. So this is sort of a next-level tool. And ultimately the answer to that

is sort of similar to the answer to everything, is to empower citizens to be more discerning, to be more thoughtful, to make sure that local journalists are able to actually cover those races that aren't with national figures, to make sure that there are consequences and frameworks and independent

you know, bodies not associated with government that are weighing in on fact-checking and saying that this does matter. And all those sorts of things, I think, will be more and more urgent to deal with deep fakes. But that might also give an impetus for more people to realize, okay, just because I saw this local councillor explaining that they, you know, would really want to, you know, do something horrible,

that should clue us in that we can't just be taking everything at face value. I'm curious if you turn to AI at all in your own life. Is it part of your work life? Do you use it for fun when you're not working? How have you intersected with these tools? Do my streaming algorithms count? Obviously, sometimes those algorithms work really well for good recommendations.

Myself, I tend not to use it, even though my kids use it. I had a friend of mine who's big into AI who said, well, listen, AI could write you a platform on how Canada should handle AI. I'm like, okay, why don't you do that? Do that and then show it to me. And they showed me two pages of work that they generated in about 10 seconds. And it looked great.

I could see how, if you put that in the window, someone would say, oh, there's an AI strategy for Canada. But as soon as you scratch the surface a little bit, okay, well, why this, and how does that work? You could see it was just a facsimile of a platform, not a real platform. So that reassured me a little bit that there's still a lot of hard thinking that needs to go into solving these problems that are not just cosmetic. Yeah.

Yeah. I know we only have a minute left. One question on national security and AI. Obviously, we could spend an entire podcast just talking about this, but I'm curious, do you think governments need to be building their own AI systems in this day and age, or is it enough to just have the private sector building this stuff and governments regulating it?

No, I think governments need to be using and aware of, you know, the various, you know, most advanced technologies out there, particularly because we know other people or adversaries or, you know, countries that don't wish the best for us will be developing those as well. And there are going to be, you know,

I know how scary it sounds. Oh, government's developing AI. And that's not what I'm talking about. But I am saying that if we're going to make sure we're keeping Canadians safe, like I said earlier, AI for good done by the good people with careful, rigorous oversight and transparency is going to be a very powerful tool to counter the AI for bad that we know is going to be out there. Yeah.

Well, thank you so much for your time, Prime Minister. It's nice to talk to a world leader who is not guilty of 34 felonies. And I'm not saying anything. And if you want, as a parting shot here, I want to offer you a free slogan that you can use for your national AI efforts if you want. This is just an idea.

We put the A in AI. Oh, my God. What are you doing? What do you think? I think that's a real winner. You have a future in politics. I hope you run against me because it'll be really hard. All right. Well, thank you so much, Prime Minister. Great to talk to you. Thank you, guys. Great to see you. When we come back, Kevin gets an OpenAI whistleblower to go on the record about what he saw inside that company that pushed him into leaving it. Woo-hoo!

Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more. I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.

Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

So a big week for you. I heard you were up all night on Sunday. I was, yeah. Well, not all night, literally. That would be a little extreme. You stayed up till 9:30? Yes, way past my bedtime because I was working on this story. Do you ever think about just finishing it during the working day or do you always leave it to the last second? Oh my God.

You know, some of us have deadlines and editors who want things by certain times. We can't just laze about and then, you know, fire off a quick email to our subscribers. Well, that was your first mistake, was taking that job for yourself. But so let's talk about this story, Kevin. Yes. Because it was a very good story. I learned a lot. And it's something that I've wanted to know for a long time, which is, what do the people inside OpenAI actually think about what's going on there?

Yeah, I mean, this has been a big question that I've been pursuing along with a bunch of other reporters ever since the board coup last year at OpenAI. There seems to be a sort of steady drumbeat of intrigue and infighting at OpenAI that we've learned about since the board coup last year. And obviously, we've talked about some of the more recent developments, the

departure of Ilya Sutskever and Jan Leike, two of the most senior safety people at OpenAI, who were concerned about the direction this technology was going. And then I got wind of this group of current and former OpenAI employees who were preparing to make some kind of a statement. So

So this was a story that came about because of a guy named Daniel Kokotajlo, who's a former OpenAI researcher who left the company earlier this year. And he has basically organized this group. You know, you could call these people whistleblowers. You could call them, you know, concerned insiders.

But, you know, whatever you want to call them, this is a group that was inside the company and became concerned about the direction that the company was moving and decided to speak out. And some of those people are still there, right? Yeah.

Yeah. So this group, I call them whistleblowers in my story, whatever you want to call them. It is roughly a dozen or so people who have now signed this public statement calling for changes at the big AI companies. And among those people are some former employees and a few more people who are signing this letter anonymously because they still work at OpenAI. So tell me a little bit about who Daniel is and how he got to this point.

So Daniel is a 31-year-old former researcher at OpenAI. He was part of their governance division. And he's a person who's just spent a long time thinking about risk in AI. So he came up through the sort of effective altruist movement. He was part of this AI research nonprofit called AI Impacts. And then in 2022, he went to work at OpenAI. And his specialty is forecasting. He is a person who has strong views about

when various sort of AI events will unfold. And he has really tried to be rigorous in studying those. And when he got to OpenAI, that was sort of the thing that he was focused on. But then he sort of gradually became worried that the company was not doing enough to promote safety and that it was just sort of racing to build shiny products and get them out into the world.

And we'll ask him all about that. But in the meantime, this group that he helped bring together to raise their concerns, what do they think and what are they asking for? So the open letter is primarily concerned with what they call the right to warn. Basically, they want OpenAI and other leading AI companies to commit to

not only creating internal ways for employees to raise concerns about safety, but they also want stronger protections for whistleblowers or other people who voice those concerns to maybe outside groups or regulators. Right. OpenAI has this whistleblower hotline that you could call, but it just goes to someone at OpenAI. My understanding is they want this hotline to actually go to a regulator or some external authority who can help.

Yes, and they are not just focused on OpenAI, they want these policies to be implemented all across the AI industry. Is this a thing where these whistleblowers have given us a really detailed list of their complaints and helped us understand exactly what it is they're seeing inside these labs that is making them so nervous?

Yes and no. So in my conversations with Daniel and other members of this group, they have been pretty light on specifics when it comes to incidents that made them concerned about safety at OpenAI. And in part, that's because they believe they are still bound by these confidentiality agreements that they have signed with OpenAI.

So they're not getting into the kind of nitty-gritty inside information, but they did provide some specifics, and they provided a lot more sort of generalities about kinds of changes at OpenAI and messages from leadership that they believed signaled that this company was not taking safety seriously enough. So what did OpenAI say, Kevin, when you asked them about all this? Yeah.

So a spokeswoman for OpenAI sent me a statement that said, quote, we're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and will continue to engage with governments, civil society, and other communities around the world. They also pointed out that OpenAI does have an employee hotline for integrity issues and

And they also said that they have started this new safety committee that will be reviewing feedback from these former employees as part of their sort of assessment of safety culture at OpenAI. Well, I'm excited to talk to Daniel about all of this, Kevin, because it plays into something I've been thinking a lot about this week, which is privacy.

and who wields power in OpenAI. So I imagine you saw this story in the Wall Street Journal about Sam Altman's investments this week. Yes. So the journal had this great piece. It revealed that Altman has more than 400 investments. Increasingly, it seems like companies in his portfolio are now signing deals with OpenAI for various things. And Sam has equity in those companies, which means that as those companies grow, he will get rich.

But as I've been thinking about this and talking about it with people, Kevin, I don't actually think that money is probably what this is about for Sam Altman, right? Because this is a person who is already rich. But if he has a piece of all the biggest and most important companies in AI, then if you fast forward to the time period that Daniel's worried about, when AGI is arriving, one person is going to have a staggering amount of power in that universe, right? He will be the CEO of OpenAI, and he will be a major shareholder of

many of the most important companies in the space. Why do I bring this up? In that kind of world, you want to know who this person is and what governance structures are around him to rein him in. Because right now it does seem like we're on a trajectory for Sam Altman to be one of the most powerful people in the world. Totally. I think that raises one of the questions that I had when I first heard about Daniel and this group of OpenAI insiders, which is, why are these people coming forward now?

And I think this group really just feels like we are getting to a point where this technology is legitimately going to be dangerous. Now, there are people who disagree with that, who think we're nowhere near that. But these people are convinced that we are getting to an important inflection point, not just in the history of AI, but sort of in the history of humanity as we approach these generally intelligent systems.

So for that reason, it's becoming more important. You know, five years ago, 10 years ago, it may not have mattered that Sam Altman had, you know, a hand in many of the hottest startups in Silicon Valley. It may not have mattered that, you know, AI was being sort of recklessly developed. But these people feel like as we get closer to a moment where the entire world is going to start caring deeply about the progress of AI,

it actually becomes much more important to trust the people and companies that are building it. Yeah. Now, I do feel like we should say there are still a lot of people out here who think that the folks involved here are being ridiculous. They think that they are afraid of a ghost. They think there is no way that these systems, as they are currently designed, will ever amount to anything half as powerful as what is being discussed.

And I go back and forth on this all the time, sort of depending on who is the last person I have talked to, right?

But Kevin, this week is a week where I am taking these people seriously, okay? Because when you do what Daniel and others invite us to do, which is to just look at the trend lines of how this stuff is developing, and you look at how much more powerful it gets every time you add a little bit more compute, you can actually just extrapolate forward in a pretty linear way and say, you know what? The future might indeed look like the past five years where this stuff just keeps getting exponentially better.

And I think if you believe that there is even a chance that that is true, then you do want to pay at least some attention to what folks like Daniel are saying. Absolutely. Yeah, and I think, you know, we've seen this kind of story play out before, like with social media, right? There were these companies, they had these grandiose missions of connecting the whole world. They hired all these people to make those visions a reality, including some pretty idealistic people who wanted the companies to live up to their stated goals.

And when those companies started, you know, making decisions, cutting corners, putting maybe growth and profits ahead of things like trust and safety, we saw people start to speak out about that. And so in some ways, this is a very familiar cycle that we've just seen play out with social media that I think we're just starting to see play out with AI. Yeah. Well, I think it's going to be a great discussion and we should bring Daniel in here. When we come back, we're going to talk with Daniel Kokotajlo, formerly of OpenAI.

Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more. Daniel Kokotajlo, welcome to Hard Fork.

Thanks for having me. Excited to be here. So let's start with your experience at OpenAI. You joined the company in 2022 as a researcher in the governance division. You were working on things like predicting the path of AI progress. At what point did you start to worry that safety was being deprioritized? It sort of crept up over time. I think the core thing here is that

It's more important to prioritize good governance and safety the closer you get to incredibly powerful and dangerous AI systems. And when I joined early on, I thought, like, we're talking a good talk. There's a bunch of people here who really care about this stuff. And we're sort of leading the industry, in fact, in terms of our, like, willingness to talk about these things. And I

basically, gradually lost hope that we were really actually going to pivot more to investing in these things to get ready, at least not to nearly the extent that we need to. You told me when we spoke earlier about some advice that you got after you joined OpenAI, basically from someone telling you, don't rock the boat, don't come here and expect to change the place. Tell me about that advice and what that meant to you at the time.

I mean, to be honest, it was a relief to hear that advice because I don't want to be doing that sort of stuff. Like I'm a researcher. I really like trying to predict the future of AI.

By contrast, engaging in internal company politics and trying to argue with people to change their policies is super stressful and not my cup of tea. So actually, when I heard that, I was like, great, I didn't want to do that anyway. I'm going to keep my head down and do my research. Yeah. Do you want to walk us through incidents that you observed that you found the most disturbing? Yeah, so this is tricky because...

I'm still bound by this confidentiality stuff. So there's a lot of stuff I would like to talk about, but I can't talk about it. I mean, I can talk about some of the things that are already public, such as the DSB thing. Yeah, give us that story. So this is, I think, in late 2022. And we had this internal self-governance structure set up called the Deployment Safety Board.

And this board had some people from Microsoft on it and some people from OpenAI on it. And it was supposed to deliberate and then eventually approve major deployments on the GPT-4 scale or bigger. So it had been set up prior to reaching the GPT-4 scale, but it was set up with the expectation that once we hit that level and thereon, it would be something that would gate deployments. You'd have to get approval from the DSB.

While the DSB was still deliberating about this, me and various other people started hearing rumors that Microsoft had just gone and deployed it anyway, specifically in India. So we had some conversations about this. We made inquiries to managers and so forth. I don't want to get into too much detail about who said what to whom and so forth, but I came away disappointed because it seemed like we had this self-governance structure in place.

And there was a violation of the structure. And rather than holding Microsoft accountable for doing this, we were afraid to damage our relationship with Microsoft, given the sensitivities involved. You know, they have all this compute. We're trying to be partners with them and so forth. And...

You know, what happened in India wasn't that big of a deal. GPT-4 isn't actually that dangerous or scary, but I think this is an ominous sign about the ability of tech companies to self-regulate like this. Companies can make all these commitments and have all these structures, but then they can be violated and the rest of the world can just not know about it.

Kevin, you actually verified with Microsoft that this happened, is that right? Yes. It was an interesting chain of events. So when I was getting ready to publish the story, I went to Microsoft, I said, "Hey, there's this claim being made that you all were testing this GPT-4 based model in India without approval of this safety board." Their first response was they denied it. They said, "This never happened. We never use GPT-4 in India in these tests."

And then after the story was published, I got a note saying, actually, that was wrong. A version of the model that eventually became GPT-4 was included in a small test in India, and it hadn't been reviewed by the DSB, although they say it was reviewed and approved later. So essentially saying our denial was wrong and these allegations were correct.

Correct. And so, Daniel, I think you're making a really interesting point here, because what you just said, you know, I would agree with, which is GPT-4 is not that scary. But for that very reason, it seems like it should not be a big deal to take it before this board that has been set up for this exact purpose and get approval to launch something. And I guess the question is, well, if you're not going to even do this for GPT-4, which we can all agree isn't that scary, when are you actually going to start doing these reviews? I mean, was that what you were feeling? Right.

Well, I mean, in OpenAI's defense, they are doing more stuff now. Like the DSB did, in fact, eventually approve GPT-4 deployment. So I do think there is some progress in that direction. The point that I'm trying to make is that we shouldn't have to trust the company to keep their own commitments. We shouldn't have to trust any of these companies to keep their own commitments. There should be some way to tell whether the commitments are being followed.

I know you say you're bound by these confidentiality agreements, but I really want to press for just as many specifics as you can offer, because I think that will help people understand how you reached the conclusion that the company was being reckless, that they were not taking safety seriously enough. So anything more you can share, any incidents or conversations or things that maybe convinced you that there was a problem here? I know you want me to give more concrete details, but I just can't. I'm sorry. I will say...

I can talk about some things that have already been discussed, but I'm going to try to avoid violating confidentiality. Sure. Well, so what else is out there that is sort of publicly known that maybe contributed to your sense that the company was acting recklessly? Yeah. So there's been this story about accelerating chip production globally.

This flies in the face of some of the arguments that some of us at least have been making for why what we were doing was good. This is the story, just so I'm clear, about Sam Altman trying to go out and raise a bunch of money, perhaps as much as $7 trillion to build a whole bunch of the chips that you need to train powerful AI models. Is that the chip scaling story? That's exactly right. And again, I don't know all the details and the details I do know I'm not supposed to talk about, but...

You know, that was disappointing to me because one of the arguments that had circulated at OpenAI prior to that was this hardware overhang argument. The idea was, yeah, sure, maybe the world is not ready for AGI yet. We have to improve our governance and regulation of this technology.

But, you know, AGI isn't a single binary thing. It's not like you go from zero to one, nothing to AGI. It's rather going to be this intense period of rapid gains across all sorts of capability benchmarks. We want that intense period to be less intense. You know, we want it to be smeared out more and for the rapid capability gains to be spread out over months or years rather than over weeks or months. Right.

And so the argument was, if we don't build AGI now, if we try to hold off or delay or pause, all of this compute is going to build up in the world. And so when someone finally violates the agreement to pause, they're going to just have all this massive amount of compute and they can just

scale up by orders of magnitude really quickly, blow through all those capability milestones really quickly. So this is the hardware overhang argument for why actually slowing down would be a bad idea. And we need to build AGI as fast as possible so that the transition from pre-AGI to post-AGI is slower. And then now, it seems that Sam Altman is going to just make there be more compute in the world, which directly undercuts that argument at least.

I'm very curious, Daniel, what the board drama at OpenAI last year when Sam Altman was fired and then brought back and the board was reconstituted, what that felt like to you from the inside as someone who presumably was already slightly worried about how the company was treating safety and then to have this whole big corporate governance blow up. I'm just curious what that felt like from your perspective.

I mean, the short answer is it was devastating for me. Prior to the board crisis, I still had hope that we were going to get all that stuff in place. We would have, you know, pivoted to investing a lot more in alignment research and, you know, engaged in some sort of internal governance or self-governance stuff that made us more democratically accountable and transparent. But, you know, after Ilya Sutskever and Helen Toner and Tasha, after that whole fight, it was like, oh man, things have sort of polarized. Like,

I don't think that we're going to like talk it out and pivot. And then also just sort of on a personal level, I think there was a lot of anger directed at safety people such as myself, because the perception was that the old board was being dishonest and that they didn't really fire Sam because he was insufficiently candid towards them, but rather they fired him because they wanted to slow down and he didn't, and they needed an excuse.

So I think that as a result, there was this sort of like, you know, fuck you, safety people. That's not a fair move, you know? And I think that's false, by the way. Like, I don't think... I think that it really was just that they didn't trust Sam. And I think they probably had good reasons, although I wish they would say more about that. Did you take a side during this? Like, who did you sort of find yourself aligning with when that crisis happened? So...

This is something that's been maybe a sort of formative experience for me. Like when the news hit on Friday that this was happening, I got scared and I basically went and hid for the weekend. So I just went and spent time with my family. And then towards the end, there was like the double crisis where the board appointed Emmett Shear and Ilya went into the office to talk about it. And people flipped out on Slack. And, you know, there's just this huge uproar.

And then I did do something. I basically just piped up a little bit and said, like, I tentatively support Ilya. I trust his judgment. I want to hear what he has to say. It was just a very stressful experience, basically. It really sounds like after this board crisis and, you know, coupled with a couple of those events that you just mentioned, you had the sense of this company is not for me anymore. It is not the company that I thought I was signing up to work for. Does that seem fair?

Yeah. Yeah, I think that's fair. I mean, but it's really not about me. Like, this wouldn't be such a big deal if AI capabilities were going to be fixed at roughly GPT-4 level for the next five years or for the next 10 years.

And then on your way out, something unusual happens. And this is how I actually saw your name pop up for the first time in all of this, which is that OpenAI has this standard sort of off-boarding paperwork that when you leave the company, you're asked to sign and there's a non-disclosure part of it, there's a non-disparagement part of it, or at least there was.

And you did something unusual, which is that you refused to sign that paperwork, even though per the agreements you signed when you joined OpenAI, doing so might cause you to lose out on all of your vested equity, all of your sort of wealth that you had accumulated. And my understanding is that was a significant amount of money, about $1.7 million.

So you were basically refusing to sign this paperwork that would have guaranteed you a large sum of money in exchange for the right to speak out about the company. So walk me through that decision. The way it works is around the time you join the company, you sign something that says your vested equity goes away if you don't sign a general release when you leave the company within 60 days.

And then when you leave the company, they give you the general release and it has all this stuff in it, including non-disparagement and including the fact that you can't talk about this release. At some point, I think I deliberated enough. I discussed this with my wife. She's been really helpful throughout this whole thing. I think this is the sort of decision that we would have to make as a family because obviously it affects both of us so much.

And we were just like, no, we're not going to sign this. I don't think it's ethical. So I wrote this email to OpenAI saying, I don't think this is fair. I want to be able to criticize the company in the future. And so I cannot sign this. And then they were like, okay, we wish you the best on your future endeavors. And I was like, okay, well then that's it, I guess. After 60 days, the equity is going to be gone. And so I started trying to move on with my life

and get back to my research and try to figure out what my next career step was going to be. Right. Just to clarify, you discussed this with your wife because this was not a small amount of money for you guys, right? She doesn't work at OpenAI. Oh, yeah. No. First of all, it's hard to actually say how much this is worth. What I did is I just asked them how much would this sell at the last tender? They said a certain amount and then I just multiplied that by the amount that I had and got $1.725 million.

And that's like 85% of our net worth at the time. Wow. And then after you refused to sign this paperwork, Vox reported on the existence of this sort of general agreement that all departing OpenAI employees were being forced to sign, which included this non-disparagement agreement and

the sort of threat of maybe clawing back some of this vested equity. And the company actually responded to this. They basically said, we've never enforced this on former employees. We've never clawed back vested equity. We don't plan to. And Sam Altman said that he was genuinely embarrassed that he didn't know about the existence of this agreement and the terms of it. I guess I'm just curious if you believe that. I mean, I do find it hard to believe, but I will say,

for my part, I'm pleased that this happened. Like, I think if you had asked me like a month ago or whatever, before the Vox story, what I thought would happen, I would have said OpenAI would be like, we're a corporation. This is just like our policies. What's the big deal? But actually like a lot of people inside OpenAI as well as outside were pretty upset about this, which was gratifying. And then also, you know, Sam apologized and it's going to change. And that's actually what leads to today and the proposal.

Yeah, let's talk about that. So you leave OpenAI earlier this year, and then you sort of start putting together this group of current and former employees who share some of these same concerns. How did this whistleblower group, if you want to call it that, come together? Yeah, so...

After the Vox thing happened and this all started blowing up, a ton of people started reaching out to me, offering thanks and support and so forth. So this lawyer, Larry Lessig, offered pro bono services. And so I engaged with him. Various people were reaching out to me. And I was saying, if you feel similarly to me, like, here's a good lawyer, you should get in touch, you know? And so this group sort of congealed. And this core group of us met up and discussed what to do. And then we were like, how about this? How about we...

we meet them with a positive proposal for what good policies would look like. And the core idea in this letter is, hey, let us talk about the risks that we believe the companies we worked at are creating related to artificial intelligence. Yeah, that's right. So it's actually kind of complicated. There's four points that we're proposing. So principle one is basically don't do the sort of thing that OpenAI just did with the silencing of criticism and the equity clawbacks and stuff like that.

Principle two is this, like, what would the ideal system look like? The ideal system would be an anonymous reporting hotline so that employees at these labs that are building this incredibly powerful technology can communicate with the people who need to know. People like regulators and independent watchdog groups with relevant expertise. Principle three and four are sort of saying...

You should promote a culture of, you know, open criticism. Three is basically saying employees who are raising serious concerns about various risks should be able to talk to the public about confidential information in the course of doing so, but only insofar as it supports the risk-related concern. And then principle four says we can actually walk that back

if principle two is working really well. If we have our good system in place where all the people who need to know can be informed, then we can walk back the thing about releasing confidential information and we can just be like, nope, you go back to silence, you know? So that's the overall proposal.

So that seems like a pretty reasonable set of requests to me. But there is this thing that I'm struggling with, Daniel, which is that, you know, you were mentioning earlier that the board didn't share a lot of details back when it fired Sam. And this wound up sort of coming back to bite people who are really concerned about safety issues, because it sort of felt like there was no supporting evidence, right, to back up some of the claims that were being made.

And, you know, as I'm hearing your story, the only thing that I have to go on is that Microsoft tested a thing, you know, without running it by a board. So I just, my question is essentially like, why should I believe that things are as dangerous as you're leading me to believe when no one involved is willing to really tell me anything of much substance? Yeah, good question. I think in my case,

I'm not trying to call out OpenAI in particular. I mean, I do have my disagreements with and criticisms of the company, but honestly, I think that probably if I went to these other companies, I would feel pretty similarly. So I don't want to be like specifically saying OpenAI is really bad and that's why we need these policies. Instead, I want to say this policy would be good and I can justify this without...

giving specific details without violating confidentiality. Even the stuff that's already public, I think, is enough evidence to justify this policy. Yeah. Don't you feel like you'd build more momentum, though, if at least someone somewhere was willing to say, hey, you know, it turns out that OpenAI has GPT-6, like, you know, sitting over there in a vat or, you know, whatever it is that's going to raise our hackles. Like, I'm just curious how you think you can advance this movement when it seems like we're just sort of like stuck at this very high level of abstraction. I mean, yeah.

OpenAI doesn't have GPT-6 in a vat. Okay, sweet. Thank God. I revealed some information there. I mean, this is an alternative strategy we could have taken. We could have been like, now is not the time. Wait until it's like right on the brink of catastrophe with terrible things right about to happen. And then, you know, blow the whistle and stuff like that. And...

I think that's just cutting it too close. For one thing, AGI, by definition, can automate a lot of work. So that means that the size of the silo of humans that you need to be in this inner circle who knows about it can be pretty small. So you could end up in a situation where there's like this secret project within the company that has access to the latest model and is using it to make even more powerful models and is automating a ton of stuff. And they can be actually doing quite a lot of research really fast because most of the work is being done by the AIs.

And there's only a small group of humans who even know about this. And that's one of the reasons why I think it's dangerous to wait until then and plan to whistleblow right when you're right on the brink. I mean, I'm empathetic in the sense that journalists wind up being in this position a lot, right? Like, we constantly walk around ringing alarm bells saying, hey, this bad thing could happen. And if we do our jobs right, someone intervenes to stop the bad thing from happening. And then that winds up often making us look foolish because the bad thing never happened, right? So we're sort of the boys who cried wolf.

At the same time, I do think that this movement, like this contingent of safety folks needs to find a more persuasive story to tell. Because I think that right now it's not connecting with people because they use chat GPT and they have trouble understanding how this is going to end the world. That reminds me of the other thing I was going to say, which is that this whole issue about like, is AGI coming soon? Is AGI risk a real thing?

is like 90% not about what's happening in the labs in secret. I think that publicly available information about the capabilities of these models and the progress that's been happening over the last four years is already enough to make extrapolations and say, holy cow, it seems like we could get to AGI in 2027 or so or sooner.

you don't need any secrets to be able to make that inference. And then for the second thing about like, is this a big deal? Is this really dangerous? Again, you don't need any specific secret. I did hear from, you know, a lot of people when this article came out about you and this whistleblower group who thought, you know, these people are taking a brave stand. I did also hear from people who had more critical views of this. And I just want to let you respond to some of these.

One thing that really stood out to a lot of people who read this article is about you and your personal predictions about the trajectory of AI. So one of the things that you told me that was in the article is that you believe that AGI, which you define basically as an AI system that can sort of do any kind of job, like anything that would be economically valuable, that that could arrive as soon as 2027, so three years from now, which is a very short timeline.

You also have a P-Doom, which is the probability that AI will catastrophically harm humanity.

of 70%, which is among the highest I've ever heard from someone who has directly worked at one of the big AI companies. So people are sort of citing these figures and saying, well, Daniel, he's not like a sort of normal run-of-the-mill AI researcher. He is a doomer. He is someone who believes that this stuff is going to kill us all, and we shouldn't take him seriously as a result of that. So I'm curious what you make of that. Yeah. So, I mean, first of all, I will

totally admit that my P-Doom is higher than most people's. I want to inject a note of humility here. Like the P-Doom thing is more of a vibe than a serious number because it's an inherently...

unpredictable sort of situation. I think that I feel much more confident about my timelines. I think this is something that I have expertise in. I was hired at OpenAI to try to forecast the future of AI, and I spent years trying to do that. And I think there's now lots of evidence that you can draw from, scaling laws, trends on benchmarks and so forth. So I do feel somewhat confident in my claim that AGI could happen in the next few years. But I also could say that like

There's a bunch of other people who came together to make this proposal, and I bet they would all have much more like normal opinions about P-Doom. And they still are concerned and think this letter is a great idea and are advocating for it. So I think that this is something that hopefully a lot of people can get behind. I'm curious if you've heard from anyone at OpenAI since this letter was published this week. Yeah. Yeah, a few people. What did they say?

Some people reached out being like, you go, thank you for doing this. It's very brave. Some people reached out being like, I feel a little bit betrayed. Like I thought we were going to work together to get better policies, but now you've been critical of OpenAI and you're making it seem like it's all our fault. But like other companies are probably just as bad. Well, the second group just sounds like they're whining and they need to get over it. Yeah.

I mean, I'm happy to say other companies are probably just as bad. Like, I think this is a systemic problem, you know? But I actually think it's a revealing point, because I think that there are probably people inside OpenAI who think, well, because there's another, worse company somewhere, that justifies some amount of the actions that we're taking or some amount of the flaws that we have. And, you know, I can understand why you would have that point of view, but that's really not what you're saying. You're saying a very dangerous thing is somewhat imminent and people working on it need to be able to discuss that openly with the public.

Well, I'm definitely not going to tell you who it was who was saying these things, but it was more reasonable than you make it sound, I think. Got it. So what comes next for you?

I want to get out of the media spotlight. I think this has all been very stressful for me. Yeah, so like, technically I'm supposed to be on vacation right now. This is my parents' house. So mostly we just need to get Kevin to leave you alone, it sounds like. Yeah. Well, I hope you do actually get some vacation on top of your big week.

I also am just really grateful for the time and the opportunity to speak with you. And I hope this continues a conversation that I think is fairly overdue. Thank you. And likewise. Thanks, Daniel. Thanks, Daniel.

Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.

Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, Jeffrey Miranda, and thanks to Alex Goldberg for all the Canadian gear that he lent us that we're not giving back. You can email us at hardfork@nytimes.com. Maybe it's time to blow the whistle at your company. Think about it. Come on. What do you got to lose? What do you got to lose? What do you got to lose?