
A.I. Doomsday with Tristan Harris

2023/5/25

On with Kara Swisher

Chapters

Tristan Harris discusses his background in tech and social psychology, and how he became concerned about the manipulative tactics used in the attention economy, leading him to advocate for ethical design within tech companies.

Shownotes Transcript

On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Rauw Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.

It's on!

Hi, everyone, from New York Magazine and the Vox Media Podcast Network. I'm Ron DeSantis with a face for Twitter Spaces. Just kidding. This is On with Kara Swisher, and I'm Kara Swisher. And I'm Nayeema Raza. That's not so nice, Kara. Don't we have faces for radio? Guess what? I wasn't meaning to be nice. Oh.

So mean. Anyways, we're not going to focus on Elon today or on Ron DeSantis. We have much bigger things to talk about, like artificial intelligence. But explain very quickly what's going on with this Twitter proclamation, his announcement. It's interesting that they're doing this. Of course, they're trying to pretend the media is losing its mind. It isn't really. It's interesting. Ron DeSantis doesn't want to announce on mainstream media that he's running. He wants to announce with Elon and David Sacks.

That's fine. Whatever. Good luck, guys. Good luck making media. They want to make it into like media is so mad, and they're not. It's an interesting stunt. We'll see if it works. They've been doing presidential announcements the same way forever. This is good for Elon Musk only because he's trying to build, as we talked about extensively, a media company. That's his effort. And good luck. Good luck. Media is hard, and it's not very lucrative. He's got Tucker Carlson. He's got Ron DeSantis. Yeah. Got David Sacks.

He doesn't have us. No. But anyways, you're off to give a commencement speech today at Cooper Union. Are you ready to drop the wisdom on young people, Kara? I was trying hard not to do the drop wisdom thing because I know my own kids would be like, Mom, no, tell us what to do. Are you going to announce a presidential bid at Cooper Union? Yes. That's how I'm doing it in front of the students at Cooper Union. That's a good idea. Yeah.

No, I just, I want to just talk about tech because they invited me because I'm a tech person. So where it's going and a little bit about AI and generative AI and where it's going, because this will affect these students. They're focused on this kind of stuff and it will impact them.

I'm sure this is going to scare them. So you're basically telling them they have no jobs when they graduate. They're going to be worse off than millennials? No, I talk about their responsibility to monitor this the way we have not monitored early social media and early internet. And they have a responsibility to figure out what works best. And it's in their hands. So that's what I'm saying. For this generation, I think it'll be about...

Asking good questions, giving good instructions, being very attentive to detail, actually, because that's such a skill that will define your ability to interact with and use AI. Yeah. But no commencement yellow pages of AI-protected jobs. I don't think they want to hear that. Like, okay, you're leaving. Good luck. You're fucked. I don't think that's really what I'd want to hear. Good luck, you old lady. Fuck.

Fuck you, old lady. What are you doing? That's not my goal. I want to leave them with some level of hope, but also responsibility. Very sweet. They could point to someone, you, you're out of work. You, you're out of work. You, you're, I don't think that's real. You don't have a job. You don't get a job. You don't get a job. You're such an anti-Oprah, Kara. I'm not going to do that. No, I'm not going to do that. I'm 100% not going to do that. Yeah, why would I do that to people when they're happy? But we're joking, but the topic of AI and what it means for future livelihoods is obviously a very critical topic. It came up during Sam Altman's

congressional testimony. Senators like Richard Blumenthal noted that AI eating jobs is their worst nightmare. And they asked Sam Altman about this. Let's play a clip. There will be an impact on jobs. We try to be very clear about that. And I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that.

But I'm very optimistic about how great the jobs of the future will be. Well, that was a full answer of everything. Everything will work or not. Everything will work. And if it doesn't work, we'll be in partnership, except it'll really be on you. Yeah, exactly. That's the answer. You know, that's a good answer. Well done, Sam. I mean, one thing about the testimony, don't say he didn't warn people. No one said Facebook's going to cause an insurrection unless you stop it, right? That's the kind of thing. So I appreciated that, but that was a

He should run for office. That's a very deft, yeah, political answer. Well done. Well done. Political answers to the politicians. A lot of action will be needed. According to one report by Goldman Sachs, as many as 300 million jobs worldwide could be automated and replaced by AI. I imagine the number could be much higher.

How do you see this playing out and what do you think will need to be done? You know, Scott always says it's overstated in the short run and understated in the long run. I think ultimately people will shift and it makes sense. A lot of jobs are rote and silly and shouldn't be there, right? And work itself is changing as we've talked about a lot so drastically, you know.

I think it's like 60 million hours people are saving not commuting. Like, what do you do with that time? It's a time for, it's an opportunity for creativity. And the government's got to be there to help figure it out along the way, because that's why we elected the government. And of course, the government is non-functional right now with the debt ceiling crisis. But I'm not quite as worried about the jobs thing as others, but I do think jobs will be affected. Like, I don't want to sound like Sam Altman, but it's true. I think in the long run also, in a nation where social safety nets have been

so fraught and so politicized and so often dismissed as communist or socialist, AI might be the jumpstart we need to actually invest in some of these welfare programs, like healthcare for all or job training or even universal basic income. Because if you look forward, it might be very hard. And things like job training might become easier with AI, actually. But I also think we should ask the question of what's the default assumption on who owns the

Right.

Mary Gray, a great MacArthur Fellow, talks about this, but kind of the default assumption of who owns your work is also given to corporations, which may not be the case. Once again, they're plumbing our stuff to sell back to us. That's what the internet was, right? And we made the internet, and the internet is us, and then they sell it back to us. Yeah, we are the product, we are the manufacturer, and we are the customer. This is really not a good deal, guys. Yeah.

Yeah, yeah. Anyway, Sam got a lot of praise last week during his testimony for acknowledging the dangers of AI, and you just gave him some praise right now. He advocated for regulation. He, you know, used the word dangers, but he kind of punted to the government on the solution in many cases. And then earlier this week, after the testimony, he and other leaders of OpenAI followed up with this kind of lengthy report on possible regulation where they described what they see as out of scope, too burdensome. It's basically like regulate us, but just not too much. Yeah.

Well, there is a balance, of course, as there should be when you're doing any kind of legislation. But I think this is going to be a global thing. I think Europe will be involved. Yeah, they're already way ahead. They're like a two-year head start on us. Yeah, exactly. So in this case, I think it's because there's such important questions and people have seen what happened when it's, you know...

It's already damaging enough that we have some history here – whether it's the insurrection, not to completely blame social media, of course – but we've seen what can happen. And so if you understand that it's that times

infinity, it's really important. And I think people do, I do get a sense from, I've been interviewing a lot of elected officials recently, and I think they do get it. They were so enthusiastic about the internet. They're more like, hmm, about this, which I think is a good thing. It's really hard to talk about AI as distinct from social media because it's also built on top of that kind of broken and poorly regulated layer. So we saw this earlier this week with this AI-generated deepfake of the Pentagon building on fire. Mm-hmm.

And then what happens when fake information meets platforms that aren't good at dealing with disinformation? Or, say, perhaps where a president should announce his election. Yes. Yeah, a lot could go wrong. Twitter didn't stop it. But, you know, I think there's one guy in, like, Singapore in charge of that. Like, probably. Who knows?

And he's part-time. It's probably his cousin. Exactly. But there was a photo of the Pentagon building on fire that circulated first on Facebook and then more powerfully on Twitter because it was circulated by a Blue Check account called

Bloomberg feed, which has no relationship to Bloomberg News. Well, also more powerfully because Facebook took it down right away and Twitter didn't. That's really why it was more powerful. Because the guy in Singapore was sleeping because of the time difference. Yeah, I guess. Yeah, Cousin Greg was busy figuring out who's going to take over at Waystar right now. Go, Jo. Who is? You know. Don't tell us. I do. We don't want spoilers on this show. We

We get very many emails when we spoil things. I shall not. But back to this. We're going to ask our guest Tristan Harris in a minute, but I want to know, Kara, do you see Sam and Sundar as different from the social media CEOs of yesteryear who

Oh, yes. Who said, regulate us, and then knew government wouldn't really regulate them, especially if they threw money at them? Yes. Maybe not Sundar. Sundar's from the old school. He has a personality that's very calming. But I do. I do. I think they're not sunshine and roses, which is a very big difference. And they were sunshine and roses. They were. This is going to be great. We're

Arab Spring, dingity-ding, that kind of stuff. So I think, yes, they are more honest. Yes, thank you. Thank you. Well, they have to be because also the world has wised up. Even the folks on the Capitol have wised up. This is not Mark Zuckerberg's 2018 testimony. No. I mean, they have to. I mean, they'd look ridiculous if they said, this is all going to be great. It's killing us. I don't think that would be a good...

Well, someone who might say that is Tristan Harris. No, I'm just kidding. He wouldn't say this is all going to be great. He'd say, this is not going to be great. This is not going to be great. Our guest today, Tristan Harris. He's the former Googler who co-founded the Center for Humane Technology with Aza Raskin and others. And he came up as a tech ethicist who rose to kind of more mainstream prominence because of the Netflix documentary,

The Social Dilemma. I met him a long time ago when he had just started talking about this. And I think I did one of the first interviews with him about it on my previous podcast, Recode Decode. Six, seven years ago, right? Yeah, yeah, exactly. And so I thought what he was saying made a lot of sense. And it was my experience, what he was talking about inside. I was outside of these companies, but we had talked about a lot of things that, of course, all came true. And we both had the same concerns. We were starting to see real changes

early on in social media, like, hmm, this seems problematic, this seems problematic. And I just saw him recently in D.C., where he gave what was initially a confidential presentation to lawmakers about what was happening with generative AI. It was a packed room, and people were – I was not gobsmacked, but a lot of people there were –

But you texted after and you were kind of in a little bit of awe slash... Yes, I thought he would do it again. I thought he did it again. And so did Aza Raskin, who partners with him at the Center for Humane Technology. Look, a lot of people have been early to this. You know, Joy Buolamwini was there early. Timnit Gebru. Timnit Gebru, of course, ousted from Google. Yeah. Kate Crawford. There's dozens of people.

Again, many of whom are women, which is, of course, they can see safety issues better than men can. They just can. So competitive advantage is being a little unsafe. Yeah, a little less safe. And so there's been a lot of people here along with him. He just happens to also have been there early, too. Yeah. He's a bit of a Cassandra. Yes, so am I. Yes, you are. So let's take a quick break and we'll be back with the meeting of the Cassandras, your conversation with Tristan Harris.

This episode is brought to you by Shopify.


Welcome, Tristan. Now, you and I met, let's go back a little bit, when you were concerned about social media. I think it was one of the first interviews you did. It was 2016, 2017. Right. I think it was right after Trump had gotten elected. That's correct. And I was really choosing to come out and say, you know, you and I were both. In a little booth in Stanford. I remember that. It was very small. The Stanford radio. Yeah. But talk

about, for people who don't know you – both of us are probably seen as irritants or Cassandras. I guess she was right.

Whatever, John the Baptist. Any of those precursors. Lost his head. Okay. Talk about what got you concerned in the first place, just very briefly for people to understand. Yeah. So I guess for people who don't know my background, I was a tech entrepreneur. I had a tiny company called Apture. We got talent-acquired by Google. In college, I was part of a class called the Stanford Persuasive Technology Lab Behavior Design class.

And studying the field of social psychology, persuasion, and technology. How does technology persuade people's attitudes, beliefs, and behaviors? And then I saw how those techniques were the mechanisms of the arms race to engage people with attention. Because how do I get your attention? I'm better at pulling on a string in the human brain, in the human mind. And so I became a design ethicist at Google after releasing a presentation inside the company in 2013 saying,

saying that we were stewarding the collective consciousness of humanity. We were rewiring the flows of attention, the flows of information, the flows of relationships. I sort of said, you know, I'm really worried about this. I actually thought the presentation was going to get me fired. And instead I became a design ethicist getting to study how would we- They gave you a job out of these worries, right? It's better than me leaving and doing something else. But I tried to change Google from the inside for three years before leaving. When you look back on that,

I think they wanted to have you there. You're kind of like a house pet, right? But you know what I mean? Like, oh, we got a design. Yeah, I know. But they don't like the house pets that bite. And you started to bite. Yeah. Well, I think, you know, it's funny now because if you look at when we get to AI, which we're going to get to later.

people who started AI companies actually started with the notion of we can do tremendous damage of what we've created. There's a whole field of AI safety and AI risk. Now imagine if when we created social media companies, Mark Zuckerberg and Jack Dorsey and all these guys said, we can wreck society. We need to have a whole field of social media safety, social media risk.

And they had actually had safety teams from the very beginning figuring it out. They hated when you brought up negative things. Yeah, they hated it. They denied that there was even any issue. And it was hard to see the issue. And we had to fight for the idea, you and I, that there were these major issues. Addiction, polarization, narcissism, validation seeking, sexualization of kids, online harassment, bullying. These are all digital fallout of the race to the bottom of the brainstem for attention. The race to be more and more aggressive about attention.

So I was frustrated that especially Facebook, because I had more contact with that company, wasn't going to do more and that people were in denial about it. And it goes back to the Upton Sinclair quote. You can't get someone to question something that their salary depends on them not seeing. Yeah. And their boss, their boss who runs everything. He really was like a...

like a brick wall on that. We're a neutral mirror for society. We're just showing you the unfortunate facts about how your society already feels and works. Yeah, I keep saying finish college, you'll understand. You might want to take World War II, maybe. Throw in some Vietnam War and perhaps, you know, go back to World War I because it's all like...

You know, that's recent history. So a couple of months ago, you and Aza released a presentation that I went to here in Washington called The AI Dilemma, laying out the fears. You know, I think there's a proclivity to say calm down, don't be so Terminator.

There's a proclivity to say don't be so sunshine, right? That there's – let's focus not on the existential fears but the current ones we can work on. Now, many of the people that have been working on it feel like you can't guess what it's going to do at this point.

And that when you get overly dramatic, it's a real problem. Yours was pretty dramatic when you were doing it in front of a group of Washington people. They say you can't guess what's going to happen with this, that we don't know. So let's deal with our current problems

versus our supposed fears. Yeah, I disagree. First of all, there's a whole bunch of harms with AI and all the stuff around bias and fairness and automating job applications and police algorithms and loans. And those issues are super important and they affect kind of, you know, the safety of society as it exists. I think the things that we're worried about are the ways that the deployment of AI can undermine the container of society working at all. Mm-hmm.

cyber attacks that can break critical infrastructure, water systems, nuclear systems, the ability to undermine democracy at scale. In Silicon Valley, it's common for AI researchers to ask each other what their – it's called PDoom – the probability of doom. Explain what PDoom is calculating and tell me what's your PDoom. So I don't know if I have a PDoom. I would say that

We – and you were sort of – I want to make sure I go back to the thing you were saying earlier. Can we predict what's going to happen? I would say we can predict what's going to happen. I don't mean that it's doom. What I mean is that a race dynamic where if I don't deploy my AI system as fast as the other guy, I'm going to lose to the guy that is deploying super fast. So if Google, for example – That's internally –

Capitalist companies and then also other countries. Yes, exactly. And that's just a multipolar trap, a classic race to the cliff. And so Google, for example, had been holding back many advanced AI capabilities in the lab, not deploying them because they thought they were not safe. When Microsoft and OpenAI hit the starting gun and said in November, we're going to launch ChatGPT and then boom, we're going to integrate that into Bing and actually make this the way we're going to make Google dance, as Satya Nadella said, that is

hit the starting gun on a market, a pace of market competition. Right. They have to. Then now everybody is going, we have to. Yeah. And we have to what? We have to unsafely, recklessly deploy this as fast as possible. So that we're out front. Like my Google just asked me to write an email. They usually want to finish sentences. Now they're like, can I write this email for you? I was like, go fuck yourself. No, I don't want you to.

Right. Well, and then Slack has to integrate their thing and integrate a chatbot. And then Snapchat integrates My AI bots into the way that it works. Spotify, TikTok – and I haven't even seen Spotify's yet. I mean, the point is, this is what I mean by you can predict the future. Because what you can predict is that everyone that

can integrate AI in a dominating way to become, in the case for the race to engagement in AI, it's the race to intimacy. Who can have the dominant relationship slot in your life? If Snapchat AI has a relationship with a 13-year-old that they have for four years, are they going to switch to TikTok or the next AI when it comes out? No, because they've already built up a relationship with that one. Unless AI is everywhere and then you have lots of relationships.

like you do in life. But what they'll want to be incentivized to do is to deepen that relationship, to personalize it, to have known everything about you and to really care about you. You want to leave me now. Don't leave me now. And, you know, I mean, even Facebook did that when you wanted to delete your account in 2016. They would say, do you really want to leave? And they would literally put up photos of the five friends and they would calculate which of the photos, which five friends could I show you that would most dissuade you from doing that.

And so now we're going to see more and more sophisticated versions of those kinds of things. Yeah. But that race to intimacy, that race to become that slot in your life, the race to deploy, the race to therefore move recklessly, those are all predictable factors. So just to be clear, because you're sort of challenging me, you know, can we predict where this is going? And the point is we can predict that it was going to go so recklessly and go so quickly because we're also deploying this faster than we deployed any other technology in history. So the most consequential technology, the most powerful technology we have ever deployed, and we're deploying it faster than...

than any other one in history. So for example, it took Facebook four and a half years to get to 100 million users. It took TikTok nine months. It took ChatGPT, I believe, two months. - And they have the app now. But in the presentation, in that vein, you cite a study where 50% of AI researchers say that PDoom, their PDoom is 10% or higher, but it's based on a non-peer reviewed survey, on a single question survey, they had only about 150 responses.

Should we be swayed by that data that they're worried? Because there is that ongoing theory that the people who make this are worried. The cooks are worried about what they're doing.

Yeah, so one critique of that survey is it's somehow all about AI hype, that the people who are answering the survey are people inside the companies who want to hype the capabilities so that they get more funding and everybody thinks it's bigger than it actually is. But the people who answered that survey were machine learning researchers who actually publish papers at conferences. They're the people who actually know this stuff the best. If you go inside the industry and talk to the people who build the stuff, it's much higher than that survey says. Again, this is why we're doing this. Oh, I was at a dinner party years ago when they were –

top people. Like, I was sort of like, huh, that's interesting. Yeah, they're very top people. I mean, don't trust a survey. Trust, there's a document of all the quotes of all the founders of AI companies over all the years of saying these quotes about we're going to wipe out, you know, there's a strong chance we'll wipe out humanity, we'll probably go extinct. They're not talking about jobs. They're talking about a whole bunch of other scenarios. So,

Don't let one survey be the thing. We're just trying to take one data point. People are worried. People are deeply worried. Yeah. You use the metaphor of a golem. Explain the golem. So the reason that we actually came up with that phrase to describe it is that people have often said, and this is pre-GPT-4 coming out,

Like, why are we suddenly so worried about AI? AI has existed for 20 years. We haven't freaked out about it until now. And, you know, Siri still mispronounces my name and Google Maps still says my, you know, pronounces the street address that I live on wrong. And why are we suddenly so worried about AI? And so one of the things that in our own work and trying to figure out how we would explain this to people was,

was sort of realizing that we needed to tell the part of the story that in 2017, AI changed because a new type of AI, sort of class of AI, came out called Transformers. It's 200 lines of code. It's based in deep learning. That technology created this brand new explosive wave of AI that is based on generative, large language, multimodal models – GLLMMs.

We said, how can we differentiate this new sort of era of AI that we're in from the past so that people understand why this curve is so explosive and vertical? And so we said, okay, let's give that a name so that people can track it better as public communicators. Aza and I care deeply about precise communication. So we just said, let's call them Gollum class AIs. And a Gollum, of course, is the famous creature from Jewish folklore, an inanimate form that gains

animate capabilities. And that's one of the other factors about generative large language models: as you pump them with more information and more compute and you train them, they actually gain new capabilities that the engineers themselves didn't program into them. Right. That they're learning. They're learning. Now, let me be clear. You do not believe these are sentient. No. And this has nothing to do with... Make that clear. They're not humans. There's this fascinating tendency when human beings...

like, think about this, where they get obsessed with the question of whether they can think. Sci-fi. Sci-fi, that's why. Yeah. And it actually kind of demonstrates the predispositions of humans. So imagine Neanderthals are baking Homo sapiens in a lab, and they become obsessed with the question, when it comes out, when this thing is more intelligent: is it going to be sentient? Like, Neanderthals – it's just a bias of how our brains work, right? When really, um,

the way that – what really matters is, can you anticipate the capabilities of something that's smarter than you? So imagine you're a Neanderthal, you're living in a Neanderthal brain. You can't think about humans, once they pop out, inventing computation, inventing energy, inventing oil-based hydrocarbon economies, inventing, you know, language, right? So we don't know which – You're essentially saying we don't know. It's inconceivable what it is, but it's not sentient. And I think that's because then we attribute emotions to it, like it would – Well, we just, it just

maybe eventually those questions will matter, but they're just not the questions that matter. The question is whether or not it is safe. Uh, sentient – it doesn't have to be. There are enormous dangers that can emerge just from growing these capabilities and entangling this new alien intelligence with society faster than we actually know what's there. Alien is an interesting word that you use. Um,

Because it's one that Elon Musk used many years ago. He said they treat us like aliens would treat a house cat. But then he changed it to we're an anthill and they're making a highway. They don't really – they're not mad at us. They don't care. No, they just – they're just doing things from their perspective that make sense. Makes sense. But just like – by the way, just like social media was. Social media was doing – so social media already – let me argue that AI might have already taken control of humanity in the form of first contact with AI, which is social media. Right.

What are all of us running around the world doing every day? What are all of our political fears? What are all of our elections? They're all driven by social media. We've been in the social media AI brain implant for 10 years. We don't need an Elon Musk brain implant. We already have one. It's called social media. It's been feeding us the worldviews and the umwelts that define how we see reality for 10 years. And the noisiest people, yeah. And the noisiest people. And that has warped our collective consciousness. Mm-hmm.

And so are you free if all the information you've ever been looking at has already been determined by an AI for the last 10 years? And you're running confirmation bias on a stack of stuff that has been preselected from the outrage selection feed of Twitter and the rest of it. And so you could argue that AI has already taken over society in a subtle way. I don't mean taken over in the sense that its values are.

But in the sense that, you know, just like we don't have regular chickens anymore, we have the kind of chickens that have been domesticated for their meat. We don't have regular cows. We have the kind of cows that have been domesticated for their milk and their meat. We don't have regular humans anymore. We have AI engagement optimized humans. So one of the things you did, you and Aza did, was you made a lot of news when you tested Snapchat's AI – it's called My AI – as if you were a 13-year-old, and it gave him advice on how to set the mood for sex with a 35-year-old.

They've fixed it. They think they've fixed it. Aza tested it a few days ago. It still happens. It still happens. It's still suggesting you bring candles for your first romantic time – a 13-year-old with a 38- or 41-year-old, I think it was. Right.

So it doesn't say a couple of the suggestions, but it still does say some of those things. And you can still get it to say those things. By the way, I've gotten emails from parents since we gave that presentation, and their kids have independently found it doing things like that. Doing things like that. So it's still not – they just can't anticipate all the problems. Well, it's actually even worse than that. It's just important for listeners to know.

Just to be fair to Snapchat, they actually did not roll that My AI bot out to all of its – I can't remember if it's 700 million – users. They didn't roll it out to all their users. They rolled it out to only paid subscribers at first, which is something like two to three million users.

But of course, just two weeks ago or something like that, they released it to all their users. Why did they do that? Because they're in a race to dominate that intimate spot in your life. Everyone wants to be the Scarlett Johansson "Her" AI bot in your ear. You both signed a letter calling for the six-month pause on giant AI experiments. Elon did too. Elon Musk did too. It's unfortunate that that letter got

defined by Elon's participation in it. Yes, because he looked like he was doing his own business. Well, later, obviously, he then also started his own AI company. And so obviously it delegitimizes it. Yeah, he also laughed and said he knew it would be futile to sign it. So why make that? Many people think it was a futile effort.

Well, these are separate topics. I want to make sure we really slow down and actually distinguish here. The founders of the field of machine learning, you know, helped sign that letter. Steve Wozniak signed the letter. The co-founder of Siri signed the letter. Andrew Yang, et cetera. All of us at Center for Humane Technology signed.

That letter exists because the Overton window – society's sense of how unsafe and dangerous this is – was not where it needed to be. The purpose of that letter was to make it very well known that this field is much more dangerous than what people understand. And I think there is a legitimate – we know the Future of Life Institute folks who were really kind of spearheading the letter –

There was a lot of debate about what is the appropriate time to call for a slowdown. And by the way, I think slowdown is also badly named in retrospect. I think something like redirection of all the energy of those labs into safety work and safety research and guardrails. So imagine six months of, instead of an AI winter, an AI harvest, an AI summer, when you harvest the benefits that you have and do understanding of what the capabilities are inside of everything that's been released. Did you imagine this was going to happen? That they would go, oh, yes, oh, yes, I see your point? Well,

Being connected to the team that did it and kind of being privy to some of the internal conversations, I think we were all surprised how many incredible people did sign the letter. They did, yeah. Many people signed the letter. It's funny that people look at it and maybe say, this is futile, but it's like saying, you know...

Just because something is hard doesn't mean it shouldn't be the intention. And one of the interesting things is that if you talk to an engineer and you say, oh, we're going to build this AGI thing, they're like, oh, that sounds really hard. But they're so compelled by the idea of building these AI systems, these AGI systems, a god that I could talk to, that they say, I don't care how hard it is. And so they keep racing towards it. And it's been 30, you know, a hundred years, whatever, 50 years, people have been working on this. In other words, we don't say, because something's hard, we shouldn't keep going – they try to build it anyway.

Whereas if I say coordination is hard for the whole world, people say, oh, let's just throw up our hands and say it's never going to happen. We need to get good at coordination. All of our world's problems are coordination problems. Right. We do it with nuclear energy. We do it with a lot of things. We have limited nukes to nine countries. Just to put a pin on it, though, if I said it's inevitable that all countries are going to get nukes.

Let's not do anything about it. In fact, let's just let every country pursue it and just not do anything. We probably wouldn't be here today. A lot of people had to be very concerned about it and move into action to say something different needs to happen. But people can grasp a nuclear war – we saw it happen with the atom bomb. So tell me, give me your best case against a pause.

And one of the more compelling criticisms is the U.S. is going to fall behind China. This is something I heard from Mark Zuckerberg about social media in general or tech in general. Which is interesting because I would argue –

It is. It absolutely is. China has shown itself to have very few governors on itself. I would say the unregulated deployment of AI would be the reason we lose to China. If worse actors beat you in dominance in deploying AI – people with no morals, with no safety considerations, with no concerns, with different values for the future of the world, kind of Chinese digital authoritarianism values or something like that,

or Chinese Communist Party values – then we certainly don't want to lose to that. So I think if there was a sincere risk that that would happen, there would be a good reason to say, let's not call for that. But I would actually argue that the unregulated deployment of AI is what is causing the West to lose to China. Let me give you the example of social media. Social media was the unregulated deployment of AI to society.

The breakdown of democracy's ability to coordinate because we no longer have a shared – That's good for authoritarianism. That's really good for authoritarianism. Why are democracies backsliding everywhere around the world all at once? Barbara F. Walter wrote a book called How Civil Wars Start.

She talks about democracies that are backsliding everywhere. I'm not blaming it all on social media, but we're seeing it happen rapidly in all these countries that have been governed by the information environment created by social media. And if a society cannot coordinate, can it deal with poverty? Can it deal with inequality? Can it deal with climate change? So we shot ourselves in the foot and now we're going for the arms.

Yeah. That kind of thing. I'm going to go to a clip – I've interviewed you a number of times. One we did in 2017, as I said, before you and Aza founded the Center for Humane Technology. Back then, you were focused on social media, as we discussed earlier, showing why revenue models built on monetizing our attention are bad for us, because a lot of this is about monetization and who's going to have the next intimate relationship, which they've been trying to do forever in different ways through Siri and all kinds of different things. But now they really want you to be theirs, essentially. Let's play a clip from it. Right.

Right now, essentially, you know, Apple, Google and Facebook are kind of like these private companies who collectively are the urban planners of a billion people's attentional landscape. Right. That's a great way to put it. We kind of all live in this invisible city. Right. Which they created. Which they created. And the question is, unlike a democracy where you have some civic representation and you can say, well, who's the mayor, and should there be a stoplight there – a stoplight on our phone, or blinker signals between the cars, or these kinds of things –

We don't have any representation except if we don't use the product or don't buy it. And that's not really representation because the city itself is... So attention taxation without representation. Maybe, yeah. But so I think, you know, there's this question of how do we create that accountability loop?

You know, that was very well put. And I took it further. I said it's like The Purge. They actually own the city and they don't do anything. Oh, yeah. We can't do anything and they won't do anything. They have no stop signs. They have no streets. They have no sewage, everything else. So I took your thought a step further. Could –

Talk about AI firms becoming the new urban planners of the, I guess, attentional landscape because that's what they want. It's more than attention they want. They want to own you, right? I mean, it's what you're saying. Well, so there's really – I want to separate between two different economies. So there's the engagement economy, which is the race to dominate, own, and commodify human experience. So that's the – Social media. Social media. Social media is the biggest player in that space. Right.

But VR is in that space. YouTube is in that space. Netflix is in that space. It's the race to say – Look at me. Look at me. All the things that construct your reality that determine from the moment you wake up and your eyes open to the moment your eyes close at the end of the night, who owns reality?

Your attention. That's the engagement economy. That's the attention economy. And there are specific actors in that space. AI will be applied to that economy just like AI will be applied to all sorts of other economies – also the cyber-hacking economy. AI will be applied to battery storage. It's more like the internet. Yeah.

It's bigger. AI is a much bigger thing. So there's a subpart of the AI economy, which is the engagement economy, and AI will supercharge the harms of social media there. Because before, we had people A/B testing a handful of messages on social media and figuring out, like Cambridge Analytica, which one works best for each political tribe. Now you're going to have AIs that do that. And there's a paper out called – I think it's called silicon sampling. So you can actually –

sample a virtual group. Like instead of running Frank Luntz's focus groups around the world, you can kind of have a language bot, a chatbot that you talk to, and it will answer questions as if someone is a 35-year-old in Kansas City with two kids. And so you can run

even perfect message testing. Right. So you don't need to talk to people. So you don't need to talk to people anymore. You know what they're going to say. You can do a million things like that. And so the loneliness crisis that we see, the mental health crisis that we see, the sexualization of young kids that we see, the, you know, online harassment situation that we see, all that's just going to get supercharged with AI. Mm-hmm.

And the ability to create AlphaPersuade, which is – just like there was AlphaGo and alpha chess, where the system is playing chess against itself and kind of getting much, much better – it's now going to be able to hyper-manipulate you and hyper-persuade you. So what you're talking about is social media as a lower –

Being than AI. AI powers everything. Social media is one. But we couldn't even regulate social media. Is society aware of the need for regulation, since we didn't do it for social media? So the point we made in this AI Dilemma presentation is that we were too late with social media because we waited for it to entangle itself with society,

with media, with elections, with business, because now businesses can only reach their consumers if they have an Instagram page and use marketing on Facebook and Instagram and so on. Social media captured

too many of the fundamental life's organs of how our society works. And that's why it's been very hard to regulate. I mean, you know, certain parties benefit, certain politicians benefit. Can you regulate, would you want to ban TikTok if you're a politician or a party that's currently winning a lot of elections by being really good at TikTok? Right. So once things start to entangle themselves, it's very hard to regulate them. There's too many vested interests. Right.

With AI, we have not yet allowed this thing to roll out. I mean, now it's obviously happening incredibly fast. We gave the presentation a few months ago, before GPT-4. The whole point of it was: we need to act before this happens. One good example of this happening in history was the treaty to ban

blinding laser weapons from the battlefield before they were actually ever used to blind soldiers. Yes, this would be a high-energy laser that has the capability to point at everyone, and it just blinds them. But we're just like, you know what? In the game of war, which is a ruthless game where you kill other human beings – even as ruthless as that game is – we don't want to allow that. And even before it was ever deployed,

That was maybe one of the most optimistic examples, where humanity could sort of use our higher selves to recognize that's a future game. It goes into the killer robot portion of the show, right? Right. Then there's the slaughterbots. How do we ban autonomous weapons? How do we ban recombinant DNA engineering and human cloning? Things like this. And so this is another one of those situations. And we need to look especially to the example of the blinding laser weapons, because that was in advance of the technology ever getting fully deployed, because a lot of the kind of

guardrails that we're going to need internationally are going to be saying no one would want that future race to happen. So let's prevent that race. Right. So, but that's nation-states. Now, with AI, anybody could do it. The same thing with CRISPR, though – the scientists got together and had standards. And it's much easier to do what you want if we are all in a group together coordinating this. So if I want to steelman the AI doomers and the p(doom) folks that have a really high number for that p(doom) number – Mm-hmm.

It's because it's so hard to prevent the proliferation that many people think that we're doomed. Just to be really clear on why that's also a very legitimate thing – That is certainly – That would be my biggest doom. This is too easy for lots of people. It's too easy. So let's just hang there for a moment. Just really recognize that. That's not being a doomer. That's just being an honest viewer of: these are the risks. Now, if something other were to happen –

You could involve governments and law to say, hey, we need to get maybe more restrictive about GitHub and Hugging Face and where these models go. Maybe we need export controls. There are people who are working on models of how do we – just like there's 3D printed guns as a file, you can't just send those around the open internet.

We put export controls on those kinds of things. It's a dangerous kind of information. So now imagine there's a new kind of information that's not a 3D-printed gun, but it's like a 3D-printed gun that actually self-replicates and self-improves and gets into a bigger and bigger gun. And builds itself. And builds itself. That's a new class. That's not just free speech. The founding fathers couldn't anticipate –

Something that self-replicates and self-improves being a class of speech. That's not the kind of speech that they were trying to protect. Part of what we need here are new legal categories for these new kinds of speech. Sam Altman, who runs OpenAI, was on the Hill calling for AI regulation. They all are. You can't say you didn't warn them, right? A lot of tech CEOs have claimed they want regulation, but they've also spent a lot of money previously on stopping antitrust, stopping

algorithmic transparency, stopping any privacy regulation. Do you believe this class of CEOs? Because a lot of them are saying, this is dangerous, would you please regulate this? Yeah. So you're pointing to what happened with social media, which was that publicly they would say, we need regulation, we would need regulation, but when you talk to the staffers – They never said, this is dangerous, we need regulation. He's saying – They never said dangerous. He says dangerous. He says dangerous. And I want to golf clap that, you know,

We always want to endorse and celebrate when there is actually an honest recognition of the risks. I mean, to Sam Altman's credit, he has been saying in public settings, I think much to the chagrin of maybe his investors and other folks, that there are existential risks here. I mean, what CEO goes out there saying this could actually wipe out humanity and not just because of jobs? I mean, so we should celebrate that he's being honest about the risks. We actually do need an honest conversation about it.

However, as you said, in the history of social media, it is very easy to publicly advocate for regulation and then your policy teams follow up with all the staffers and then say, let me redline this, redline that. That's never going to work. And they just sort of stall it so nothing actually ever happens. I don't think it's that bad faith in this context. I do think that some kind of regulation is needed. Sam Altman talked about

GPU licensing – licensing of training runs. If you're going to run a large frontier model, you're going to do a massive training run; you've got to get a license to do that. Just like the Wuhan Institute of Virology was a biosafety level four lab doing advanced, you know, kind of gain-of-function research – if you're building a level four lab, you need level four practices and responsibilities. Even there, though, we know that may not have been enough, whatever such practices. We're now building AI systems that are super advanced.

And the question is, do we actually have the safety practices? Are we treating it like a top lab? Well, the first thing is, are we treating it that way? And then the second is, do we even know what would constitute safety? So this is getting to the end question you're asking. Can we even do this safely? Is that even possible? Right. Because think of AI as like a biosafety level 10 lab. Imagine we had something called, I'm inventing it right now, but a biosafety level 10 lab where I invent a pathogen that the second it's released, it kills everyone instantly. Let's just imagine that that was actually possible. Mm-hmm.

Well, you might say, well, let's let people have that scientific capacity. We want to just see, is that even possible? We want to test it so we can build a vaccine or prevention systems against a pathogen that could kill everyone instantly. But the question is, to do that experimental research – what if we didn't have biosafety level 10 practices, only biosafety level 10 dangerous capabilities? Would we want to pursue biosafety level 10 labs? I think that with AI, the deeper question is,

With great power comes – you cannot have the power of gods without the wisdom, love, and prudence of gods. And right now we are handing out and democratizing godlike powers without actually even knowing what would constitute the love, prudence, and wisdom that's needed for it. And I think the story in the parable of the Lord of the Rings is that there are some – you know, why did they want to throw the ring into Mount Doom? There are some kinds of powers that when you see them, you say, if we're not actually wise enough to hold this ring and put it on –

Right. I get that. I understand that. One of the things is that when you get this dramatic, like I said at the beginning, does that push people off? Like this is a pathogen we get. Like we've just been through COVID and that was bad enough and there's probably a pathogen that could kill people instantly. Yeah.

It's not how people think. Yeah, well, let's actually just make that example real for a second, because that was a hypothetical thing about the biosafety level 10 lab. Can AI accelerate the development of pathogens and gain-of-function research and people tinkering with dangerous lethal bioweapons? Can it democratize that? Can it make more people able to do that? More people able to make explosives with household materials? Yes. We don't want that. That's really dangerous. It's a very concrete thing. That's not AI doomers. There's real concrete stuff we have to respond to here. We'll be back in a minute.

Tell me something that AI could be good for, because I talk about that, because I think I'm a little less extreme than you. And I think at the beginning of the internet, I was like, this could be great. And of course then you saw them not worrying about the not-so-great. And I think it's sort of that tools-and-weapons idea – speaking of which, from Microsoft. That was the Microsoft president, Brad Smith, who talked about tools and weapons. A knife is both a tool and a weapon.

So what is the tool part of this that is a good thing? So first of all, I think this is another one of those things – just like we ask, is the AI sentient – that when people hear me saying all this, they think I don't hear, or don't know about, or am not talking about all the positives it can do.

This is another fallacy of how human brains work. Just like we get obsessed with the question of is it sentient, we get obsessed with one-sidedness. It has all these positives. Just as fast as you can design cyberweapons with AI and accelerate their creation, you can also identify all the vulnerabilities – or many vulnerabilities – in code. You can invent cures to diseases. You can invent new solutions for battery storage. We're going to have, as I said in The Social Dilemma, what's going to be confusing about this era is

a simultaneous utopia and dystopia. I couldn't think of so many good things about social media. I can think of dozens here. And there I was like –

Maybe we'll all get along and do better. Social media is like increasing the flows of information. People are able to maintain many more relationships. Old high school sweethearts. Sure, but not like this. This is protein folding. This is drug discovery. This is real movement forward. Absolutely. But I'll tell a story. I mean, so the real confusing thing is:

Is it possible, on the current development path, to get those goods without the bads? What if it was not possible? What if I can only get, you know, the synthetic biology capabilities that let me solve problems, but there is no way to do it without also enabling bad guys to do it? To create this pathogen that you're talking about, for example. So just to make it personal –

My mother died of cancer. And, you know, I, like any human being, would do anything to have my mother still be here with me. And if you told me that there was an AI that was going to be able to discover a cure for my mother that would have her still be with me today –

Obviously, I would want that cure. If you told me that the only way for that cure to be developed was to also unleash capabilities such that the world would get wrecked – This is one of those dinner party questions. Would you kill 100 million people to save – But it's real. Yeah. I mean, I'm just saying there's certain domains where there's no way to do the one side without doing the other side. And if you told me that, just really on a personal level.

As much as I want my mom to be here today, I would not have made that trade. Well, you're talking about an old Paul Virilio quote, which is you can't have the ship without the shipwreck, or electricity without the electric chair. We do that every day. Net, cars have been great; net, they've been bad, too. You know what I mean? But if you have godlike powers that can kind of break society in much more fundamental ways – So now, again, we're talking about benefits that are unimaginable.

Literally, God-like solutions for every problem. But if it also just undermines the existence of how life can work. That's your greatest worry is this idea of reality fracturing in ways that are impossible to get back. No, I mean all of it together. If AI is unleashed and democratized to everybody, no matter how high the tower of benefits that AI assembles is,

if it also simultaneously crumbles the foundation of that tower, it won't really matter. What kind of society can receive a cancer drug if no one knows what's true, there are cyberattacks everywhere, things are blowing up, and there are pathogens that have locked down the world again? Think about how bad COVID was. People forget going through one pandemic, just one pandemic. Imagine that just happens, like,

a few more times. That can happen quickly – we saw the edges of our supply chains. We saw how much money had to be printed to keep the economy going. It's pretty easy to break society if you have a few more of these things going. And so again, how will cancer drugs flow in a society that has kind of stopped working? And I don't mean, again, AI doom, Eliezer Yudkowsky, AGI kills everybody in one instant. I'm talking about dysfunction at a scale

that is so much greater. Are we getting closer to regulation? Did you find those hearings interesting?

Did you have any good takeaways from them? And where is it going to go from here? Who knows where it's going to go? I didn't see all of the hearing. I was happy to see a couple of things, which were based on structural issues. So one was actually the repeated discussion of multilateral bodies – something like the IAEA, the International Atomic Energy Agency, but for AI, that's actually doing global monitoring and regulation of large frontier AI systems.

I think Sam was proposing that. That was repeated several times. I was surprised to see that. I think that's actually great because it is a global problem. What's the answer when we develop nuclear weapons? Is it that Congress passes a law to deal with nukes here? No, it's a global coordination around how do we limit nukes to nine countries? How do we make sure we don't do above ground nuclear testing? So I was happy to see that in the hearing. I was also happy to see multiple members of Congress, including I think it was Lindsey Graham and the Republicans who are typically not for new regulatory agencies,

But they said they recognize that we need one, because – you know, E.O. Wilson: we have Paleolithic emotions, medieval institutions, and godlike tech. Medieval institutions and medieval laws, 18th- and 19th-century laws and ideas, don't match 21st-century issues like replicant speech. Larry Lessig has a paper out about replicant speech. Should we protect the speech of generative AIs?

The same way we protect free speech. The founding fathers had totally different ideas about what that was about. No, we need to update those laws. Part of our medieval institutions are institutions that don't move as fast as the godlike tech. So if a virus is moving at 21st century speeds and your immune system is moving at 18th century speeds, your immune system being regulation. So do you have any hope for any significant legislation? I mean, Vice President Harris met with – they're all meeting with everybody.

For sure, and early compared to the other things. I don't remember, Kara, but when we did that briefing in D.C., back in – whatever it was – February or March,

We said one of the things we really want to happen is for the White House to convene a gathering of all the CEOs. And that I would have never thought would have ever happened. And it did happen. I would have never thought there would be a hearing. And they mentioned it at the G7 this week. So there's things that are moving. I don't want people to just be optimistic, by the way. There needs to be a massive effort and coordinated response to make the right things happen here. Right. Vice President Harris led that meeting and told them they have an ethical, moral, and legal responsibility to ensure the safety and security of their products.

They certainly don't seem protected by Section 230. They're probably not protected. There is liability attached to some of this, which could be good. That's good. Is there any – We talk to people inside the companies. All we're trying to do is figure out what needs to happen. And often the people inside the companies who work on safety teams will say, like, I can't advocate for this publicly, but

You know, we need liability because talking about responsibility and ethics just gets bulldozed by incentives. There needs to be liability that creates real guardrails. Right. Let's do a lightning round. What you would say to the following people if they were here right now. Sam Altman, CEO of OpenAI, what would you say to him, Tristan?

Gather all of the top leaders to negotiate a coordinated way to get this right. Move at a pace where we can get this right, including working with the Chinese and getting multilateral negotiations happening, and say that's what needs to happen. It's not about what you do with your company and your safety practices and how much RLHF. Multilateral. But get coordination. Satya Nadella and Sundar Pichai, I'm going to mush them together. Okay.

To retract the arms race instead of saying, let's make Google dance, which is what Satya Nadella said. We have to find a way to move back into a domain of advanced capabilities being held back. Buying ourselves a little bit more time matters. Yeah. Well, they've been sick of being pantsed the entire last decade. I think they want to do that in some fashion. Reid Hoffman, Mustafa Suleyman, co-founders of Inflection AI, which put out a chatbot this month. I mean, honestly, it would be the same things as with Sam. It's like –

Everyone needs to work together to get this right. We need to see this as dangerous for all of humanity, right? This isn't us versus the tech companies. This is: all of us are human beings, and there's dangerous outcomes that land for all of us. What about Elon Musk? He signed the AI pause letter, has been outspoken on the danger for years. He was one of the earliest

people that were talking about it, along with Sam, as I recall, a decade ago. But he, of course, started his own company, xAI, where he wants to get to "truth AI," whatever that means. We need to escape this logic of: I don't think the other guys are going to do it right, so I'm therefore going to start my own thing to do it safely – which is how we got the arms race that's now driving all the unsafety. And so the logic of, I don't believe in the way the other guys are doing it – and mostly for competitive reasons, probably, underneath the hood – I'm doing my own thing: that logic doesn't work. He's very competitive.

Do you blame them personally for putting us at risk? Or is it just one of these group things that everyone goes along with? So there's this really interesting dynamic where, when there is a race – and all the problems are driven by races – if I don't do the mining in that place, or if I don't do the deforestation, I just lose to the guy that will. If I don't dump the chemicals, my competitors will. And I'll do it more safely. Right, I'll do it more safely. So better me doing it than the other guy, as long as I get my profit. And so everyone has that self-reinforcing logic. So there's races everywhere that are the real driver of most of the issues that we're seeing.

And there's a temptation, once we diagnose it as a race, a bad race, to then absolve the companies of responsibility. I think we have to do both. There's both a race, and also Satya Nadella and Sam, you know, helped accelerate that race in a way that we actually weren't trajectoring toward. There were human choices involved at that moment in the timeline. I talked to people who helped found some of the original AGI labs early in the day.

They said, you know, if we go back 15 years, they would have said, let's put a ban on pursuing artificial general intelligence – building these large systems that ingest the world's knowledge about everything. We don't need to do that. We should be building advanced applied AI systems like AlphaFold that say, let's do specific targeted research domains and applications. If we were living in that world, how different might we be? You know, we had three rules of technology we put in that AI Dilemma presentation. First rule: when you invent a new technology, you create a new class of responsibilities, right?

Second rule of technology: if the new technology you invent confers power, it will start a race. If I don't adopt the plow and start outcompeting the other society, I'll lose to the guy that does adopt the plow. If I don't adopt social media to get more efficient, et cetera.

So that starts a race. Third rule of technology: if you do not coordinate that race, the race will end in tragedy. We need to become a society that is incredibly good at identifying bad games rather than bad guys. Right now, all we do is go after bad guys. We have, again, CEOs that do bear some responsibility for some choices.

But right now, we're always just – that drives up polarization, because you put all the energy into going after one CEO or one company when we have to get good at slaying bad games. Well, except wouldn't you agree that one of the reasons social media got so out of whack was because of Mark Zuckerberg and his huge power? Like, he had power over the biggest thing and just –

Mark Zuckerberg made a ton of bad decisions while denying many of the harms most of the way through until just recently, including saying it was a crazy idea that fake news had anything to do with the election. Later, with the Russia stuff, it was, oh, this is all overblown – which, I understand, there's the Trump-Russia stuff, which is –

There may have been overblown stuff there. But the Facebook content, they said, oh, it didn't really reach that many people. And it ended up reaching 150 million Americans. No, I get it. Facebook's own research said that 64% of extremists – I sat on the other side with him. We could go on forever about that. Geoffrey Hinton, who is known as one of the godfathers of AI – not the only one – has recently been sounding the alarm. Do you think others will follow suit? That was a big deal when he did that. It really was. I was very aware of him in AI. Yeah.

Do you think it'll change the direction, or is he just Robert Oppenheimer saying, I am become death? You know, one of the things that struck me, you know, is I came out too, right? I was an early person coming out, and I've seen the effects of insiders coming out. Frances Haugen, the Facebook whistleblower, is a close friend of mine. And, you know, her coming out made a really big difference. The Social Dilemma, I know, impacted her. It legitimized for many people inside the companies that they felt like something was wrong.

And now many more people, you know, came out. I think the more people come out, the more the big names come out, the Geoff Hintons come out, it actually makes more people question it. Just a few days ago, I think, there was a street protest outside of DeepMind's headquarters in London saying we need to pause AI. I don't know if you saw that. No. It's comparable to climate change in a lot of ways. There are real people inside their own companies that are saying there's a problem here, which is why it's really important that –

When the people who are making something, who know it most intimately, are saying there's a real problem here, when the head product guy at Twitter says, you know, I don't let my own kids use social media, that's all you need to know about whether something is good or safe. So there are some proposals you brought up. There's one based on the work of Taiwan's digital minister, who's so creative, where 100 regular people get in a room with AI experts and they come out with a

proposal. That's an interesting one. Another one you came up with is having a national televised discussion: major AI labs, lead safety experts, and other civic actors talk on TV. That's hard because, on one hand, I could see that working, but also not working. Yeah, it'd have to be done carefully. Let me explain the Taiwan one really quickly. Okay. So let's imagine there are kind of two attractors for where the world is going right now.

One attractor is I trust everyone to do the right thing and I'm going to distribute godlike AI powers, superhuman powers to everyone. Everyone can build bioweapons. Everyone can make generative media, find loopholes in law, manipulate religions, do fake everything.

That world lands in continual chaos and catastrophe, because I'm basically handing everyone the power to do anything. Oh, yeah. Everyone has superpowers, yeah. Right. So that's one outcome. That's one attractor. Think of it like a 3D field, and it's kind of like sucking the world into one gravity well. It's just continual catastrophes. Kind of like guns, but go ahead.

The other side is dystopia, which is instead of trusting everyone to do the right thing with these superhuman powers, I don't trust anyone to do the right thing. So I create this sort of dystopian state that sort of has surveillance and monitors everyone. That's kind of the Chinese digital authoritarianism outcome.

That's the other deep attractor for the world, given this new kind of tech that's entering into the world. So the world is currently moving towards both of those. And actually, the more frequently the continual catastrophes happen, the more it's going to drive us towards the dystopia. So in both cases, we're getting a self-reinforcing loop. The reason I mention Taiwan is that what we need is a middle way or third attractor, one that has the values of an open society, a democratic society in which people have freedom.

But instead of naively trusting everyone to do the right thing, and instead of not trusting anyone to do the right thing, we have what's called warranted trust. So think of it as a loop. Technology, to the degree it impacts society, has to constitute a wiser, more responsible, more enlightened culture. A more enlightened culture supports stronger, upgraded institutions. Those upgraded institutions set the right kind of regulatory guardrails, et cetera, for better technology, which is then in a loop with constituting better culture. That's the

upward spiral. We are currently living in the downward spiral. Technology decoheres culture: it addicts, outrages, and breeds loneliness. That incoherent culture can't support any institutional responses to anything. That incapacitated, dysfunctional set of institutions doesn't regulate technology, which allows the downward spiral to continue. The upward spiral is what we need to get to. And the third way, what Taiwan is doing, is actually proving that you can use technology in a way that gets you the upward spiral.

Audrey Tang's work is showing that you can use AI to find unlikely consensus across groups. There are only so many people that can fit into that town hall and get mad at each other. What if she creates a digitally augmented process where people put in all their ideas and opinions about AI, and we can actually use AI to find the coherence, the shared areas of agreement that we all share, and do that even faster than we could without the tech? So this is not techno-utopianism; it's techno-realism of a sort.

Right, where people feel that they've been heard and at the same time don't feel the need to scream. Right. Which is absolutely true. She's really quite something. Having a national debate about it, I know people –

We'll just take away whatever they want from it. Yeah. Let me explain that, though. It was modeled after the film The Day After. So in the previous era of a new technology that had the power to – I was there in college when that happened. In college when I came out. I was not born yet, but I – Let me just explain. This is a movie about the nuclear bomb blowing up, and they convened groups all over the country to talk about it, watch the movie, and then discuss it. And it really was terrifying at the time. But we were all joined together in a way we're not anymore.

I can't even imagine that happening right now. It was a made-for-TV movie commissioned by ABC, where the director, Nicholas Meyer, who also directed Star Trek II: The Wrath of Khan and some other great films...

They put together this film basically noticing that the possibility of nuclear war existed in a kind of repressed place inside the human mind. No one wanted to think about this thing that was ever present. It actually was a real possibility, because the Cold War was active and escalating with Reagan and Gorbachev. So they decided, let's make a film. It became the most-watched made-for-TV film in all of TV history. 100 million Americans tuned in, and I think it was 1983. Yeah.

They had a whole PR campaign: put your kids to bed early, which actually increased the number of people who watched it with their kids. Reagan's biographer said, several years later, that Reagan got depressed for weeks; he watched it in the White House film studio.

And the Reykjavik Accords happened. Actually, I should mention, they aired The Day After in the Soviet Union a few years later, in 1987. And it scared basically the bejesus out of both the Russians and the U.S. Yeah, it was quite something at the time. And it made visible and visceral the repressed idea of what we were actually facing: we actually had the power to destroy ourselves. It made that visible and visceral for the first time. And the important point, which we mentioned in this AI Dilemma talk that we put online, is

that after this one-and-a-half-hour film, or whatever it was, they aired a one-hour debate where they had Carl Sagan and, you know, Henry Kissinger and Brent Scowcroft and Elie Wiesel, you know, who studied the Holocaust, to really debate what we were facing. And that was a democratic way of saying, we don't want five people at the departments of defense in Russia and the U.S. deciding whether humanity exists tomorrow or not. Yeah.

And similarly, I think we need that kind of debate. So that's the idea. I don't know about a TV broadcast. Well, you know, it wouldn't even work today. Honestly, it wouldn't. Everyone is so – what's interesting is that it was very effective. That's an interesting thing to talk about, The Day After, because it did scare the bejesus out of people. Watching Jason Robards disintegrate in real time was disturbing. But there was nothing like that then. And now there is a lot like that, right? Everybody is constantly hit with information every day. We didn't –

It was unique because we used to have a commonality that we don't have. So you have gone on Glenn Beck's podcast, God save you, Brian Kilmeade's podcast. We do a lot of media across the board. Right, exactly. Do they react differently to your message than progressive audiences? No. Because, again, it can split: progressives think tech companies are bad.

- Well, let me say it differently. - Conservatives worry about surveillance and the deep state. - Yeah, well, exactly. Social media got polarized. So actually, one of the reasons I'm doing a lot of media across the spectrum is I have a deep fear that this will get unnecessarily politicized. That would be the worst thing to have happen, when there are deep risks for everybody. It does not matter which political beliefs you hold.

This really should bring us together. And so I try to do media across the spectrum so that we can get universal consensus that this is a risk to everyone and everything, to the values that we have and people's ability to live in a future that we care about. I do this because I really want to live in a future where kids can be raised well and we can live in a good world.

As best as we can. We're facing a lot of dark outcomes. There's a spectrum of those dark outcomes. Let's live on the lighter side of that spectrum rather than the darkest side where maybe the lights go out. So one last question. How do you think the media has been covering it? Because there is a pressure if you cover it too negatively. It's like, oh, come on. Don't you see the better, you know, are you missing the bigger picture?

And I know from my personal experience, I'm so sick of being called the bummer by Ernie Erton. It gets exhausting. But at the same time, you do want to see maybe this time we can do it better. Give me hope here, because I definitely feel the pressure not to be so negative. And I still am. I don't care. And I think in the end, both of us were right back then, but it doesn't feel good being right. Everything creates externalities, you know, effects that show up on other people's balance sheets. If you're a communicator

and you think you're just communicating honestly, but you end up terrifying people. Maybe some shooters come around and they start doing violent things because they've been terrorized by what you've shared. I think about it a lot. I think a lot about responsible communication. So I think there's a really important thing here, which is that there's kind of three psychological places that I think people are landing. The first is what we call pre-tragic. I borrow this from a mentor, Daniel Schmachtenberger, who we've done the Joe Rogan show with.

A pre-tragic person is someone who actually doesn't want to look at the tragedy, whether it's climate or some of the AI issues that are facing us or social media having downsides. Any issue where there actually is a tragedy, but we don't want to metabolize the tragedy, so we stay in naive optimism. We call this kind of person pre-tragic because there's a kind of denial and repression of actual honest things that are facing us. Because I want to believe, well, things always work out in the end. Humanity always figures it out. We muddle our way through. Those things are partially true, too.

But let's be really clear about the rest. Okay, so that's the pre-tragic. Then there's the person who stares at the tragedy. And people tend to get stuck in tragedy. You either get depressed or you become nihilistic. Or the other thing that can happen is it's too hard and you bounce back into pre-tragic. You bounce back into, I'll just ignore that information, go back to my optimism, because it's just too hard to sit in the tragedy.

There's a third place to go, which we call post-tragic, where you actually stare face to face with the actual constraints that are facing us, which actually means accepting and grieving through

some of the realities that we are facing. I've done that work personally, and it's not about me; I just mean that I think it's a very hard thing to do. It's humanity's rite of passage. You have to go through the dark night of the soul and be with that, so you can be with the actual dimensions of the problems that we're dealing with. Because then, when you do solutions on the other side of that, when you're thinking about what we do, now you're honest about the space. You're honest about what it would take

to do something about it. - So you're not negative, but people will cast you as that. - So there's something called the pre/trans fallacy, where someone who's post-tragic can sound like someone on the other side. It can sound confusing. So I can sound like a doomer, but really, I'm trying to communicate clearly. People often ask me, like, am I an optimist? - Are you a prepper? - No.

Had to ask. Had to ask. Sam Altman has his little home. I know he does. I know he does. Yeah. He wanted to ask me what was my plan. You know, just joking. We were joking around. And I said, well, you're smaller than I am. I'm going to beat you up and take your things and take your whole plan. He's like, that's a good plan. Take his house in Big Sur or whatever it is. Yeah. I was like – he goes, that's a good plan. I go, it's an excellent plan. Yeah. I think I can take you. Yeah.

If it came to that. I think we need to get good at holding each other through to the post-tragic. I don't know what that looks like, but I know that that's what guides me and what we're trying to do. And if there's anything that I think I want to get even better at, it's this.

It's hard, once you take people through all these things, to carry them through to the other side. Right, because they get hopeless. They get hopeless. Yeah, you can be hopeless. After that thing, I came back, I'm like, we are fucked. Like, we were so – you know, after that thing. And I thought, that's not going to go over well, because most people hide on Instagram or TikTok. That doesn't feel good. Let me run away from myself again. Let me scroll a bunch of photos. This is going to be a difficult time.

The more we can go through and see the thing together, I think part of being post-tragic is actually going through it with each other, like being there with each other as we go through it. I'm not saying that just as a bullshit throwaway line. I really mean it. I think we need to be there for each other. All right. Post-tragic, hand in hand. Here we go. Let's do it. Thank you. Okay. Thanks. Thanks.

Hold me, Kara. Hold me into the post-tragic. No, I will not. No, thank you. What is that? I don't know. I think he's right. I think he's right about what's going to happen. I think he's 100% right. Yeah, I think the Taiwanese minister, Audrey Tang,

saying, you know, how do we make this world more humane? She's fantastic. She is the hopeful version of that, but she's just as worried. It's just that she is saying, okay, now what are we going to do? And I think that's probably the part that I need to work on. Like, it's all, you know, the end is near. Oh, okay. Really? Can we do anything to stop it? So are you pre-tragic, post-tragic, or – what was the other one? It was like bathing in, staring into the abyss. What are you? Just tragic, I guess. You know, in this case,

You fall into the abyss, right? You don't just stare into it. It envelops you. It's like a black hole in a lot of ways. But I guess I would say post-tragic, I'm like, all right, what are we going to do about it? Anyone who has...

kids or family has to say that. You can't say, oh, the world is ending. Let's all, you know, eat Twinkies and forget about it. I think you have to be post-tragic. I'm going to put myself and I think you in a different category, which is we're post-post-tragic. Oh, all right. You know, we're not as doomy because you have to make sense of this in some way. You have to unleash it. And I liked when he kind of was saying, okay, well, we were wrong to say AI winter and AI pause. We should have said, you know, I don't know, AI hot girl summer or whatever. Yeah. But

This idea of redirecting the technology. And that, I think, was actually a more compelling way to frame the conversation. 100%. Yeah, I think 100%. I think that letter was stupid, and I thought I said so at the time. It's like, come on. But Andrew Yang signed it. Whatever. Good luck, Andrew. That was good.

I was on the list. I think there's a lot of, you know, peacocking here by a lot of people. And so I want to get to solutions, because we have a record of what happened the first time. And so we have an opportunity to do something. History doesn't have to repeat itself. I know that's an old trope. You're right. History doesn't have to repeat itself. I like that conversation about history.

Yeah.

And big problems present opportunities for people to get over their problems. Yeah, I'm of the mind that people are a lot... that average people are a lot smarter and more reasonable than our public discourse would show. I remember the day after. I had not thought of that in years. It was such a memory when it came back. And it was...

People were silenced. And everyone understands the atomic bomb had been dropped. You know, people understand war. But that was an important media moment. And I had utterly forgotten, but it impacted everybody in a bad way for good, if that makes sense. Yeah. I like that you lived it, but he kind of mansplained it to you, by the way. Yeah.

It's true. He's like, let me tell you who made this film and what it was about, Kara. I'm aware. I watched it. I remember being there at the Georgetown University, and it was such a moment. It really was. When was it? 1983. I wasn't born, yeah. Oh, okay then. You shouldn't have to worry about it at all. It reminds me of a person I met.

I made a Mount St. Helens reference, and they're like, what's Mount St. Helens? And I said, it erupted in whatever year it was. And they're like, oh, I wasn't born. I was like, all right, I'm just going to leave the conversation. Mount St. Helens. But one of the things that frustrated me, not the mansplaining, was the failure of the analogies. He talked about AI as being like these blinding lasers, but blinding lasers are a brute, singular weapon that's easier to kind of get ahead of. He talked about nuclear, which is

more apt, because you have nuclear energy as well as nuclear weapons. But we have had Hiroshima, and the world was just organized differently at that time. The biosafety labs analogy was a little bit more interesting. And then you raised CRISPR, which I think is probably the best one here. People will violate it, but still: what can we do as a group?

I had forgotten about lasers on the battlefield. Like, what's wrong with us? And I'm surprised more people don't use it. And maybe they will in the future. Maybe the catch is off and we're over the edge, as the song goes. But you should contemplate the worst thing. Yes. And I think the difference is it's easier to get a group of scientists, like in CRISPR, to agree on ground rules than a bunch of capitalists. Yeah, they won't. Who are incentivized to make money. And even the scientists couldn't hold it together. You had, you know, China do it. It wasn't that many that we know of. I love that.

Again, business and tech have kind of disemboweled government, and they're like, well, you've got to really figure this out, guys. This is on you. We're in partnership, but it's really on you. Elon now owns a presidential candidate. For the man who has everything.

He's got this. And let's end on that. I'm going to calculate my P-Doom. Meanwhile, can you read us out, please? Yes. Today's show was produced by Naeem Araza, Blakeney Schick, Christian Castro-Russell, and Megan Burney. Special thanks to Mary Mathis. Our engineers are Fernando Arruda and Rick Kwan. Our theme music is by Trackademics.

If you're already following this show, welcome to the world of post-tragedy. Hey, it could be worse. If not, it's a high P-Doom for you. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.