Support for this podcast comes from Avangrid. This is definitely a blue-collar community, and I'm kind of a blue-collar guy. Rick Sealscott didn't see himself as a farmer, but wasn't about to sell his grandparents' Ohio farm. And Avangrid Wind Farm pays millions to the community and landowners like him each year. Farming's up and down, but the wind turbines give us steady income. We're holding on to the farm, and we're making money. And I would absolutely do it again.
Discover where energy meets humanity at avangrid.com. Hey, everybody. It's Sabrina. I'm here to talk to you one last time this week about the New York Times audio subscription. Sorry, we just know that change is hard, and we want to make absolutely certain that all of you, our lovely, incredible audience, understand what's going on. So two things. First, today's show is not as long as it looks. This week, we're doing this kind of unusual thing where we attach episodes of other New York Times podcasts right to the end of our show. So today, that's the fabulous podcast Hard Fork, where hosts Kevin Roose and Casey Newton break down the latest tech news with smarts, humor, and expertise.
We're doing that because, and here's the second part of all of this, the New York Times has launched an audio subscription this week. That subscription gets you full access to shows like ours and Hard Fork, Ezra Klein, The Run-Up, The Interview, and Headlines. So this is included for people who subscribe to all of The New York Times, the big bundle with news, cooking, and games. But you can also sign up for audio separately.
Either of these subscriptions will allow you to log in on Apple Podcasts or Spotify, and you'll have access to all past shows, bonus content, early access, stuff like that. Reminder, you don't have to subscribe to keep listening to The Daily. Recent episodes will still be free. But we hope you'll see it as a way to support the show and the work we do.
Okay, thank you for bearing with us. And with all of these announcements this week, TGIF, we just want everyone to know the deal. And as always, thank you for listening. From The New York Times, I'm Sabrina Tavernise, and this is The Daily. ♪
We are following a major breaking story out of the Middle East. Israel says the leader of Hamas is dead. Yahya Sinwar, the leader of Hamas and the architect of the October 7th attack, was killed by Israeli forces in Gaza. Today, the images of Sinwar's body lying in rubble, surrounded by Israeli troops, sent shockwaves through the region.
Sinwar's assassination dealing a major blow to Hamas amid the threat of wider escalation in this region. It was a major victory for Israel and prompted calls from Israeli leaders for Hamas to surrender. This war can end tomorrow. It can end if Hamas lays down its arms and returns our hostages. But what actually happens next is unclear.
Today, my colleague Ronen Bergman on how Israel got its most wanted man and what his killing means for the future of the war. It's Friday, October 18th. So, Ronen, we're talking to you around 3 p.m. New York time, 10 p.m. your time.
Just a few hours ago, we started getting hints that Israel possibly killed the leader of Hamas, Yahya Sinwar. And just a little while ago, it was announced that he was, in fact, killed. What was your first thought when you heard the news? So, candidly...
I thought, oh, here we go again. Because on the 25th of August, I got a call from a source who said, we believe Sinwar is dead. And then the source called again and said they thought it was Sinwar, but it was not. So I thought at the beginning, maybe it's the same thing happening again. But then we got the first picture. And when you look at the picture that was just taken from the site,
Though, you know, I'm not a forensic expert, it looked like the body of the leader of Hamas. And an hour later, when my hunch was that it was him, though not yet confirmed, I thought this is a watershed moment where the war can end and maybe, maybe the hostages could come back.
This is a critical moment where things maybe can go, for the first time in so long, for the better. Okay, we'll get to that. But remind us first who Sinwar was. Sinwar was one of the most important people in the history of Hamas. Yahya Sinwar was born in a refugee camp in Khan Younis, in the south of the Gaza Strip.
And he was one of the young first followers of Sheikh Ahmed Yassin, the founder and the leader and the spiritual compass for Hamas, the jihadist religious movement he founded in the 80s in Gaza. Sinwar was leading a small, tough, brutal unit that was in charge of the internal security,
executing what they saw as collaborators with Israel and people doing what they saw as blasphemy. Then he was sent to prison in Israel, serving multiple life sentences. He was released in the Shalit deal in 2011. Shortly afterwards, he took control over Hamas in the Gaza Strip, where he was the chosen political leader as well as the most important military leader. He took control basically over these two wings. And in that capacity, throughout the last four years, he planned, organized, pushed, motivated, and executed the horrific attack of October 7th that he masterminded. And he stayed in the Gaza Strip during the year after Israel's devastating invasion, leading the movement. And in spite of numerous attempts by thousands of people, intelligence operatives and commandos, American intelligence joining in, for a year, nobody knew exactly where he was or was able to get near him. Right. He was the invisible man, basically. He was a ghost.
Which is why from the very beginning of this war that came out of the October 7th attack, Israel has had him as their number one target. They've been trying to kill him for a whole year. And now they finally have. Tell us how they finally got him. You know, Sabrina, there was a Shin Bet official, the Israeli internal domestic secret service that plays a primary role in the counterterrorism war against Hamas, who said, Sinwar is so strict with field security and maintaining secrecy that I am sure, and he said that over a year, throughout the last year, that he will be taken out by mistake. Interesting. So no sophisticated, you know, cyber, SIGINT, visual intelligence, operatives on the ground. A small unit,
mainly soldiers who are nine months into the military, in a platoon commanders' course, on Wednesday, on a regular patrol, trying to dismantle some bomb that had a malfunction, saw some people that they soon learned were terrorists walking on the horizon. One of them was throwing grenades at them. They sent a drone. He, according to their description, even threw stones at the drone, trying to take the drone down. And in the firefight, one sniper in that force, and again, nine months in the military, these are very young, between 18 and 19 years old. Trainees, basically. Trainees. One sniper shot one of the terrorists in the head.
Then drones came to help them and took down part of the building where they were hiding. Hours later, they flew a drone into the building. Those are like miniature drones with cameras. They saw the body of one of the terrorists. The drone got near the body. Suddenly, one of them said, that looks like Sinwar. And as more evidence came from the scene, the news spread in the circles of secrecy, the intelligence community and the leadership of the military, and later the prime minister's office, that there is a strong chance that he is dead. It's all by coincidence. So basically, after a year of trying to kill him, using all of the technology and intelligence that Israel has at its fingertips,
They killed him kind of by accident, is what you're saying, like almost, you know, by mistake. And it was a bunch of trainees who did it. And he wasn't even in a tunnel, like everyone assumed. He was just kind of walking around out there. So one of the things that we had already identified, this phenomenon, which was discovered by Israeli intelligence only in the later stages of the war, is that it is very hard to spend constant days and weeks in the tunnels. You know, on our embeds with the IDF on tours in Gaza, I have been in many of these tunnels. And let me tell you, Sabrina, even in the tunnels that are built for the Hamas leadership, and we have been in a few, it's very hard to stay. Humid, claustrophobic, small, narrow tunnels, and everybody needed to go out from time to time. Sinwar thought that the area was free of enemy hostiles. He was wrong. He was killed. Now, Ronen, I understand from your recent reporting that this was not the first time that the IDF was at least close to killing him. Tell me that story.
Yes, okay. So in January of 2024, so this year, Sinwar is in a tunnel under Khan Younis. This is where he was born. With his family, hostages, bodyguards, and other Hamas militants. And he's watching the nightly Israeli news, as he does, because, you know, he's fluent in Hebrew and an avid, consistent student of Israel.
And Gallant, the defense minister, started talking about Sinwar, about the hunt for Sinwar, the number one wanted on the Israeli kill list. He says that Sinwar can hear the D9 bulldozers hammering above his head. And Sinwar says,
It's like, wait, that's correct. I can hear the bulldozers. So he realizes that the Israelis must know where he is. So he leaves in a hurry. He leaves everything behind. He takes his family, but leaves behind a lot of money, $1 million. And importantly, he also leaves behind a computer with a bunch of documents.
And these documents turn out to be the most important insight on the attack of October 7 and Sinwar's role in it. We'll be right back.
So what are these important documents on this computer that they found? And what's the story that the documents reveal? So what is there is the first understanding of the decision-making process, the preparations, the deception, everything that Hamas leaders were doing throughout the last two years before the war. There are the minutes of these meetings, 10 meetings from July 1st
2021 until the 7th of August 2023, so exactly two months before the attack, where Hamas leaders, Sinwar and five other military leaders, were talking freely because they were sure that Israeli intelligence
doesn't listen in to that room. So these documents are minutes of high-level meetings about military plans from 2021 on. Yes, it's the first understanding of the decision-making process, the preparations that Hamas leaders were doing during these two years.
So, Ronen, what story do those minutes tell? So I think the first story is that the deception is so sophisticated. It's tactical and strategic. They have been preparing. Sinwar is telling his troops: during the last year, do many military exercises, so the Israelis will get used to seeing you drilling holes and having forces marching or running from one place to another in the Gaza Strip, and it would not look, you know, odd, or like part of a preparation for war, but just like another military exercise. And also, he says, do not do this in hiding. Because if you try to hide something, then the Israelis will think that you are trying to hide something. But if you do it in the open, if you bring television, then they think that you don't mean it.
So he's trying to deceive Israel here. He's trying to lull Israel into a sense of complacency. Exactly. And the other thing that they were doing was to try and recruit partners, to convince the other members of the so-called Axis of Resistance, so Iran and Hezbollah, to join in, to attack together. And in one of the minutes, Sinwar is saying,
Let us not forget that beyond breaking the defenses of Israel, our goal is to destroy the state of Israel, to collapse the state of Israel. If we attack alone, he says, we will probably cause damage, a lot of damage. But if we get the front, the Axis of Resistance, to join us, we might be able to achieve our primary goal: to take down the state of Israel, collapse it, or at least set the state of Israel years backwards. And so they sent a special envoy. They sent him secretly to Beirut and to Tehran to try and convince the Iranians and Hezbollah that this is the time to attack. Now, as a senior Iranian intelligence official told him, and he updates the military council when he comes back, this is in August, just two months before the war,
The Iranian official told him, listen, we agree with the plan in principle. We are with you, but we need a little more time to prepare. So that actually clarifies something we've been wondering throughout this entire war, which has been to what extent did Iran know about and participate in the planning and execution of October 7th? Exactly. Hezbollah and Iran tell them, we might support you if you start the war,
but we are not joining you on that day for a synchronized, multi-front, regional surprise attack, because we need more time. And Sinwar decided that in spite of the fact that he is not getting their consent to a synchronized attack, he will attack anyway. And they give two reasons. One
is that they are afraid that a new anti-missile defense system is going to be deployed by Israel in 2024. And the other one is the breakdown of the new government of Israel. Netanyahu's attempt at the judicial overhaul is weakening Israeli society. And just to explain, you're talking about that very divisive effort by Netanyahu's new right-wing government to overhaul the country's court system. A lot of Israel was very unhappy about it. There were huge protests. For a moment there, the country basically ground to a halt. Yes. And so Sinwar says, this is a historical opportunity, a historical window for us to attack. And this is why they decide to attack even when they are alone.
And he wants to attack on Yom Kippur, but then they decide on the last of the High Holidays, the 7th of October. They set the date in August, and they strike. And we know what happens next. 1,200 Israelis die on October 7th. Hundreds of hostages are taken. And some of them, of course, are still in Gaza today, around 100. Many others are dead.
And then in Gaza, over 40,000 people have been killed. And now the war has moved to Lebanon and potentially even Iran. In a way, Sinwar, a year delayed, got what he wanted: a regional war. And Ronen, just explain what his death means for this regional war. Sinwar had a plan. He knew that he was the most wanted person by Israel. And long ago, they already agreed on what would happen if he was killed. And this is for the continuity of the organization. Sinwar is being succeeded by his brother, Mohammed, who is considered to be even more of a hardliner, more extreme, more lethal and brutal than his brother. He has already assumed control. So as important as Sinwar was, they planned for this. And a new leader is literally already in control just hours later, his brother, Mohammed. Yes. And he will need to make a call. Is he going to agree to a deal, strive to end the war,
or continue it? Well, let's talk about that deal, the potential for a ceasefire deal between Hamas and Israel that would cease hostilities and bring hostages back home to Israel. I mean, this was something that Israel and the U.S. accused Sinwar of really being the blockage for. What becomes of that deal now? I am not sure it was just Sinwar. I believe that it's 50-50 responsibility with the government of Israel, with the prime minister of Israel,
Maybe even a different balance. And at the end of the day, the government of Israel will need to make two critical decisions now. This is a watershed moment in history. They can use that and have the Qataris pick up the phone, call Hamas and say, let's reconvene and let's have a deal to free the hostages because they are dying.
And there are dozens of hostages that are still alive, and there's a chance to free them. And the other decision is whether to use that moment to end this multi-front regional war. And maybe use that moment to bargain some kind of a deal with Iran that would also bring some peace to the north with Hezbollah and Iran.
at the end, form something that could turn the page into something better, a better day for the Middle East. Unfortunately, I'm not sure that this will happen, because Israel is now gearing up for a massive attack on Iran.
But Ronen, I want to understand that, because Israel has always said since October 7 that they want to eliminate Hamas. And a huge part of that was killing the head of Hamas, Sinwar. And now they've done that. And Gaza is quite devastated. So why is this not a reason to say mission accomplished and end the war? The first thing is hubris.
Israel has gained some successes in its war against Hezbollah, and the Israeli defense establishment is regaining its pride from these successes. And they feel that this can go on. And even without a clear exit strategy, some of them believe they can just win again and again. The second is that Netanyahu is being coerced by extreme parts of his coalition to continue, not to sign a ceasefire, and to start some kind of implementation of military rule in Gaza. So, Israel coming back to Gaza, and we hear parts of the coalition talking about the reestablishment of settlements in Gaza that were taken down when Israel disengaged from the Strip. And also, Prime Minister Netanyahu knows that when the war is over, officially over, so there's some kind of an agreement, there will be questions that he will be asked. There will be a demand to establish an inquiry panel, maybe elections, things that he's afraid of. So in other words, he's concerned with his own power. And the continuation of the war, as horrible as it sounds, is good for the integrity of the coalition.
Ronen, do you know what the reaction has been inside Gaza to this? So our colleague Bilal has spoken with different people in Gaza. And, you know, the reactions are diverse, a vast variety, as you can expect. Some people blame Sinwar for October 7 and everything that happened after, so they're happy. Some people saw him as a local and a regional hero, and they are sad. They see him as the symbol of Palestinian struggle and Palestinian defiance, who caused the enemy by far the most devastating damage. And there are many who say it just won't change anything. And I think maybe this is the most gloomy and sad example, where even after the death of Sinwar, the mastermind of the attack, it would just continue the same. Ronen, what about inside Israel? What has been the reaction inside Israel? You're in Tel Aviv. What do you think Sinwar's killing means for the Israeli people, for the Israeli psyche? People are cheering. They see him as nothing less than Satan himself.
They identify him, and I think with all good reason, with the atrocities, the kidnapping, the sexual abuses, the mass murder of October 7th, and the war that ensued. And so, first of all, they have a sense of retribution. I think most of the public would realize that there is no more symbolic end to the war than this one. But they also understand it's not the end, and they are very much concerned about it.
And this is common among all Israelis. They are very much concerned about the fate of the hostages. And they also understand now, I think better than a year ago, that it's not just about Hamas, but it's also about the Netanyahu government. Ronen, thank you. Thank you, Sabrina.
On Thursday, during a campaign visit in Milwaukee, Vice President Kamala Harris, when asked about Sinwar's death, said, quote, this moment gives us an opportunity to finally end the war in Gaza. Meanwhile, in Israel, Prime Minister Benjamin Netanyahu told Gazans that if they set aside their weapons and returned the hostages, Israel would, quote, allow them to leave and live. But he made no mention of bringing the war to an end.
We'll be right back. Here's what else you should know today. On Thursday, an independent panel reviewing the failures that led to the attempted assassination of former President Donald Trump in Butler, Pennsylvania, said that agents involved in the security planning did not take responsibility in the lead up to the event, nor did they own the failures in the aftermath.
The panel, which included former Department of Homeland Security Secretary Janet Napolitano, called on the Secret Service to replace its leadership with people from the private sector and shed much of its historic role investigating financial crimes to focus almost exclusively on its protective mission.
And federal prosecutors have charged a man they identified as an Indian intelligence officer with trying to orchestrate from abroad an assassination on U.S. soil.
An indictment unsealed in Manhattan on Thursday said that the man, Vikash Yadav, whom authorities believe is currently in India, directed the plot that targeted a New York-based critic of the Indian government, a Sikh lawyer and political activist. The charges are part of an escalating response from the U.S. and Canada to what those governments see as brazenly illegal conduct by India, a longtime partner.
Today's episode was produced by Mooj Zadie, Rob Szypko, Diana Nguyen, and Eric Krupke. It was edited by Paige Cowett and M.J. Davis Lin, with help from Chris Haxel. It contains original music by Rowan Niemisto and Diane Wong, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
Special thanks to Patrick Kingsley. Remember to catch a new episode of The Interview right here tomorrow. David Marchese talks to adult film star turned influencer Mia Khalifa.
I am so ashamed of the things that I've said and thought about myself and allowed others to say, and jokes that I went along with and contributed to about myself or about other women or anything like that. I'm extremely ashamed of that. That's it for The Daily. I'm Sabrina Tavernise. See you on Monday.
We just got another email from somebody who said they thought I was bald. I have an apparently crazy bald energy that I bring to this podcast. What do you think is bald seeming about you? I think for me, they think of me as a wacky sidekick, which is a bald energy. Is it? You know?
I think so. I don't associate wacky and bald, because I'm thinking Jeff Bezos. I'm like, I know a lot of very hardcore bald men. So do you think that maybe people think that I sound like a sort of titan-of-industry plutocrat? I would not say that the energy you're giving is plutocrat energy, but... Oh, really? Because I just fired 6,000 people to show that I could. Yeah.
You did order me to come to the office today. I did. I said there's a return to office in effect immediately. No questions.
I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, are we reaching the AI endgame? A new essay from the CEO of Anthropic has Silicon Valley talking. Then, Uber CEO Dara Khosrowshahi joins us to discuss his company's new partnership with Waymo and the future of autonomous vehicles. And finally, internal TikTok documents tell us exactly how many videos you need to watch to get hooked. And so I did. Very brave. God help me.
Well, Kevin, the AI race continues to accelerate, and this week the news is coming from Anthropic. Now, last year, you actually spent some time inside this company, and you called it the white hot center of AI doomerism. Yes, well, the headline of my piece called it the white hot center of AI doomerism. Just want to clarify. Blame the headline.
Well, you know, reporters don't often write our own headlines, so I just feel the need to clarify that. Fair enough. But the story does talk about how many of the people you met inside this company seemed strangely pessimistic about what they were building. Yeah, it was a very interesting reporting experience, because I got invited to spend several weeks just basically embedded at Anthropic as they were gearing up to launch an update of their chatbot, Claude.
And I sort of expected, you know, they would go in and try to impress me with how great Claude was and talk about all the amazing things it would allow people to do. And then I got there, and it was like, all they wanted to do was talk about how scared they were of AI and of releasing these systems into the wild. I compared it in the piece to being a restaurant critic who shows up at a buzzy new restaurant.
And all anyone wants to talk about is food poisoning. Right. And so for this reason, I was very interested to see over the past week the CEO of Anthropic, Dario Amodei, write a 13,000-word essay about his vision of the future.
And in this essay, he says that he is not an AI doomer, does not think of himself as one, but actually thinks that the future is quite bright and might be arriving very quickly. And then shortly after that, Kevin, the company put out a new policy, which they call a responsible scaling policy that I thought had some interesting things to say about ways to safely build AI systems. So we wanted to talk about this today for a couple reasons.
One is that AI CEOs have kept telling us recently that major changes are right around the corner. Sam Altman recently had a blog post where he said that an artificial superintelligence could be just a few thousand days away. And now here, Amodei is saying that AGI could arrive in 2026, which, check your calendar, Kevin, that is in 14 months. And certainly there is a case that this is just hype.
But even so, there are some very wild claims in here that I do think deserve broader attention.
The second reason that we want to talk about this today is that Anthropic is really the flip side to a story that we've been talking about for the past year here, which is what happened to OpenAI during and after Sam Altman's temporary firing as CEO. Anthropic was started by a group of people who left OpenAI primarily over safety concerns. And recently, several more members of OpenAI's founding team and their safety research teams have gone over to Anthropic.
And so in a way, Kevin, Anthropic is an answer to the question of what would have happened if OpenAI's executive team hadn't spent the past few years falling apart? And while they are still the underdog compared to OpenAI, is there a chance that Anthropic is the team that builds AGI first? So that's what we want to talk about today. But first...
I want to start by just talking about this essay. Kevin, what did Dario Amodei have to say in his essay, Machines of Loving Grace? Yeah, so the first thing that struck me is he is clearly reacting to this perception, which I may have helped create through my story last year, that he and Anthropic are just doomers, right? That they are just a company that goes around warning about how badly AI could go if we're not careful. And what he says in this essay that I thought was really interesting and important is,
you know, we're going to keep talking about the risks of AI. This is not him saying, I don't think this stuff is risky, I've been, you know, taken out of context and I'm actually an AI optimist. What he says is it's important to have both, right? You can't just be going around warning about the doom all the time. You also have to have a positive vision for the future of AI, because that's not only what inspires and motivates people, but because it matters what we do.
I thought that was actually the most important thing that he did in this essay was he basically said, look, this could go well or it could go badly. And whether it goes well or badly is up to us. This is not some inevitable force. You know, sometimes people in the AI industry, they have a habit of talking about AI as if it's just kind of this disembodied force that is just going to, you know, happen to us. Inevitably. Yes. And we either have to sort of like get on the train or get run over by the train.
And what Dario says is actually different. He says, you know, this is, here's a vision for how this could go well, but it's going to take some work to get there. It also made me realize that for the past couple of years, I have heard much more about how AI could go wrong than how it could go right from the AI CEOs, right? As much as these guys get knocked for endlessly hyping up their products, they also have, I think to their credit, spent a lot of time trying to explain to people that this stuff is risky. And so there was something,
almost counterintuitive about Dario coming out and saying, wait, let's get really specific about how this could go well. Totally. So I think the first thing that's worth pulling out from this essay is the timelines, right? Because as you said, Dario Amodei is claiming that powerful AI, which is sort of his term, he doesn't like AGI, he thinks it sounds too sci-fi, but powerful AI, which he sort of defines as an AI that would be smarter than a Nobel Prize winner in basically any field, and that could basically control tools, go do a bunch of tasks simultaneously. He calls this sort of a country of geniuses in a data center. That's sort of his definition of powerful AI. And he thinks that it could arrive as soon as 2026.
I think there's a tendency sometimes to be cynical about people with short timelines like these, like, oh, these guys are just saying this stuff is going to arrive so soon because they need to raise a bunch of money for their AI companies. And, you know, maybe that is a factor. But I truly believe that at least Dario Amodei is sincere and serious about this.
This is not a drill to him. And Anthropic is actually making plans, scaling teams and building products as if we are headed into a radically different world very soon, like within the next presidential term. Yeah. And look, Anthropic is raising money right now. And that does give Dario motivation to get out there in the market and start talking about curing cancer and all these amazing things that he thinks AI can do.
At the same time, I think that we're in a world where the discourse has been a little bit poisoned by folks like Elon Musk who are constantly going out into public, making bold claims about things that they say are going to happen within six months or a year and then truly just never happen. And our understanding of Dario based on our own conversations with him and of people who work with him is like,
He is not that kind of person. This is not somebody who lets his mouth run away with him. When he says that he thinks this stuff could start to arrive in 14 months, I actually do give that some credibility. Yeah, and you know, you could argue with the timescales, and plenty of smart people disagree about this, but I think it's worth taking this seriously, because this is the head of one of the leading AI labs sort of giving you his thoughts on not just what AI is going to change about the world, but when that's going to happen.
And what I liked about this essay was that it wasn't trying to sell me a vision of a glorious AI future, right? Dario says, you know, all or some or none of this might come to pass. But it was basically a thought experiment. He has this idea in the essay about what he calls the compressed 21st century. He basically says, what if all AI does...
is allow us to make 100 years worth of progress in the next 10 years in fields like biology. What would that change about the world? And I thought that was a really interesting way to frame it. Give us some examples, Kevin, of what Dario says might happen in this compressed 21st century. So what he says in this essay is that if we do get what he calls powerful AI relatively soon, that in the sort of decade that follows that,
We would expect things like the prevention and treatment of basically all natural infectious disease, the elimination of most types of cancer, very sort of good embryo screening for genetic diseases that would make it so that more people didn't die of these sort of hereditary things.
He talks about there being improved treatment for mental health and other ailments. Yeah, I mean, and a lot of this comes down to just understanding the human brain, which is an area where we still have a lot to learn. And the idea is if you have what he calls this country of geniuses that's just operating on a server somewhere,
and they are able to talk to each other, to dream, to suggest ideas, to give guidance to human scientists in labs, to run experiments, then you have this massive compression effect and all of a sudden you get all of these benefits really soon. And, you know, obviously the headline grabbing stuff is like, you know, Dario thinks we're going to cure all cancer and we're going to cure Alzheimer's disease. Obviously those are huge, but
There's also kind of the more mundane stuff like, do you struggle with anxiety? Do you have other mental health issues? Like, are you like mildly depressed? It's possible that we will understand the neural circuitry there and be able to develop treatments that would just lead to a general rise in happiness. And that really struck me. Yeah. And it sounds when you just describe it that way, it sounds sort of utopian and crazy, but
But what he points out, and what I actually find compelling, is like most scientific progress does not happen in a straight line, right? You have these kind of moments where there's a breakthrough that enables a bunch of other breakthroughs.
And we've seen stuff like this already happen with AI, like with AlphaFold, which won the freaking Nobel Prize this year in chemistry, where you don't just have a cure for one specific disease, but you have a way of potentially discovering cures for many kinds of diseases all at once. There's a part of the essay that I really liked where he points out that
CRISPR was something that we could have invented long before we actually did, but essentially no one had noticed the things they needed to notice in order to make it a reality. And he posits that there are probably hundreds of other things like this right now that just no one has noticed yet. And if you had a bunch of AI agents working together in a room and they were sufficiently intelligent, they would just notice those things and we'd be off to the races.
Right. And what I liked about this section of the essay was that it didn't try to claim that there was some, you know, completely novel thing that would be required to result in the changed world that he envisions. Right. All that would need to happen for society to look radically different 10 or 15 years from now in Dario's mind is for that sort of base rate of discovery to
accelerate rapidly due to AI. Yeah. Now, let's take a moment to acknowledge folks in the audience who might be saying, "Oh my gosh, will these guys stop it with the AI hype? They're accepting every premise that these AI CEOs shovel at them. They can't get enough, and it's irresponsible. These are just stochastic parrots, Kevin. They don't know anything. It's not intelligence, and it's never going to get any better than it is today."
And I just want to say, I hear you and I see you. And our email address is ezrakleinshow@nytimes.com. But here's the thing. You can look at the state of the art right now. And if you just extrapolate what is happening in 2024 and you assume some rate of progress beyond where we currently are, it seems likely to me that we do get into a world where you do have these sort of simulated PhD students or maybe simulated super geniuses, and
they are able to realize a lot of these kinds of things. Now, maybe it doesn't happen in five, 10 years. Maybe it takes a lot longer than that. But I just wanted to underline, like, we are not truly living in the realm of fantasy. We are just trying to get a few years and a few levels of advancement beyond where we are right now. Yeah. And Dario does in his essay, make some caveats about things that might constrain the rate of progress in AI. Like,
regulation or clinical trials taking a long time. You know, he also talks about the fact that some people may just
opt out of this whole thing. Like, they just may not want anything to do with AI. There might be some political or cultural backlash that sort of slows down the rate of progress. And he says, you know, that could actually constrain this, and we need to think about some ways to address that. Yeah. So that is sort of the suite of things that Dario thinks will benefit our lives. There's a bunch more in there, things he thinks will sort of help with climate change, other issues. But the essay has five parts, and there was another part of the essay that really caught my attention, Kevin, and it is a part that looks a little bit more seriously at the risks of this stuff, because any super genius that was sufficiently intelligent to cure cancer could otherwise wreak havoc in the world. So what is his idea for ensuring that AI always remains in good hands? So he admits that he's not like a geopolitics expert. This is not his forte. Unlike the two of us. Right.
And there have been, look, a lot of people theorizing about what the politics of advanced AI are going to look like. Dario says that his best guess currently about how to prevent AI from sort of empowering autocrats and dictators is through what he calls an entente strategy. Basically, you want a bunch of democracies to kind of come together to secure their supply chain, to sort of block
adversaries from getting access to things like GPUs and semiconductors, and that you could basically bring countries into this democratic alliance and sort of ice out the more authoritarian regimes from getting access to this stuff. But I think, you know, this was sort of not the most fleshed out part of the argument.
Yeah, well, and I appreciate that he is at least making an effort to come up with ideas for how would you prevent AI from being misused. But as I was reading the discussion around the blog post, I found this interesting response from a guy named Max Tegmark. Max is a professor at MIT who studies machine learning, and he's also the president of something called the Future of Life Institute, which is a sort of nonprofit focused on AI safety.
And he really doesn't like this idea of what Dario calls the Entente, the group of these democracies working together. And he says he doesn't like it because it essentially sets up and accelerates a race. It says to the world that essentially whoever invents super powerful AI first will win forever, right? Because in this view, AI is essentially the final technology that you ever need to invent because after that, it'll just, you know, invent anything else it needs.
And he calls that a suicide race. And the reason is this, and he has a great quote, horny couples know that it is easier to make a human level intelligence than to raise and align it. And it is also easier to make an AGI than to figure out how to align or control it.
Wow. I never thought about it like that. Yeah, you probably never thought I would say horny couple on the show, but I just did. So, Kevin, what do you make of this sort of feedback? Is there a risk there that this effectively serves as a starter pistol that leads maybe our adversaries to start investing more in AI and sort of racing against us and, you know, triggering some sort of doom spiral? Yeah.
Yeah, I mean, look, I don't have a problem with China racing us to cure cancer using AI, right? Like, if they get there first, more power to them. But I think the more serious risk is that they start building the kind of AI that serves Chinese interests, right, that it becomes a tool for surveillance and control of people rather than some of these more sort of democratic ideals.
And this is actually something that I asked Dario about back last year when I was spending all that time at Anthropic, because this is the most common criticism of Anthropic: well, if you're so worried about AI and all the risks that it could pose, why are you building it? And I asked him about this, and his response was, he basically said, look, there's this problem in AI research of kind of intertwining, right? The same technology that sort of advances the state of the art in AI also allows you to advance the state of the art in AI safety, right? The same tools that make the language models more capable also make it possible to control the behavior of the language models. And so these things kind of go hand in hand.
And if you want to compete on the frontier of AI safety, you also have to compete on the frontier of AI capabilities. Yeah, and I think it's an idea worth considering. To me, it just sounds like, wow, you are really standing on a knife's edge there. If you're saying in order to have any influence over the future, we have to be right at the frontier and maybe even gently advance the frontier and yet somehow not accidentally trigger a race where all of a sudden everything gets out of control.
But I do accept and respect that that is Dario's viewpoint. But isn't that kind of what we observed from the last couple of years of AI progress, right? Like, OpenAI got out there with ChatGPT before any of the other labs had released anything similar. And ChatGPT kind of set the tone for all of the products that followed it. And so I think the argument from Anthropic would be like, yeah, we could sort of stay way behind the state of the art, and that would probably make us safer than someone who was actually advancing the state of the art. But then we'd miss the chance to kind of set the terms of what future AI products from other companies will look like. So it's sort of like using soft power in an effort to influence others. Yeah.
Yeah. And the way they put this to me last year was that they wanted instead of there to be just a race for raw capabilities of AI systems, they wanted there to be a safety race, right, where companies would start competing about whose models were the safest rather than whose models could, you know, do your math homework better.
So let's talk about the safety race and the other thing that Anthropic did this week to lay out a future vision for AI. And that was with something that has, I'll say it, kind of a boring name: the responsible scaling policy. I understand. This maybe wasn't going to come up over drinks at the club this weekend. Yeah. But I think this is something that people should pay attention to, because it's an example of what you just said, Kevin. It is Anthropic trying to use some soft power in the world to say, hey, if we went a little bit more like this, we might be safer. All right. So talk about what's in the responsible scaling policy that Anthropic released this week. Well, let's talk about what it is. And the basic idea is just that as
large language models gain new abilities, they should be subjected to more scrutiny and they should have more safeguards added to them. They put this out a year ago and it was actually a huge success in this sense, Kevin. OpenAI went on to release its own version of it. And then Google DeepMind released a similar scaling policy as well this spring.
So now Anthropic is coming back just over a year later, and they say, we're going to make some refinements. And the most important thing that they say is, essentially, we have identified two capabilities that we think would be particularly dangerous. And so if anything that we make displays these capabilities, we are going to add a bunch of new safeguards.
The first one of those is if a model can do its own AI research and development, that is going to start ringing a lot of alarm bells, and they're going to put many more safeguards on it. And second, if one of these models can meaningfully assist someone who has a basic technical background in creating a chemical, biological, radiological, or nuclear weapon, then they would add these new safeguards. What are these safeguards? Well, they have a super long blog post about it, you can look it up, but it includes basic things like taking extra steps to make sure that a foreign adversary can't steal the model weights, for example, or otherwise hack into the systems and run away with it.
Right. And some of this is similar to things that were proposed by the Biden White House in its executive order on AI last year. These are also some of the steps that came up in SB 1047, the AI regulation that was vetoed by Governor Newsom in California recently. So these are ideas that have been floating out there in the AI safety world for a while. But Anthropic is basically saying, we are going to proactively commit to doing this stuff,
even before a government requires us to. Yeah. There's a second thing I like about this, and it relates to this SB 1047 that we talked about on the show. Something that a lot of folks in Silicon Valley didn't like about it was the way that it tried to identify danger. And it was not because of a specific harm that a model could cause. It was by saying, well, if a model costs a certain amount of money to train, right, or if it is trained with a certain amount of compute,
Those were the proxies that the government was trying to use to determine whether this would be dangerous. And a lot of folks in Silicon Valley said, we hate that, because that has nothing to do with whether these things could cause harm or not. So what Anthropic is doing here is saying, well, why don't we try to regulate based on the anticipated harm? Obviously, it would be bad if you could log on to Claude, Anthropic's rival to ChatGPT, and say,
hey, help me build a radiological weapon, which is something that I might type into Claude because I don't even know the difference between a radiological weapon and a nuclear weapon. Do you? I hope you never learn. I hope I don't.
I hope I don't either because sometimes I have bad days, Kevin, and I get to scheming. So for this reason, I think that governments, regulators around the world might want to look at this approach and say, hey, instead of trying to regulate this based on how much money AI labs are spending or like how much compute is involved, why don't we look at the harms we're trying to address and say, hey, if you build something that could cause this kind of harm, you have to do X, Y, and Z. Yeah, that makes sense to me.
So I think the biggest impact that both the sort of essay that Dario wrote and this responsible scaling policy had on me was not about any of the actual specifics of the idea. It was purely about the timescales and the urgency. It is one thing to hear a bunch of people telling you that AI is coming and that it's going to be more powerful than you can imagine, sooner than you can imagine. But if you actually start to internalize that and plan for it,
it just feels very different. If we are going to get powerful AI sometime in the next, let's call it two to 10 years,
You just start making different choices. Yeah, I think it becomes sort of part of the calculus. I can imagine it affecting what you might want to study in college if you are going to school right now. I have friends who are thinking about leaving their jobs because they think the place where they are working right now will not be able to compete in a world where AI is very widespread. So yes, you're absolutely starting to see it creep into the calculus.
I don't know kind of what else it could do. You know, there's no real call to action here because you can't really do very much until this world begins to arrive. But I do think psychologically, we want people to at least imagine, as you say, what it would be like to live in this world because I have been surprised at how little discussion this has been getting. Yeah, I totally agree. I mean, to me, it feels like
We are entering, I wouldn't call it like an AI endgame because I think we're closer to the start than the end of this transformation. But it does feel like something is happening. I'm starting to notice AI's effects in my life more. I'm starting to feel more dependent on it. And I'm also like, I'm kind of having an existential crisis. Really? Yeah.
Not a full-blown one, but typically I'm a guy who likes to plan. I like to strategize. I like to have a five-year and a 10-year plan. And I've just found that my own certainty about the future and my ability to plan long-term is just way lower than it has been at any time that I can remember. That's interesting. I mean, for myself, I feel like that has always been true. You know, in 1990, I did not know what things were going to look like in 2040, and I would be really surprised by a lot of things that have happened along the way. But yeah, there's a lot of uncertainty out there. It's scary, but I also like...
Do you not feel a little bit excited about it? Of course. Look, I love software. I love tools. I want to live in the future, and it's already happening to me. There is a lot of that uncertainty, and that stuff freaks me out. But if we could cure cancer, if we could cure depression, if we could cure anxiety, you'd be talking about the greatest advancement to human well-being certainly in decades, maybe that we've ever seen. Yeah. I mean...
I have some...
because, like, my dad died of a very rare form of cancer. It was like a sub-1% type of cancer. And when he got sick, you know, I read all the clinical trials, and it was just like, there hadn't been enough people thinking about this specific type of cancer and how to cure it, because it was not breast cancer. It was not lung cancer. It was not something that millions of Americans get.
And so there just wasn't the kind of brainpower devoted to trying to solve this. Now, subsequently, it hasn't been solved, but there are now treatments in the pipeline that didn't exist when he was sick. And I just constantly am wondering, like, if he had gotten sick now instead of when he did, maybe he would have lived. And I think that is one of the things that makes me really optimistic about AI is just like,
Maybe we just do have the brainpower or we will soon have the brainpower to devote, you know, world-class research teams to these things that might not affect millions of people, but that do affect some number of people. Absolutely. I just, I don't know. It really, I got kind of emotional.
reading this essay, because, you know, obviously I'm not someone who believes all the hype, but I assign some non-zero probability to the possibility that he's right, that all this stuff could happen. And I just find that so much more interesting and fun to think about than a world where everything goes off the rails. Well, it's just the first time that we've had a truly
positive, transformative vision for the world coming out of Silicon Valley in a really long time. In fact, this vision, it's more positive and optimistic than anything that has been like in the presidential campaign. You know, it's like when the presidential candidates talk about the future of this country, it's like, well, you know, we'll give you this tax break, right? Or we'll make this other policy change. Nobody's talking about how they're going to freaking cure cancer. Right.
Yeah. Right? So I think, of course, we're drawn to this kind of discussion because it feels like, you know, there are some people in the world who are taking really, really big swings. And if they connect, then we're all going to benefit. Yeah. Yeah. When we come back, why Uber has way more autonomous vehicles on the road than it used to.
Well, Casey, one of the biggest developments over the past few months in tech is that self-driving cars now are actually working. Yeah, but this is no longer in the realm of sci-fi. Yes, so we've talked, obviously, about the self-driving cars that you can get in San Francisco now. It used to be two companies, Waymo and Cruise. Now it's just Waymo. And there have also been a bunch of different autonomous vehicle updates from other companies that are involved in the space. And the one that I found most interesting recently was about Uber, right?
Now, as you will remember, Uber used to try to build its own robo taxis. They gave that up back in 2020. That was the year they sort of sold off their autonomous driving division to a startup called Aurora after losing just an absolute ton of money on it. But now they are back.
in the game. And they just recently announced a multi-year partnership with Cruise, the self-driving car company. They also announced an expanded partnership with Waymo, which is going to allow Uber riders to get AVs in Austin, Texas and Atlanta, Georgia. They've been operating this service in Phoenix since last year, and that's going to keep expanding. They also announced that self-driving Ubers will be available in Abu Dhabi through a partnership with the Chinese AV company WeRide.
And they've also made a long-term investment in Wayve, which is a London-based autonomous driving company. So they are investing really heavily in this, and they're doing it in a different way than they did back when they were trying to build their own self-driving cars. Now they are essentially saying, we want to partner with every company that we can that is making self-driving cars.
Yeah, so this is a company that many people take several times a week, Uber. And yet I feel like it sometimes is a bit taken for granted. And while we might just focus on the cars you can get today, they are thinking very long-term about
what transportation is going to look like in five or 10 years. And increasingly for them, it seems like autonomous vehicles are a big part of that answer. Yeah. And what I found really interesting, so Tesla had this robo-taxi event last week where Elon Musk talked about how you'll soon be able to hail a self-driving Tesla. And
what I found really interesting is that Tesla's share price plummeted after that event, but Uber's stock price rose to an all-time high. So clearly people think that, or at least some investors think that Uber's approach is better here than Tesla's. It's the sort of thing, Kevin, that makes me want to talk to the CEO of Uber. And lucky for you,
He's here. Oh, thank goodness. So today we're going to talk with Uber CEO Dara Khosrowshahi. He took over at Uber in 2017 after a bunch of scandals led the founder of Uber, Travis Kalanick, to step down. He has made the company profitable for the first time in its history. And I think a lot of people think he's been doing a pretty good job over there.
And he is leading this charge into autonomous vehicles. And I'm really curious to hear what he makes, not just of Uber's partnership with Waymo, but of sort of the whole self-driving car landscape. Let's bring him in. Let's do it.
Dara Khosrowshahi, welcome to Hard Fork. Thank you for having me. So you were previously on the board of the New York Times company until 2017, when you stepped down right after taking over at Uber. I assume you still have some pull with our bosses, though, because of your years of service. So can you get them to build us a nicer studio? I didn't have pull when I was on the board, and I certainly have zero pull now. I've got negative pull, I think. They're taking revenge on me. Yeah.
Well, since you left the board, they're making all kinds of crazy decisions, like letting us start a podcast. Oh, my God. Yeah. But all right. So we are going to talk today about your new partnership with Waymo and the sort of autonomous driving future. I would love to hear the story of how this came together, because I think for people who've been following this space for a number of years, this was surprising. Uber and Waymo have not historically had a great relationship. The two companies were... It was a little rocky at first, yes. Embroiled in litigation and lawsuits and trade secret theft and things like that. It was a big deal. And so, did they approach you? Did you approach them? How did this partnership come together? I guess it's time healing, right? When I came on board,
We thought that we wanted to establish a better relationship with Google generally, Waymo generally. Even though we were working on our own self-driving technology, it was always within the context of we were developing our own, but we want to work with third parties as well. One of the disadvantages of developing our own technology was that some of the other players, the Waymos of the world, etc.,
heard us, but didn't necessarily believe us. It's difficult to work with players that you compete with. So one of the first decisions that we made was, we can't be in between here. Either you have to go vertical or you have to go with a platform strategy. You can't achieve both. And we had to make a bet. We either have to do our own thing or we have to do it with partners. Yeah, absolutely. And so that strategic kind of fork became quite apparent to me.
And then the second was just, what are we good at? Listen, I'll be blunt. We sucked at hardware, right? We tried to apply software principles to hardware. It doesn't work. Hardware is a different pace, different demands in terms of perfection, et cetera. And ultimately that fork: do we go vertical? And there are very few companies that can do software and hardware well. Apple and Tesla are arguably among the few in the world.
and we decided to make a bet on the platform.
And so once we made that bet, we went out and identified who are the leaders. Waymo was a clear leader. First, we had to make peace with them and settle in court, et cetera. We got Google to be a bigger shareholder. And then over a period of time, we built relationships. And, you know, I do think there's a synergy between the two. So it just makes sense, the relationship. And we're very, very excited to, on a forward basis, expand it pretty significantly. Yeah.
So this was, I feel like, maybe your most consequential decision to date as the CEO of this company. If you believe that AVs are going to become the norm for many people hailing a ride in 10 or 15 years, it's conceivable that they might open up the Waymo app, right? And not the Uber app. Waymo has an app to order cars. I use it fairly regularly, right? So what gave you the confidence that, in that world, it will still be Uber
that is the app that people are turning to and not Waymo or whatever other apps might have arisen for other AV companies? I think first is that it's not a binary outcome, okay? I think that a Waymo app and an Uber app can coexist. We saw it in my old job in the travel business, right? I ran Expedia, and there was this drama: is Expedia going to put the hotel chains out of business? Are the hotel chains going to put Expedia out of business? The fact is both thrived.
And there's a set of customers who book through Expedia. There's a set of customers who book hotels direct, and both businesses have grown, and interactivity in general has grown. Same thing if you look at food, right? McDonald's has its own app. It's a really good app. It has a loyalty program. Starbucks has its own app with a loyalty program. Yet both are engaging with us through the Uber Eats marketplace.
So my conclusion was that there isn't an either-or. I do believe there will be other companies. There'll be Cruise and there'll be WeRide and Wayve, et cetera. There'll be other companies and self-driving choices. And the person who wants utility, speed, ease, familiarity will choose Uber, and both can coexist and both can thrive. And both are really going to grow, because autonomous will be the future eventually.
So tell us more about the partnership with Waymo that is going to take place in Austin and Atlanta. Who is actually paying for the maintenance of the cars? Does Uber have to sort of make sure that there's no, you know, trash left behind in the cars? Like what is Uber actually doing in addition to just making these rides available through the app? Sure. So I don't want to talk about the economics because they're confidential in terms of the deal. But yeah,
In those two cities, Waymo will be available exclusively through the Uber app. And we will also be running the fleet operations as well. So depots, recharging, cleaning. If something gets lost, making sure that it gets back to its owner, etc. And Waymo will provide the software driver, will obviously provide the hardware, repair the hardware, etc. And then we will be doing the upkeep and operationalization.
operating the networks, so to speak. - And for riders, if you want to get in a Waymo in one of those cities through Uber, is there an option to specifically request a self-driving Waymo, or is it just kind of chance? Like if the car that's closest to you happens to be a Waymo, that's the one you get? - Right now, the experience, for example, in Phoenix, is that it's by chance. If you get one by chance, then you can say, "Yes, I'll do it," or not.
And I think that's what we're going to start with. But there may be some people who only want Waymos and there are some people who may not want Waymos. And we'll solve for that over a period of time. It could be personalizing.
preferences, or it could be what you're talking about, which is I only want a Waymo. Do the passengers get rated by the self-driving car the way that they would in a human-driven Uber? Not yet, but that's not a bad idea. What about tipping? Like if I get out of a self-driven Uber, is there an option to tip the car if it did a good job? I'm sure we could build that. Why not? I don't know. I do wonder if people are going to tip machines.
I don't think it's likely, but you never know. It sounds crazy, but at some point someone is going to start asking because they're going to realize it's just free margin. You know, it's like even if only 100 customers do it in a whole year, I don't know. You know, it's just free money. I mean, the good news is 100% of tips go to drivers now, and we definitely want to keep that. So we like the tipping habits.
But whether people tip machines is TBD. Yeah. And how big are these fleets? I think I read somewhere recently that Waymo has about 700 self-driving cars operating nationwide. How many AVs are we talking about in these cities? We're starting in the hundreds and then we'll expand from there. I know you don't want to discuss the economics, even though I would love to learn what the split is there. I'm not going to tell you. But you did recently talk about the margins on
autonomous rides being lower than the margins on regular Uber rides for at least a few more years. That's not intuitive to me because in an autonomous ride, you don't have to pay the driver. So you would think the margin would be way higher for Uber. But why would you make less money if you don't have to pay a driver? So generally, our design spec in terms of how we build businesses is any newer business, we're going to operate at a lower margin while we're growing that business.
You don't want it to be profitable day one. And that's my attitude with autonomous, which is again, get it out there, introduce it to as many people as possible. At a maturity level, generally, if you look at our take rate,
Around the world, it's about 20%. We get 20%, the driver gets 80%. We think that's a model that makes sense for any autonomous partner going forward. And that's what we expect. I kind of don't care, honestly, what the margins are for the next five years. The question is, can I get lots of supply? Can it be absolutely safe? And, you know, does that 20-80 split look reasonable going forward? And I think it does. Yeah. Yeah.
I want to ask about Tesla. You mentioned them a little earlier. They held an event recently where they unveiled their plans for a robo-taxi service. Do you consider Tesla a competitor?
Well, they certainly could be, right? If they develop their own AV and they decide to go direct only through the Tesla app, they would be a competitor. And if they decide to work with us, then we would be a partner as well. And to some extent, again, both can be true. So I don't think it's going to be an either-or.
I think Elon's vision is pretty compelling, especially like you might have these cyber shepherds or these owners of these fleets, etc. Those owners, if they want to have maximum earnings on those fleets, will want to put those fleets
on Uber. But at this point, it's unknown what his intentions are. There's this big debate that's playing out right now about who has the better AV strategy between Waymo and Tesla in the sense that the Waymos have many, many sensors on them. The vehicles are much more expensive to produce.
Tesla is trying to get to full autonomy using only its cameras and software. And Andrej Karpathy, the AI researcher, recently said that Tesla was going to be in a better position in the long run because it ultimately just had a software problem, whereas Waymo has a hardware problem, and those are typically harder to solve.
I'm curious if you have a view on this, whether you think one company is likelier to get to a better scale based on the approach that they're taking with their hardware and software. I mean, I think that hardware costs scale down over a period of time. So, sure, Waymo has a hardware problem, but they can solve it. I mean, the history of compute and hardware is like the costs come down very, very significantly now.
The Waymo solution is working right now. So it's not theory, right? And I think the differences are bigger, which is Waymo has more sensors, has cameras, has LiDAR. So there's a certain redundancy there.
Waymo generally has more compute, so to speak. So the inference from that compute is going to be better. And Waymo also has high-definition maps that essentially make the problem of recognizing what's happening in the real world a much simpler problem. So under Elon's model, the weight that the software has to carry is very, very heavy
versus the Waymo and most-other-player model, where you don't have to lean as much on training and you make the problem much simpler as a compute problem. I think eventually both will get there. But if you had to guess, who's going to get to sort of a viable scale first? Listen, I think Elon eventually will get to a viable scale, but for the next five years, I bet on Waymo, and we are betting on Waymo.
I'll say this. I don't want to get into an autonomous Tesla in the next five years. Somebody else can test that out. I'm not going to be an early adopter of that one. FSD's getting pretty good. Have you used it recently? I have not used it recently. It's really good. Yeah? All right. Yeah, it's really good. Now, again, for example, the cost of a solid-state LiDAR now is $500, $600, right? So why wouldn't you put that
into your sensor stack. It's not that expensive. And for a fully self-driving specialized auto, I think that makes a lot of sense to me. Now, Elon has accomplished the unimaginable many, many, many times. So
I wouldn't bet against them. Yeah, I don't know. This is always, you know, my secret dream for you. You know, obviously you should stay at Uber as long as you want. When you're done with that, I actually do think you should run Tesla, because I think, just as you've done at Uber, you'd be willing to make some of the sort of easy compromises, like just put a $500 fricking LiDAR on the thing, and we'd go much faster. So anyway, what do you think about that? I have a full-time job and I'm very happy with it. Thank you. Well, the Tesla board is listening. I don't know if the Tesla board listens to YouTube. Yeah.
Good point. That's true. I made too many Kennedy jokes. We're opening up the board meeting with an episode of Hard Fork, everybody. They can learn a lot from this show. What's your best guess for when, say, 50% of Uber rides in the U.S. will be autonomous? I'd say close to 8 to 10 years is my best guess, but I am sure that'll be wrong.
Probably closer to 10. Closer to 10? Okay, interesting. Most people have overestimated. You know, again, it's a wild guess. The probability of your being right is just as high as mine. I'm curious if we can sort of get into a future-imagining mode here. Like,
In the year, whether it's 10 years or 15 years or 20 years from now, when maybe a majority of rides in at least big cities in the U.S. will be autonomous, do you think that changes the city at all? Like, do the roads look different? Are there more cars on the road? Are there fewer cars on the road? What does that even look like? So I think that...
The cities will have much, much more space to use. Parking often takes up 20, 30 percent of the square miles in a city, for example, and that parking space will be open for living, parks, etc. So there's no doubt that it will be a better world. You will have greener, cleaner cities, and you'll never have to park again, which I think is pretty cool.
I'm very curious what you think about the politics of autonomy in transportation. In the early days of Uber, there was a lot of backlash and resistance from taxi drivers. Sure. And, you know, they saw Uber as a threat to their livelihoods. There were some, you know, well-publicized cases of sort of sabotage and big protests.
Do you anticipate there will be a backlash from either drivers or the public to the spread of AVs as they start to appear in more cities? I think there could be. And what I'm hoping is that we avoid the backlash by having the proper conversations. Now, historically, society...
As a whole, we've been able to adjust to job displacement because it does happen gradually. And even in a world where there's greater automation now than ever before, employment rates, et cetera, are at historically great levels. But the fact is that AI is going to displace jobs. What does that mean? How quickly should we go? How do we think about that? Those are discussions that we're going to have. And if we don't have the discussions, sure, there will be backlash. There's always backlash against
societal change that's significant. Now, we now work with taxis in San Francisco and taxi drivers who use Uber make more than 20% more than the ones who don't. So there is a kind of solution space where
new technology and established players can win, but I don't know exactly what that looks like. But that calculus does not apply to self-driving. You know, it's not like the Uber driver who's been driving an Uber for 10 years and that's their main source of income can just start driving a self-driving Waymo. You don't need a driver. No, you don't need a driver. It's not just that they have to switch the app they're using. It's that it threatens to
put them out of a job? Well, listen, could they be part of fleet management, cleaning, charging, etc.? That's a possibility. We are now working with some of our drivers. They're doing AI map labeling and training of AI models, etc. So we're expanding the solution set of work, on-demand work that we're offering our drivers because there is part of that work, which is driving, maybe going away,
or the growth in that work is going to slow down, at least over the next 10 years, and then we'll look to adjust. But listen, these are issues that are real, and I don't have a clean answer for them at this point. - Yeah. - You brought up shared rides earlier, and back in the day, I think when UberX first rolled out shared rides, I did that a couple of times, and then, I don't know, I got a raise at my job, and I thought, from here on out, I think it's just gonna be me and the car.
How popular do you think you can make shared rides? Is there anything that you can do to make that more appealing? Well, I think the way that we have to make it more appealing is to reduce the penalty, so to speak, of the shared rides. I think the number one reason why people use Uber is they want to save time, they want to have their time back.
And a shared ride would, you know, you would get about a 30% decrease in price historically, but there could be a 50 to 100% time penalty. We're working now... You might end up sitting next to Casey Newton. That would be cool. That would be amazing. Thank you, Dara. I would feel very short. Otherwise, I would have no complaints.
People, so far we've heard, don't have a problem with company. It really is time. And they don't mind riding with other people. There's a certain sense of satisfaction with riding with other people. But we're now working on it both algorithmically and, I think, also by fixing the product. Previously,
You would choose a shared ride and you get an upfront discount. So your incentive as a customer is to get the discount, but not to get a shared ride. So we would have customers gaming the system. They get a shared ride at 2 a.m. when they know they're not going to be matched up, et cetera. Now you get a smaller discount and you get a reward, which is a higher discount if you're matched. So part of it is aligning the incentives.
Customers aren't working against us and we're not working against customers. And we're working on the tech. We are reducing the time penalty, which means we avoid these weird routes, et cetera, that cost you 50% of your time or 100% of your time. Now, in autonomous, if we are the only player that has the liquidity to introduce shared autonomous into cities, that lowers congestion, lowers the price. That's another way in which our marketplace can add value to the ecosystem.
Speaking of shared rides, Uber just released a new airport shuttle service in New York City. It costs $18 a person. You book a seat. It goes on a designated sort of route on a set schedule. I don't have a question. I just wanted to congratulate you on inventing a bus. Yeah.
It's a better bus. You know exactly when it's coming and picking you up, you know exactly where your bus is and what your path is, in real time. It just gives a sense of comfort. We think this can be a pretty cool product. And again,
is the bus going to be hugely profitable for us long-term? I don't know, but it will introduce a bigger audience to come into the Uber ecosystem. And we think it can be good for cities as well. If you're in Miami, by the way, over the weekend, we've got buses to the Taylor Swift concert as well. So I'm just saying. Well, I mean, look, it should not be hard to improve on the experience of a city bus. Like, do you know what I mean?
- I like city buses. When was the last time you were on a city bus? - Well, I took the train here. So it wasn't a bus, but it was transit. - He doesn't take shared, he doesn't take bus. - I'm a man of the people. - This guy is like. - I like to ride public transit. - You're an elitist. - No, I would love to see a picture of you on a bus sometimes in the past five years, 'cause I'm pretty sure that's never happened.
Let me ask you this. I think we can make the experience better. Here's, you know, so far I've resisted giving you any product feedback, Dara, but I had this one thing that I have always wanted to know the explanation for, and it's this: at some point in the past couple years, when I ordered an Uber, you all started sending me a push notification saying that the driver was nearby. And I'm the sort of person, when I've ordered an Uber, Dara, I'm going to be there when the driver pulls up. I'm not making this person wait, okay? I'm going to respect their time.
And what I've learned is when you tell me the driver is nearby, what that means is they're at least three minutes away and they might be two miles away. And what I want to know is why do you send me that notification? We want you to be prepared to not keep the driver waiting. Maybe we should personalize it. I would love that. I think that's a good question, which is depending on whether or not you keep the driver waiting. I think that is one of the cool things with AI algos that we can do. At this point, you're right.
the experience is not quite optimized. But it's for the driver. It's for the driver. No, I get it. Time is money. And if I were a driver, I would be happy that you were sending that. But you also sent me this notification that says the driver's arriving. And that's when I'm like, okay, it's time to go downstairs. But it sounds like we're making progress on this. I think the algorithm just likes you. It just wants to have a... Yeah.
They know that I love my rides. Yeah. Well, Casey has previously talked about how he doesn't like his Uber drivers to talk to him. And this is a man who, listen, likes to coast through life in a positive bubble. I mean, here's what I'm saying. If you're on your way to the airport at 6:30 in the morning, do you truly want a person you've never met before asking you who you're going to vote for in the election? Is that an experience that anyone enjoys? By the way, I drove, and I was not good at
reading the rider as to whether they wanted to have a conversation or not. I was not good at the art of conversation as a driver. Hey, how's it going? Are you having a good day? Going to work? And then I'd just shut up, and have a nice day. To me, that was it. But,
I don't know if that's... No, that's perfect. That's going to give you all the information that you need. I'll be your driver any day. This is Casey's real attraction to self-driving cars is that he never has to talk to another human. Look, you can make fun of me all you want. I am not the only person who feels this way. Let me tell you. When I check into a hotel, same thing. Like, did you have a nice day? I'm like, yeah, but where are you coming in from? Let's not get into it. Yeah, just...
I would love to see you checking into a hotel. So did you have a nice day? And you're like, well, let me tell you about this board meeting. I just went to because the pressure I'm under, you don't want to hear about it. All right. Well, I think we're at time. Dara, thank you so much for coming. It was fun. When we come back, well, AI is driving progress and it's driving cars. Now we're going to find out if it can drive Casey insane. He watched 260 TikTok videos and he'll tell you all about it.
Well, Casey, aside from all the drama in AI and self-driving cars this week, we also had some news about TikTok. One of the other most powerful AI forces on Earth. No, truly. Yes. I unironically believe that. Yeah, that was not a joke. Yes. So this week we learned about some documents that came to light as part of a lawsuit that is moving through the courts right now. As people will remember, the federal government is still trying to force ByteDance to sell TikTok.
But last week, 13 states and the District of Columbia sued TikTok, accusing the company of creating an intentionally addictive app that harmed children. And, Kevin, and this is my favorite part of this story, is that Kentucky Public Radio got a hold of these court documents, and they had many redactions. You know, often in these cases, the most interesting sort of facts and figures will just be redacted for who knows what reason. But the geniuses over at Kentucky Public Radio just copy and pasted everything in the document. Yeah.
And when they pasted it, everything was totally visible. This keeps happening. I feel like every year or two, we get a story about some failed redaction. Like, is it that hard to redact a document? I'll say this. I hope it always remains this hard to redact a document because...
I read stuff like this, Kevin, and I'm in heaven. Yes. So they got a hold of these documents. They copied and pasted. They figured out what was behind sort of the black boxes in the redacted materials. And it was pretty juicy. These documents included details like TikTok's knowledge of a high number of underage kids who were stripping for adults on the platform. The adults were paying them in digital gifts.
These documents also claimed that TikTok had adjusted its algorithm to prioritize people it deemed beautiful. And then there was this stat that I know you homed in on, which was that these documents said, based on internal conversations, that TikTok had figured out exactly how many videos it needed to show someone in order to get them hooked on the platform.
And that number is 260. 260 is what it takes. You know, it reminds me, this is sort of ancient, but do you remember the commercial in the 80s where they would say, like, how many licks does it take to get to the center of a Tootsie Pop? Yes. To me, this is the sort of 2020s equivalent. How many TikToks do you have to watch until...
you can't look away ever again. Yes. So this is, according to the company's own research, this is about the tipping point where people start to develop a habit or an addiction of going back to the platform and they sort of become sticky in the parlance of social media apps. In the disgusting parlance of social media apps, it becomes sticky. So Casey, when we heard about this magic number of 260 TikTok videos, you had what I thought was an insane idea.
Tell us about it. Well, Kevin, I thought if 260 videos is all it takes, maybe I should watch 260 TikToks, and here's why. I am an infrequent user of TikTok. I would say...
Once a week, once every two weeks, I'll check in, I'll watch a few videos, and I would say generally enjoy my experience, but not to the point that I come back every day. And so I've always wondered what I'm missing because I know so many folks that can't even have TikTok on their phone because it holds such a power over them. And they feel like the algorithm gets to know them so quickly and so intimately that it can only be explained by magic.
So I thought, if I've not been able to have this experience just sort of normally using TikTok, what if I tried to consume 260 TikToks as quickly as I possibly could and just saw what would happen after that? Not all heroes wear capes. Okay. So Casey, you watched 260 TikTok videos last night. Yeah.
Tell me about it. So I did create a new account. So I started fresh. I didn't just reset my algorithm, although that is something that you can do in TikTok.
And I decided a couple of things. One is I was not going to follow anyone, like no friends, but also no influencers. No enemies. No enemies. And I also was not going to do any searches, right? A lot of the ways that TikTok will get to know you is if you do a search. And I thought, I want to get the sort of broadest, most mainstreamy experience of TikTok that I can so that I can develop a better sense of how does it sort of, uh,
walk me down this funnel toward my eventual interest. Whereas if I just followed 10 friends and did like three searches for my favorite subjects, like I probably could have gone there faster. And so do you know the very first thing that TikTok showed me, Kevin? What's that? It showed me a 19-year-old boy flirting with an 18-year-old girl trying to get her phone number. Yeah.
And when I tell you, I could not have been any less interested in this content. It was aggressively straight. Yes. And it was very young and it had nothing to do with me. And it was not my business. And so over the next several hours, this total process, I did about two and a half hours last night and I did another 30 minutes this morning and
And I would like to share, you know, maybe the first nine or 10 things that TikTok showed me. Again, you know, the assumption is it knows basically nothing about me. Yes. And I do think there is something quite revealing about an algorithm that knows nothing, throwing spaghetti at you, seeing what will stick, and then just picking up the spaghetti afterwards and saying, well, what is it, you know, that I thought was interesting. So here's what it showed me.
Second video, a disturbing 911 call, like a very upsetting sort of domestic violence situation. Skip. Three, two people doing trivia on a diving board, and like the person who loses has to jump off the diving board. Okay, fine. Four, just a freebooted clip of an audition for America's Got Talent. Five, vegetable mukbang. So just a guy who had like rows and rows of beautiful multicolored vegetables in front of him who was just eating them.
Six, a comedy skit, but it was like running on top of a Minecraft video. So one of my key takeaways after my first six or seven TikTok videos was that it does actually assume that you're quite young, right? That's why it started out by showing me teenagers. And as I would go through this process, I found that over and over again, instead of just showing me a video, it would show me a video that had been chopped in half and on top,
was whatever the sort of core content was. And below would be someone playing Subway Surfers. Yes. Someone playing Minecraft. Or someone doing those sort of oddly satisfying things. This is a growth hack. Someone combing through a rug or whatever. Yes. And it's like, it's literally...
people trying to hypnotize you, right? It's like if you just see the, oh, someone is trying to smooth something out or someone is playing with slime. - They're cutting soap. Have you seen the soap cutting? - Yes, soap cutting is huge. And again, there is no content to it. It is just trying to stimulate you on some sort of like lizard brain level. - It feels vaguely narcotic. - Absolutely. - It is like, yes. - It is just purely a drug.
Video number seven, an ad. Video number eight, a dad who was speaking in Spanish and dancing. I think it was very cute. Now, can I ask you a question? Are you doing anything other than just swiping from one video to the next? Are you liking anything? Are you saving anything? Are you sharing anything? Because all that gets interpreted by the algorithm as like a signal to keep showing you more of that kind of thing.
Absolutely. So for the first 25 or so videos, I did not like anything, because I truly didn't like anything. Like, nothing was really doing it for me. But my intention was always, like, yes, when I see something I like, I'm going to try to reward the algorithm, give it a like, and I will maybe get more like that.
So the process goes on and on, and I'm just struck by the absolute weirdness and disconnection of everything in the feed. At first, truly nothing has any relation to anything else, and it sort of feels like you've put your brain into like a Vitamix.
you know, where it's like, swipe, here is a clip from Friends. Swipe, kids complaining about school. Swipe, Mickey Mouse has a gun and he's in a video game, right? Those are three videos that I saw in a row. And the effect of it is just like disorienting, right? And I've had this experience when you like go onto YouTube, but you're not logged in, you know, on like a new account. And it's sort of just, it's just showing you sort of a random assortment of things that are popular on YouTube. It does feel very much like they're just
firing in a bunch of different directions, hoping that something will stick. And then it can sort of, it can then sort of zoom in on that thing. Yes, absolutely. Now I will add that in the first 30 or so videos, I saw two things that I thought were like actually disturbing and bad.
- What were they? - Things that should never have been shown to me. - Was it a clip from the All In podcast? - Yes, no. Fortunately, it didn't get that bad. But one, there was a clip of a grate in a busy city and there was air blowing up from the grate and the TikTok was just women walking over the grate and their skirts blowing up. - That seems bad. - Horrible, that's horrible. That was in the first 20 videos that I saw, was this video, okay? I guess if you like that video, it says a lot about you, right? But it's not bad.
The second one, and I truly, I do not even know if we will want to include this on our podcast because I can't even believe that I'm saying that I saw this, but it is true. It was an AI voice of someone telling an erotic story which involved incest, and it was shown over a video of someone making soap.
Wow. Like, what? This is dark stuff. This is dark stuff. Now, at what point did you start to wonder if the algorithm had started to pick up on your clues that you were giving it? Well, so I was desperate to find out this question because I am gay and I wondered when I was going to see the first gay content, like when it was actually just going to show me two gay men who were talking about gay concerns. And it did not happen. Ever? No.
It never quite got there, even as of this morning. In 260 videos. In over 260 videos. Now, it did show me queer people. Actually, do you know the first queer person, identifiably queer person, that the TikTok algorithm showed me? Are you familiar with the very popular TikTok meme from this year, Very Demure, Very Mindful? Yes. The first queer person I saw on TikTok, thanks to the algorithm, was Jools Lebron in a piece of sponsored content, and she was trying to sell me a Lenovo laptop. And that was...
The queer experience that I got in my romp through the TikTok algorithm. Now, you know, it did eventually show me a couple of queer people. It showed me one TikTok about the singer Chappell Roan, who is queer, so I'll count that. And then it showed me a video by Billie Eilish, you know, a queer pop star, and
And I did like that video. Now, Billie Eilish is one of the most famous pop stars in the entire world. I mean, truly, like, on the Mount Rushmore of famous pop stars right now. So it makes a lot of sense to me that TikTok would show me her. She's also incredibly popular with teenagers. And so I liked one Billie Eilish video. And then that was when the floodgates opened, and it was like, okay, here's a lot of that. But yeah,
Just from, like, sort of scrolling it, no, we did not get to the gay zone. Now, I did notice the algorithm adapting to me. So something it picked up about me was, because, again, I was trying to get through a lot of videos in a relatively short amount of time, and TikTok now will often show you three, four, or five minute long videos. I frankly did not have the time for that.
The longer I scrolled, the shorter the videos were that I got. And I do feel like the content aged up a little bit. You know, it started showing me a category of content that I call people being weird little freaks. You know, these are some real examples: a man dressed as the Cat in the Hat dancing to Ciara's song Goodies. Okay. There was a man in a horse costume playing the Addams Family theme song on an accordion, using a toilet lid for percussion.
This is the most important media platform in the world. Yes. Hours a day, teenagers are staring at this. And this is what it is showing them. We are so screwed. Yeah. You know, it figured out that I was more likely to like content about animals than other things. So there started to become a lot of dogs doing cute things, cats doing cute things, you know, other things like that.
But, you know, there was also just a lot of, like, here's a guy going to a store and showing you objects from the store. Or, like, here is a guy telling you a long story. Can I ask you a question? Yeah. Was there any... In these 260 videos, were there any that you thought, like, that is a great video? Um...
don't know if I saw anything truly great. I definitely saw some animal videos that if I showed them to you, you would laugh or you would say that was cute. But there was stuff that gave me an emotional response. And I would say, particularly as I got to the end of this process, I was seeing stuff that I enjoyed a bit more. But then this morning, I decided to do something, Kevin, because I'd gotten so frustrated with the algorithm. I thought it is time to give the algorithm a piece of data about me. So do you know what I did? What did you do? I searched the word gay. Okay.
- Very subtle. - Which, like, in fairness, is an insane search query. - Yeah. - 'Cause what is TikTok supposed to show me in response? - Yes. - You can show me all sorts of things, but on my, like, real TikTok account, it just shows me queer creators all the time, and they're doing all sorts of things. They're singing, they're dancing, they're telling jokes, they're telling stories. So I was like, I would like to see a little bit of stuff like that.
Do you know the first clip that came up for me when I searched gay on TikTok to train my algorithm? What was it? It was a clip from an adult film. Now, like explicit, unblurred? It was from... And I don't know this. I've only read about this. But apparently at the start of some adult films, before the explicit stuff, there will be some sort of story content that sort of establishes the premise of the scene. And this was sort of in that vein. But I thought...
If I just sort of said, offhandedly, you know, oh, TikTok, yeah, I bet if you just search gay, they'll just show you porn, people would say, it sounds like you're being insane. Why would you say that? That's being insane. Obviously, they're probably showing you their most famous queer creator, you know, something like that. No, they literally just showed me porn. So it was like, again, so much of this process for me was like,
the things that people say about TikTok, assuming that people were sort of exaggerating or being too hard on it, and then having the experience myself and saying like, oh no,
Oh no, like it's actually like that. An alternative explanation is that the algorithm is actually really, really good. And the reason to show you all the videos of people being weird little freaks is because you are actually a weird little freak. That's true. I will accept those allegations. I will not fight those allegations. So, okay, you watched 260 videos. You reached this magic number that is supposed to get people addicted to TikTok. Are you addicted to TikTok? Kevin, I'm surprised and...
frankly delighted to tell you, I have never been less addicted to TikTok than I have been after going through this experience. Do you remember back when people would smoke cigarettes a lot? And if a parent caught a child smoking, the thing that they would do is they say, you know what? You're going to smoke this whole pack and I'm going to sit in front of you and you're going to smoke this whole pack of cigarettes. And the accumulated effect of all that stuff that you're breathing into your lungs, by the end of that, the teenager says, dad, I'm never going to smoke again. This is how I feel.
It cured your addiction. After watching hundreds of these TikToks. So, okay, you are not a TikTok addict. In fact, it seems like you are less likely to become a TikTok power user than you were before this experiment. I think that's right. Did this experiment change your attitudes about whether TikTok should be banned in the United States?
I feel so bad saying it, but I think the answer is yes. Like, not to ban it, right? Like, you know, my feelings about that still have much more to do with, like, free speech and, like, freedom of expression. And I think that a ban raises a lot of questions, and the way that the United States approached this issue just makes me super uncomfortable. You can go back through our archive to hear a much longer discussion about that. But...
If I were a parent of a teen who had just been given their first smartphone, hopefully not any younger than like 14, it would change the way that I talk with them about what TikTok is. And it would change the way that I would check in with them about what they were seeing, right? Like I would say,
you are about to see something that is going to make you feel like your mind is in a blender, and it is going to try to addict you. And here's how it is going to try to addict you. And I might sit with my child and do some early searches to try to seed that feed with stuff that was good and would give my child a greater chance of going down some positive rabbit holes and seeing less of, you know, some of the more disturbing stuff that I saw there. So if nothing else, I think it was a good
educational exercise for me to go through. And if there is someone in your life, particularly a young person, who is spending a lot of time on TikTok, I would encourage you to go through this process yourself, because these algorithms are changing all the time. And I think you do want to have a sense of what it is like this very week if you really want to know what it's going to be showing your kid. Yeah. I mean, I will say, you know, I spent a lot of time on TikTok. I...
don't recall ever getting done with TikTok and being sort of...
Happy and fulfilled with how I spent the time. Like there's a vague sense of like shame about it. There's a vague sense of like, sometimes it like helps me turn my brain off at the end of a stressful day. It has this sort of like, you know, this sort of narcotic effect on me. And sometimes it's calming and sometimes I find things that are funny, but rarely do I come away saying like, that was the best possible use of my time. There is something that happens when you adopt this sort of algorithm first
vertical video, mostly short form, infinite scroll. You put all of those ingredients into a bag and what comes out does have...
this narcotic effect, as you say. Well, Casey, thank you for exposing your brain to the TikTok algorithm for the sake of journalism. I appreciate you. And, you know, I will be donating it to science when my life ends. People will be studying your brain after you die. I feel fairly confident. I don't know why they'll be studying your brain, but there will be research teams looking at it. Can't wait to hear what y'all find out. ♪
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. Today's show was engineered by Alyssa Moxley.
Original music by Marion Lozano, Sophia Lanman, Diane Wong, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. As always, you can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.
Ben hadn't had a decent night's sleep in a month.
So, during one of his restless nights, he booked a package trip abroad on Expedia. When he arrived at his beachside hotel, he discovered a miraculous bed slung between two trees and fell into the best sleep of his life. You were made to be rechargeable. We were made to package flights and hotels and hammocks for less. Expedia. Made to travel.