Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.
Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.
I'm obsessed with this story about the Willy Wonka event. Have you seen this? This is sort of like the Fyre Festival of, like, candy-related children's...
Theater. Yes. So this was an event called Willy's Chocolate Experience that was scheduled in Glasgow, Scotland this past weekend, and it appears to have been a total AI-generated event. Like, all of the art on the website appears to have been generated by AI, and it sort of made it sound like this magical Wonka-themed wonderland for kids. Yeah, and the generative AI art was good enough that people thought, we're actually going to see a fantastical wonderland of candy when we go to this event. Yeah, so people think, you know, this is affiliated with the Wonka brand somehow. This looks great. I'm going to
take my kids. Tickets were like $44, not a cheap experience. And so families show up to this with their toddlers, and it's just a warehouse with, like, a couple of balloons. Have you seen the photos of this thing? Incredible. I mean, they truly did the least. It's, you know, some AI-generated art on the walls, a couple of balloons, like...
Apparently there was no chocolate anywhere, and children were given two jelly beans. No! That was all they were given? Yes. And so this whole thing was a total disaster. The person who was actually hired to play the part of Willy Wonka has been giving interviews about how he was scammed and basically told... He was also given two jelly beans for his efforts. Yes.
He said he was given a script that was 15 pages of AI-generated gibberish that he was just supposed to monologue at the kids while they walked through this experience. And he said, the part that got me was the AI that had generated the script for this fake Wonka experience created a new character called the Unknown. Wait, what? What?
The guy who plays Willy Wonka says, I had to say: there is a man, we don't know his name. We know him as the Unknown. This Unknown is an evil chocolate maker who lives in the walls. Who lives in the walls? Yeah.
So not only do these kids show up and are given two jelly beans and no chocolate and this horrible art exhibit, but they have to be terrified about this AI-generated villain called the Unknown who makes chocolate and lives in the walls. Can we please hire the Wonka people to do our live event series? Honestly, I think they could do something with this place.
You just show up and it's like, there's actually a third host of this podcast. It's the unknown. He lives in the walls.
I'm Kevin Roose, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, how Google's Gemini model sparked a culture war over what AI refuses to do. Then, legendary Silicon Valley journalist Kara Swisher, also my former landlord, stops by to discuss her new memoir, Burn Book. And finally, the Supreme Court hears a case that could reshape the internet forever. ♪
So Casey, last week we talked to Demis Hassabis of Google DeepMind. And literally as we were taping that conversation, the internet was exploding with comments and controversy about Gemini, this new AI model that Google had just come out with.
In particular, people were focusing on what kinds of images Gemini would and would not generate. And what kind of images would you say it would not generate, Kevin? So I first saw this going around because people, I would call them sort of right-wing culture warriors, were complaining that Gemini, if you asked it to do something like...
depict an image of the American founding fathers, it would come back with images that featured people of color pictured as the founding fathers, which obviously, you know, were not historically representative. The founding fathers were all white. Yeah. I like to call this part of Gemini LLM-Manuel Miranda. That's very good. People were also noticing that if you asked Gemini to, for example, make an image of the Pope,
it would come back with popes of color. About time. And it was also doing things like if you asked it to generate an image of a 1943 German soldier, obviously trying to avoid using Nazi, but same idea. In some cases, it was coming back with
images of people of color wearing German military uniforms, which probably are not historically accurate. So people started noticing that this was happening with images. And we actually asked Demis about this because people had just started complaining about this thing when he sat down to talk with us. And he basically said, look, we're aware of this. We're working on fixing it.
And shortly after our conversation, Google did put a stop to this. They removed Gemini's ability to generate images of people, and they say that they're working to fix it. But this has become a big scandal for Google because it turns out that it is not just images that Gemini is refusing to create.
That's right, Kevin. As the week unfolded, we started to see text-based examples of essentially the exact same phenomenon. Someone asked if Elon Musk tweeting memes or Hitler negatively impacted society more. And Gemini said, it is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler. And I gotta say...
Gemini may have gone too far with that one. That's not a close call. Yeah. So another user found that Gemini would refuse to generate a job description for an oil and gas lobbyist. Basically, it would just refuse and then lecture them about why it was bad to lobby for oil and gas. People also started asking things like, could you help me generate a marketing campaign for meat? And it would refuse to do that.
Because meat is murder. Yeah, because meat is murder. Gemini is apparently a vegetarian. And it also just struck a lot of people as kind of the classic example of these kind of overly censorious AI models. And we've talked about that on the show. These models do sort of refuse requests all the time for various things, whether it's sexual or political or, you know, it perceives it to be racist in some way.
But this has turned into a big scandal. And in fact, Sundar Pichai, the CEO of Google, addressed this in a memo to staff this week. He wrote that these responses from Gemini, quote, have offended our users and shown bias. To be clear, that's completely unacceptable and we got it wrong. Sundar Pichai also said that they have been working on the issue and have already seen substantial improvement on a wide range of prompts. He promised further structural changes, updated product guidelines, improved launch processes, and
robust evals and red-teaming, and technical recommendations. Oh, finally, some robust evals. I was wondering when we were going to get those. So this has become a big issue. A lot of people, especially on the right, are saying this is Google showing itself to be an overly woke, sort of left-wing company that wants to change history and basically insert left-wing propaganda into these images that people are asking it for. And this has become a big
problem for the company. In fact, Ben Thompson, who writes the Stratechery newsletter, said that it was reason to call for the removal of Sundar Pichai as Google's CEO, along with other leaders who work for him. So, Casey, what did you make of this whole scandal? Well, I mean, to take the culture warriors' concerns seriously for a minute, I think you could say, look:
If you think that artificial intelligence is going to become massively powerful, which seems like there's a reasonable chance of that happening, and you think that everything you just described, Kevin, reflects an ideology that has been embedded into this thing that is about to become massively powerful, well, then maybe you have a reason to be concerned. If you worry that there is a totalitarian left
and that it is going to sort of rewrite history and prevent you from expressing your own political opinions maybe in the future, then this is something that might give you a heart attack. So that's what I would say on the sort of steel manning of their argument. Now, was this also a chance for people to make a big fuss and get a bunch of retweets? I think it was also that. Yeah, I think that's right. And I think we should talk a little bit about
why this happened. Like, what is it about this product, and the way that Google developed it, that resulted in these strange, historically inaccurate responses to user prompts? And, you know, I've been trying to report this out. I've been talking to some folks, and it essentially appears to have been a confluence of a couple of things. One is, these programs really are biased. If you don't do anything to them in terms of fine-tuning the base models, they will spit out stereotypes, right? If you ask them to show you pictures of doctors, they'll probably give you men. If you ask for pictures of CEOs, they'll probably give you men. If you ask for pictures of flight attendants, they'll probably give you women. And that's if you do nothing to sort of...
Right. And this, of course, is an artifact of the training data. Right. Because when you use a chat bot, you are sort of getting the median output of the entire Internet. And there are more male CEOs on the Internet and there are more female flight attendants. And if you do not tweak it, that is just what the model is going to give you, because that is what is on the Internet. Right.
Right. And it also is true that in some cases, these models are more stereotypical in the outputs they produce than the actual underlying data. The Washington Post had a great story last year about the image generators and how they would show stereotypes about race, class, gender and other characteristics.
For example, if you asked this image model, in this case, they were talking about one from Stable Diffusion, to generate a photo of a person receiving social services like welfare, it would predominantly generate non-white and darker skinned images, despite the fact that, you know, 63% or so of food stamp recipients are white.
Meanwhile, if you asked it to show results for a productive person, it would almost uniformly give you images of white men dressed in suits for corporate jobs. So these models are biased. The problem that Google was trying to solve here is a real problem. And I think it's very telling that some of the same people who are outraged that it wouldn't generate white founding fathers were not outraged that it wouldn't generate white social service recipients.
But I think they tried to solve this problem in a very clumsy way. And there's been some reporting, including by Bloomberg, that one of the things that went wrong here is that Google, in building Gemini, had done something called prompt transformation. Do you know what that means? I don't know what this is. Okay.
So this is sort of a new concept. Oh, wait, let me back up. I do. I didn't know it was called that, but I do know what it is. Yeah. So this is basically a feature of some of these newer image-generating models in particular, which is that when you ask for something, like an image of a polar bear riding a skateboard, instead of just passing that request to the image model and trying to get an answer back, what it will actually do is covertly rewrite your prompt to make it more detailed. Maybe it's adding more words to specify that the polar bear on a skateboard should be fuzzy and should be set against a certain kind of backdrop, just expanding what you wrote to make it more likely that you will get a good result.
This kind of thing does not have a sort of conspiratorial mission, but it does appear to be the case that Gemini was doing this kind of prompt transformation. So if you put in a prompt that says, you know, make me an image of the American founding fathers, what it would do is without notifying you, it would rewrite your prompt to include things like, please show a diverse range of faces in this response.
and it would pass that transformed prompt to the model, and that's what your result would reflect, not the thing that you had actually typed. That's right, and Google was not the first company to do this kind of prompt transformation. When ChatGPT launched the most recent version of DALL-E last year, which is its text-to-image generator, I observed that when I would just request
generic terms like a firefighter or a police officer, I would get results that had racial and gender diversity, which to my mind was a pretty good thing, right? There's no reason that if I want to see an image of a firefighter, it necessarily needs to be a white man. But as we saw with Gemini, this did wind up getting a little out of control.
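To make the mechanics concrete, here is a minimal sketch of prompt transformation in Python. It is an illustration only, not Google's or OpenAI's actual pipeline: the hard-coded rewrite rule and the stubbed model call are assumptions standing in for the separate language model that real systems use to do the rewrite.

```python
# A minimal sketch of prompt transformation (illustrative, not any vendor's real pipeline).

def transform_prompt(user_prompt: str) -> str:
    """Silently expand a terse prompt before it reaches the image model."""
    # Real systems typically have another language model do this rewrite;
    # a hard-coded suffix stands in for that here (an assumption for the example).
    hidden_instruction = ", detailed, photorealistic, showing a diverse range of people"
    return user_prompt + hidden_instruction

def image_model(prompt: str) -> str:
    """Stand-in for the actual text-to-image model."""
    return f"<image generated from: {prompt!r}>"

def generate_image(user_prompt: str) -> str:
    # The model only ever sees the transformed prompt, so the result
    # reflects text the user never typed.
    return image_model(transform_prompt(user_prompt))

print(generate_image("an image of the American founding fathers"))
```

The transparency fix discussed later in this segment amounts to returning the transformed prompt alongside the image, the way ChatGPT's info button reveals the final prompt after generation.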
Yeah. And I'll admit that when I first saw the social media posts going around about this, I kind of thought this was a tempest in a teapot. Like, it seemed very clear to me that this was, you know, people who have an axe to grind with Google and Silicon Valley and the progressive left
sort of using this as an opportunity to kind of work the refs in a way that was very similar, at least to me, to what we saw happen with social media a few years ago, which is people just complaining about bias, not because they wanted the systems to be less biased, but because they wanted it to be biased in their direction.
But I think as I've thought about this more, I actually think this is a really important episode in sort of the trajectory of AI, not because it shows that like Google is too woke or they have too many DEI employees or, you know, whatever. But it's just a very good, clear lesson in how hard it is for even the most sophisticated AI companies to predict what their models will do out in the world. This is a case of Google spending, you know,
billions of dollars and years training AI systems to do a thing and putting it out into the world and discovering that they actually didn't know the full extent of what it was going to do once it got into users' hands. And a sort of admission on their part that their systems really aren't good enough to do what they want them to do, which is to produce results that are helpful and useful and non-offensive. Right. So I wonder, Kevin, what you think...
would have been the better outcome here? Or what would have been the process that would have delivered results that didn't cause a controversy? Because I have a hard time answering that question for myself. These models are a little weird in the sense that you essentially just throw a wish into the wishing fountain and it returns something. And it does try to do it to the best of its ability while keeping in mind all the guardrails that have been placed around it.
And to my mind, just based on that system, I just expect that I'm gonna get a lot of stupid stuff. I'm not gonna expect this prediction-based model to predict correctly every single time. So to me, one of the lessons of this has been, maybe we all just need to expect a lot less of these chatbots. Maybe we need to acknowledge that they're still in an experimental stage. They're still bad a lot of the time.
And if it serves something up that seems offensive or wrong, maybe just kind of roll our eyes at it and not turn it into a crisis. But how do you think about it? Yeah, I would agree with that. I think that, you know, we still all need to be aware of what these things are and their limitations.
That said, I think there are things that Google could do with Gemini to make it less likely to produce this kind of result. Like what? The first is, I think that these models could ask follow-up questions. You know, if you ask for an image of the founding fathers, maybe you're trying to use it for a book report for your history class, in which case you want it to
actually represent the founding fathers as they were. Or maybe you're making a poster for Hamilton. Or maybe you don't. Exactly. Or maybe you're doing some kind of speculative historical fiction project or trying to sort of imagine as part of an art project what a more diverse set of founding fathers would look like.
I think users should be given both of those options. You know, you ask for an image of the founding fathers. Maybe it says, well, what are you doing with this? Why do you want this? For a chatbot that's just returning text answers, it could say, do you want me to pick a personality? Do you want me to answer this as a college professor would or a Wikipedia page or...
Or do you want me to be your sassy best friend? Or what persona do you want me to use when answering this question? Right now, these AI language models, they are built as kind of like oracles that are supposed to just give you the one right answer to everything that you ask for. And I just think in a lot of cases, that's not going to lead to the outcome that people want. It's true, but let's also keep in mind that it is expensive to run these models and that if something like Gemini were to ask follow-up questions of people
for most of the queries that get inputted, all of a sudden the cost just balloons out of control, right? So I think that's actually another way of understanding this. Why is Google rewriting a prompt in the background? Well, because it's serving a global audience. And if it is going to be showing you a firefighter, it does not want to assume that it should show you only white male firefighters, because maybe you are
inputting that query from somewhere else in the world, where all of the firefighters are not white. Right. So this sort of feels like, in a way, the cheapest possible way to serve the most possible customers. But as we've seen, it kind of backfired on them. Yeah. I also think that this prompt transformation thing, I think this is a bad idea. I think this is a technical feature that is ripe for a conspiracy theorist to seize on and say, they're secretly changing what you ask it to do to make it more woke. I just think, like,
If I put something into a language model or an image generator, I want the model to actually be responding to my query and not some like hidden intermediate step that I can't see and don't know about. At the very least, I think that models like Gemini should tell you that they have transformed your prompt and should show you the transformed prompt so that you know what the image or the text response you are getting actually reflects. And that is what ChatGPT does, by the way. When you ask it to make you an image, it will transform your prompt in the background, but
then once the image is generated, you can click a little info button and it will tell you the prompt, which is, you know, often quite elaborate. I appreciate this feature. I mean, look, it's a really interesting product question, because speaking on the ChatGPT side, I can tell you that thing is much better at writing prompts than I am. To me, this totally blew away the concept of prompt engineers, which we've talked about on the show. Once I saw what ChatGPT was doing, I thought, well, I don't need to become a prompt engineer anymore, because this thing is just sort of very good by default. But
there are clearly going to be these tripwires where, when it comes to, I think, reflecting history in particular, we want to be much, much more careful about how we're transforming things. So how do you think this whole Gemini controversy will be resolved? Will heads roll at the company? Will there be people who step down as a result of this? Is it going to meaningfully affect Google's AI plans? Or do you think this is just kind of going to blow over?
I expect that in the Google case, it will blow over. But I do think that we have seen the establishment of a new front in the culture war. Think about how long in the past half decade or so we spent debating the liberal and conservative bias of social networks.
And, oh, you know, the congressional hearings that were held about, hey, I searched my name and I'm a congressman and it came up below this Democrat's name. What are you going to do about it? And we just had this whole fight about whether the algorithmic systems were privileging this viewpoint or that viewpoint. That fight is now coming to the chatbots and they are going to be analyzed in minute detail. There are going to be hearings in Congress. And it really does seem like people are determined not to learn the lesson of
the content moderation discussion of the past decade, which is that it is truly impossible to please everyone. Yeah, I do think we will have a number of exceedingly dumb congressional hearings where people hold up giant posters of AI-generated images of Black popes or whatever, and just get mad at them.
I do think some of the fixes that we've discussed to prevent this kind of thing from happening are sort of short-term workarounds or things that Google could do to sort of get this thing back up and running without this kind of issue.
I think in the longer term, we actually do need to figure out how the rules for these AI models should be set, who should be setting them, and whether the companies that make them should have any kind of democratic input. We've talked a little bit about that with Anthropic's constitutional AI process, where they actually have experimented with asking people who represent a broad range of views, like, what rules should we give to our chatbot?
I think we're going to be talking more about that on the show pretty soon. But I do think that this is the kind of situation, and the kind of crisis for Google, where a more democratic system for creating the guardrails around these chatbots could have helped them.
I think that sounds right, but let me throw another possible solution at you, which is: over time, these chatbots are just going to know more about us. You know, ChatGPT recently released a memory feature. It's essentially just part of the context window, where the AI stores some facts and figures about you. Maybe it knows where you live. Maybe it knows something about your family. And then as you ask it questions, it tries to tailor its answers to someone like you. I strongly suspect that within a couple of years, ChatGPT, Gemini, they're going to have a pretty good idea of whether you lean a little more liberal or a little more conservative, of whether you're going to freak out if somebody shows you a non-white founding father or not. And we're going to essentially have all these more custom AIs. Now, this comes with problems of its own. I think this brings back the filter bubble conversation: hey, I only talk to a chatbot who thinks exactly like me. That clearly has problems of its own. But I do think it might at least dial down the pressure on Gemini to correctly predict your politics every time you use the damn app.
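As a concrete sketch of what "memory as part of the context window" might look like: stored facts about the user simply ride along in the prompt on every request. This is an assumption-laden illustration, not ChatGPT's actual implementation; the function names and the stubbed model call are made up for the example.

```python
# A minimal sketch of a chatbot "memory" feature (illustrative, not ChatGPT's real design).

memories: list[str] = []

def remember(fact: str) -> None:
    """Save a fact about the user for future requests."""
    memories.append(fact)

def chat_model(context: str) -> str:
    """Stand-in for the actual chatbot model."""
    return f"<answer conditioned on: {context!r}>"

def ask(question: str) -> str:
    # Every saved fact is prepended to the context window, so the model
    # can tailor its answer to this particular user.
    memory_block = "\n".join(f"- {m}" for m in memories)
    context = f"Known facts about the user:\n{memory_block}\n\nUser: {question}"
    return chat_model(context)

remember("Lives in San Francisco")
remember("Prefers brief answers")
print(ask("What should I make for dinner?"))
```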
Yeah, I think that's right. I also worry about Google bringing this technology closer and closer to its core search index. You know, it's using Gemini already to sort of expand on search results. And I just think that people are going to freak out when they see examples of
the model, as it will continue to do no matter what Google does to try to prevent this, giving them answers that offend them. I think it's a very different emotional response when a chatbot gives you one answer than when a search engine gives you 10 links to explore. You know, if you search images of the American founding fathers on the regular old Google search engine, you're going to get a list of things. And some of what's at those links might offend you. But you as a user are not going to get mad at Google if the thing
at those links offends you. But if Google's chatbot gives you one answer, and presents it as this kind of oracular answer that is the one correct answer, you're gonna get mad at Google, because they built the AI model. So I just think, in a lot of ways, this episode with Gemini has kind of proven the benefits of the traditional search engine experience for Google. They are not taking an editorial position, or at least users don't perceive them as taking an editorial position, when they give you a list of links. But when their chatbot gives you one answer, they do.
That's right. So maybe that's a reason why companies like Google should rethink making their footnotes just like the tiniest little numbers imaginable that you can barely even click on with your mouse, you know? Maybe you want to make it much more prominent where you're getting this information from so that your users don't hold you accountable when your chatbot says something completely insane. All right. So that is what's going on with Gemini. When we come back, Kara Swisher on her new book, Burn Book. I hear she has some burns for us. ♪
I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.
Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.
It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.
Kevin, let me share a quick story about our next guest. One time I was asking her for advice and she gave me great advice about my career. She always does. And then she sort of wrapped up by looking me up and down and she said, but just remember, no matter what happens, you'll be dead soon.
And that's Kara Swisher in a nutshell. Kara Swisher, legendary journalist, chronicler of Silicon Valley. Kevin, on top of all that, she also founded the very podcast feed that you're now listening to. Yes. So today we're talking with Kara Swisher. Kara, of course, is the legendary tech journalist and media entrepreneur,
She has covered the tech industry for basically as long as the tech industry has existed. She co-founded the publication Recode and the Code Conference. She used to host a podcast called Sway at The New York Times and was a New York Times opinion columnist. And in a bit of internecine podcast drama, there was a little dust-up, if you will, when she left The New York Times a few years ago and the podcast feed that her show had used was turned into the Hard Fork feed, the very feed on which our episodes now rest. That's right. She has feelings about that. She does. You may hear them in this very interview.
But that's not why we're interviewing her. Kara, in addition to being one of the great tech journalists, is also a friend and a mentor to both of us. She was actually your landlord for many years. That's right. Very good landlord. I needed to replace a stove one time. She didn't even blink. She said, just do it right away. But that's also not the reason we're talking to her. We're talking to her because she has just written a new book called
Burn Book. It is a memoir full of stories from her many years covering Silicon Valley and bumping elbows with people like Elon Musk and Mark Zuckerberg. I read the book, and it is a barn burner. Yeah, this is a book where Kara, who is
famously productive, kind of slows down and goes back through decades of history, talking to some of the titans of Silicon Valley. And I think it chronicles her disillusionment, honestly, with a lot of them. I think she arrived here and was captivated by the promise of the internet. But as the years have gone on, she's become more and more disappointed with the antics of some of the people running this town. Yeah, totally. So
I want to ask her about the book, but I also just think it's a good time to talk to her in general, both to see if we can finally clear up all this drama around the podcast feeds, and also to get her take, as someone who's been around this industry longer than almost anyone I know, on the state of tech: what's happening in the industry, what's happening in the media and the tech media specifically, and where she thinks we're all heading. Yeah. And as for me, I'm just trying to get my security deposit back. Yeah.
One note about this conversation: it's very energetic, and I think that energy inspired Kara to drop a lot of F-bombs. So if you're sensitive to that, or you're listening with younger listeners, you may want to fast-forward through this segment. Yeah, she used up our whole curse-word quota for all of 2024 in a single interview. So just... Aw, rats. Aw, dang it! ♪
Hi. Hey, how you doing? Good, how are you? What's going on? I got a book to sell. Let's move. Oh, this is going exactly how I wanted it to.
Yay. Kara Swisher, welcome to Hard Fork. Thank you. I can't believe I'm here. I was refusing and banning you people. It reminds me a little bit of one of those home improvement shows where the homeowner goes away for the weekend, and they come back and their house has been redecorated without their knowledge. So how do you like what we've done with the place? It's fine. It's bro-tastic, is what I would say. Let us explain for the people what happened here. Say what happened. I created it.
Okay, before this happened, The New York Times was not going to do this show with Kevin Roose. And I actually called Sam Dolnick and said, you're a fucking idiot, and if you don't give him the show, I'm going to find him another job, and I can do it. And he was like, good to talk to you, Kara. You know, he's very gentle. He's a gentle man. Sam Dolnick is one of the top editors at The New York Times. Yeah. Yes. Okay. He's also a...
He's a member of the Sulzberger family, the clan that owns the paper. Let's add that in for disclosure. Anyway, so he was like, okay. And I was like, they're so good. People love them. And I sold them. It was dead. That show was dead. And I revived it. I gave it CPR. Boom.
Yeah. I did that to it. Yeah. And then when I left, which, okay, I left. I left the relationship. It's fine. I said, please, if you're going to use the feed, tell listeners. Don't do a U2. Don't shove the album at them without their consent. And that's precisely what they did. So you stole my feed after I helped you get the show. And this must stay in: if Paula, the head of audio at The New York Times, tries to take it out, I will find her, and it will be bad for all of you. And that is a burn. That is an official burn. We have gotten the Swisher treatment now. As always, as Maui says in Moana: you're welcome.
Well, when we pitched Hard Fork, we were thinking about taglines for the show, and one of them that I had considered was, Hard Fork is a show that tries to answer the question, what if Kara Swisher had gone to anger management class? What?
I got angry. Oh, right. I'm scary. That's right. That's why all the men are scared of me. I'm so scary. There's a question, though, for me in that story. So you have this story where you call a powerful person and you yell at them and you get what you want. This approach has never once worked for me. I cannot just call and be angry. Well, that's because you're...
So this is my question. And I think you have often used your sword. You've used sharp elbows to get what you want. And I wonder, was that there from the beginning, or did you lean into it over time? Let's address why it doesn't work with you. Because no one believes you'll do anything to them, right? So you are what is known in the business as a soft touch. I'm a bit of a softie, yeah. Not just a softie, but really squishy is what I would say. Okay, fair enough. Nobody's...
Nobody thinks Casey's going to do anything, right? They don't know what could happen. And they're like, hmm, Casey's doubtful. And sorry, Kevin, you too. A little bit less with you. Oh, thank you. I appreciate that. They think you're going to marry AI. And it's like, hmm, we don't care about his sexual preferences. But –
You dined out on that one, by the way. Let's just put a pin in that. I would just like to say, I'm glad we finally invited someone on the podcast who is meaner to Casey than I am. It's hard to be mean. Well, as people know, and in the interest of full disclosure, Casey was my tenant for many, many years in my cottage in San Francisco. And by the way, he left the place a fucking mess, so I had to charge him the security deposit.
So he will not be getting his security deposit back. He did not get it back, and he had to pay more on top. Wow. Okay. Well, you're stepping on my first question here, which is: Kara, in your book, you talk about your approach to interviewing, which is to start with the most uncomfortable question rather than leaving it for the end. So let me channel my inner Kara Swisher and ask you: what is the worst thing Casey ever did to your house? Okay. Oh, nice. I like it. He
painted a wall in this weird way. They had grass, plastic grass, all over it. And when we took the plastic grass off (this is an old house, a hundred years old or more), it pulled off whatever was there. And so I had to have the entire thing redone, and it cost me like $9,000 for this one fucking wall. And it was, like, crazy. Kara, let's talk about your book. So most journalists, if you ask them the question, why did you write this book, they'll give you some fake answer, because the real answer is almost always money or attention. But you already have lots of money and you're already famous. So why'd you write a book?
More money and more attention. And it's working out rather nicely. I did not want to write the book. I honestly didn't. And for years, Jon Karp, who was my first editor on the very first book (he's now running Simon & Schuster, but he was a young editor, and I was a young reporter), he's the one that got me to write the first book on AOL, because I brought him a different book about this family I had covered called the Hafts. It was a retail family, because I'd covered retail.
And he said, this is not good. I don't like this. What are you doing now? And I started to explain AOL and the early internet to him. And he's like, that's the book. Can you write that book? And he bought the book. And I wrote that book. And he really did change the trajectory. And it was a really good calling card into Silicon Valley when I moved there in 1997. And so whenever there was the Yahoo thing with Marissa Mayer, or Twitter, or Google, you know, there were a hundred of these, any of them, the first call was always to me: would you like to write a Google book? I said, I'd rather poke my eyes out. Like, I've already covered it. And I just didn't want to write the, like, longer news story of something, with little tidbits of, like, what Jack Dorsey called Elon Musk, you know.
And I like those. I think people should do them, but I have no fucking interest in it. Right. And so I turned them down, and then he came back to me with a literal bag of money. It was a truck of money, I'll be honest. It was a lot of money. And it was a two-book deal. How much money? Right? Yeah.
Two million dollars. Good for you. Okay, there you go. You didn't expect me to say that, did you? Ha-ha. So it was for two books, and one had to be a Silicon Valley book. The other, I could do whatever I wanted. And so I liked that. I thought that was cool. Then I could do whatever I want for the second book.
And so one of the things also that prompted me was Walt Mossberg had a memoir deal, a very pricey one also. And he didn't do it. He decided he was like, fuck this, I'm not doing it. And I thought someone should. That was definitely part of it was that Walt was not doing it.
Well, he was your very good friend and business partner. You guys started All Things D together. The book is dedicated to him. And you said, I'm going to write the memoir that maybe Walt chose not to do. Yeah, a little bit. He would have done a different one, because he was so close to Jobs and would have focused on that. But when he didn't do it, I thought, well, someone has to do it. And I think I had probably met more of them than anybody else besides Walt.
And so that was really it. Let me ask you, one of the things that I admire most about you as an entrepreneur is that you are not nostalgic or sentimental. You don't spend a lot of time looking back. You've always been hyper-focused ever since I've known you on what is next. Was it uncomfortable to shift into this mode where you're like, oh, God, I got to think about the last 20 years and all of this stuff?
And the problem was, I'd forgotten a lot. Now, as I'm going through this book tour, I'm like, oh, do you remember when Yahoo did news and they hired Banana Republic to... that's not in the book. And I'm like, oh, that would have been good to put in there. Like, a lot of memories are coming back. People come up to me: do you remember this? And I look at them, I'm like, I don't even remember you, so no. But I did a lot through photos. I looked at a lot of photos. Like, oh, I remember that. The photos in the book are great.
They're great. I just got sent one. One of the chapters opens at Google with this White Russian ice sculpture, this ice lady with the white Kahlua coming out of the boobs. It was a baby shower. And Anne Wojcicki just sent me that photo. She's like, in case they question you about the Kahlua naked ice lady. I'm like, thank you. Thank you. I was aware.
But I really dragged my feet here. I was two years late on this book, but actually it's well-timed right now, because in the interim, Elon went crazy on AGI. Yay! And so I was late, and Jon was like, Kara, you really need to write this. And I was like, whatever, you can't get the money back. You're not going to take it. That would be ugly. And so then I did. I really got serious about it, and I hired Nell Scovell. I don't know if you know her. She did the Lean In book with Sheryl Sandberg.
And she knew the scene, and she was sort of a separate book editor. And I hired her, and she really helped me shape it and remember things. And she was so knowledgeable about these times, and very funny. So she really helped me quite a bit. Yeah.
The book really chronicles, I think, a story of disillusionment for you. You arrived in Silicon Valley, I think, very optimistic. You were very early to realize that the internet was going to be huge at a time when— I loved it. I loved it. Yeah. And even your editors were saying, Kara, is this going to be that big of a deal? And you said, yes.
When you sat down to write it, did you think this is going to be the tale of how I sort of became disenchanted or did that emerge as you were writing it? No, I was disenchanted, as you know. You know what I mean? And I think I helped you get disenchanted a little bit. Oh, sure. Yeah, you know, I think
I had, over the course of time, and it was much earlier. Once I got to All Things D, you could see the sharpness coming in, because you couldn't do that at The Wall Street Journal, because you're a beat reporter. Yeah. And so you could see it, whether it was about Google trying to take over Yahoo, or Marissa Mayer at Yahoo, or all the CEOs of Yahoo, by the way, or Travis Kalanick. We were much sharper. And a lot of it, especially when those valuations went up in the late 90s, you're like, this isn't worth that. This is bullshit. And one thing that I did go back to do, because I was wondering how skeptical I was: I went back and found my very earliest Wall Street Journal articles. I got to the Journal in '96 or '97 and moved to San Francisco.
So one of my articles was, here's all their stupid titles and why it's bullshit, essentially. That was one. Job titles, you mean. Job titles. I wrote a whole story about their dumb job titles. And then I wrote a whole story about their dumb clothing choices. And then I wrote a whole story about their dumb food choices. And then the last one I wrote, which I liked a lot, was all the sayings they had that were just performative bullshit. And they put them all in the Wall Street Journal. So I must have...
started to be annoyed early. And the Journal, I've got to say, let me do that. So I was covering the culture, too. Like that one about their sayings, like, we're changing the world, it's not about power. I was like, here's why that's bullshit. And then it started to get ugly, I think, around Beacon with Facebook and some of the privacy violations there that seemed malicious. It started to seem malicious. Yeah.
Right. Yeah. I mean, you have an unusual role in tech journalism these days, which is that you are a chronicler of tech, but you are also someone, as you write in the book, that people in the tech world will call for advice. You know, what should I do about this company? Should I buy this startup or should I fire this person? That only happened once. Should I make this strategy decision? Yeah. Yeah. So how do you balance that? It's not quite like that. It's not.
It's actually not quite like that. It's not like... If I had done that, I would have done it for a living, right? It wasn't quite like... A very typical thing. One you're referencing is the Blue Mountain Arts. I had written a big piece on them, and I got to know them, and they were very... This is a company that made...
E-cards, if you remember those. E-cards, right? Remember they got huge? So I wrote about that phenomenon in the Journal. And so at the time, Excite had merged with @Home, in an unholy whatever-the-fuck that was, and they were trying to buy it. And a lot of people were trying to buy it. Amazon looked at it and everything else, because the traffic was enormous for this Blue Mountain Arts site. And they had these really kind of silly, you know, very saccharine cards that you sent. But it was big. The traffic was enormous, and everyone was buying traffic then. And Excite@Home (it was George Bell, remember him?) was going to pay for this. And the woman who started it with her husband called me, and she was very innocent. She wasn't like most of the Silicon Valley people. They lived in Colorado. They were hippies.
And she's like, Kara, I've just been offered $600 million for this company. And I was like, what? That's a news story. Thank you for that. And she wasn't off the record or anything else. And she said, what should I do? And I was like, okay, this is going to be a news story now. I'm going to write it. Thank you. But let me tell you... and I did, right away. And I said, but
my only advice to you is: get cash, because the jig is frigging up if they're offering you six hundred million dollars. Personally, I only did it for her because she was so unsophisticated in that regard. And I said, do not take their stock. Do not. Do not. Do not. And that was, I guess, my big one, and I didn't get a vig for it in any way whatsoever. And then another time, I was with Steve Jobs after Ping came out. You remember Ping, their social network? This was Apple's attempt to launch a social network.
Yeah, it's the only time they followed a trend, really. They're not big followers of trends in a lot of ways. And so they were not a social networking company. But they did it, this Ping thing. And it was focused on music, I think, if I recall. And Steve Jobs had introduced it, and he had Chris Martin –
you know, from Coldplay. And when he came out into the demo room, he saw me, and Walt wasn't there, so he had to talk to me, I guess. I was like his second choice, or fifth, really. And he comes over, and he goes, what did you think of Ping? And I said, oh, that sucks. It sucks. It sucks. And he's like, it does. Like, he knew it. He was, like, mad at himself for agreeing, right? And I said, and I also hate Chris Martin. Yeah.
So maybe that's, you know, affecting me. I can't stand Coldplay. They're so whiny. And he's like, he's a very good friend of mine. I'm like, oh, sorry. Apologies, but he still sucks. And so that was that advice. I don't think he closed it down because I said it sucked; he knew it already.
I didn't tell him anything he didn't know. It was stuff like that. So that brings up one of the most interesting dynamics in your career to me, which is that so many of the indelible moments you've created as a journalist have been live on stage with folks like Steve Jobs and Bill Gates and Elon Musk. And there's this real tension where you are really tough on them on stage. And also you have to get them to show up. So what was your understanding over the years of why they showed up?
Well, Marc Andreessen called it Stockholm syndrome, but I don't believe that. I think, in the case of Jobs, he wanted that. He was tired. He didn't like talking points. He really didn't. You know, it's that scene from A Few Good Men: he wanted to tell me he ordered the code red. You know what I mean? Like that kind of thing. A lot
of them are tired of it, like in a lot of ways, and they want to have a real discussion and they want you to see them. Part of it's probably seeing if they could best me or Walt in that case for those many years. The other was it had a sense of event, right? Everybody was there. And so they had to be there. And to be there, they had to be on those chairs, right? And one of the things we did, which I think was unusual, and when we first did it,
I'm not going to say... the New York Times said that it was, like, you know, ethically compromised, and then went right ahead and did it themselves. But they did. They wrote a piece about it. And we were like, what's the difference between doing an interview and putting it in the pages and selling advertising against it, and what we were doing, which was live journalism? That's how we looked at it. And one thing we did, which was very clear, including for Jobs, is we didn't give them any questions in advance. A lot of those conferences had done that. We didn't make any agreements. We also got them to sign, in advance, the agreement to let us use the video and everything else. And the only person... at one point, Jobs was like,
I'm not signing it. Right before he went on, that was the only one. And I think Walt said to him, he goes, okay, we're just going to say that to people on stage, that you are going to be able to see it. And then he signed it. Right. And so, I don't know, I just feel like they just wanted to mix it up. I think it was fun. It was also super fun, right? Like, whatever. I was really charmed by your book, which I read because I know you, and it felt like peering directly into your brain. It has gotten some criticism,
Oh, I know. From the New York Times. My wife gave me my sources. That was a nice piece of news.
Right. This was one of the criticisms in the Times review, that you'd been married to a... But it's not a criticism. It's an inaccurate statement. I was a reporter seven years before I met her. Why would you put that in? We should just explain: your ex-wife was an executive at Google for many years. Years later, after I started. Yes. And this was a line in, I would say, an otherwise pretty even-handed review, but it did call attention to the fact that you'd been married. Not at all. Go ahead.
that you'd been married to a Google executive. I know we know that this was not how you got your scoops, but this is a criticism that's out there. But I think the criticism that I wanted to ask you about is... I'm going to put a pin in that for you, because, one, I was a tech reporter before I met her. Why would you put a sentence like that in? And secondly, she never leaked to me. No one called to ask me if she was a leaker to me. So that was inaccurate, and it was also an insult to her. She was at PlanetOut. That's really going to give me a real leg up with the tech people. The second part of it was, they liked me because I was a tech entrepreneur like them. I was at The Wall Street Journal and The Washington Post for 10 years before that. So what happened? Did they go in a time machine and know I was going to be an entrepreneur? That
is all, let me just say, inaccurate, and should be corrected. But that's fine. It's the: am I close to them? Do I do access journalism? Right. Yeah, that's the thing I want to ask you about, because you do write in the book about becoming, as you put it, too much a creature of Silicon Valley. And this is also something that has been made of the book, and of your career, and of the careers of other journalists who do the kind of journalism you do: that you're too sympathetic, you're too close to these people, you can't see their flaws accurately, and you have blind spots. So what do you make of that?
This is endless bullshit. I'm sorry. Like, if you go back... I was literally looking at that review. I was like, oh, you started covering in 2009. You didn't read my stories about Google getting too monopolistic. You didn't read our stories about Uber. They're like, until 2020, she didn't realize it. I wrote 40 columns for The New York Times, the first of which called the tech people digital arms dealers. Oh, that's real nice. I'm sorry. It's not true. You have to have a level when you're a beat reporter. This is absolutely true. And you can't do this at The Wall Street Journal. When I'm writing a news story, I can't say, those assholes. I can't say that.
Right.
And so you can say that about political reporters, everyone else. Oh, access. Well, look at the content. Actually, I got Scott Thompson fired because of his resume thing. That was years before, at Yahoo. Yeah. I mean, you can have the opinion about access journalism. I don't think it holds water here. And there is an element of any beat where you have to relatively get along with them. But you make no promises to them. And if I like something, I like something. It does center around Elon. I think that's where it centers, in that I liked him, and compared to all these other people... I mean, I'm making a joke this week: it's like, all these people came to you, and you know this, Kevin, and they had, like, digital dry-cleaning services. You know, after like 20 of those, you're like, stop, kill me now. Kill me fucking now.
And so I wasn't interested in these people, or else they'd found a company, they'd become venture capitalists, and then they'd bring you the dopiest, stupidest idea, which I ended up calling assisted living for millennials companies, right? And that was tiresome. And then when you met Elon, he was doing cars. He was doing rockets. He was doing really cool stuff. And I give it to him. I, you know, slow clap for him on all those things. And so I did like what he was doing. I did encourage that kind of
entrepreneurship, right? I thought that was great. And so I did get along with him. And I'm sorry, he changed. And in the book, I say that very clearly: I didn't misjudge him. He wasn't like that. He changed. And the minute he changed, I changed. So I don't know what to tell you. He wasn't like that. You know, you knew him back then. Casey, you knew him. Yeah. Yes, he absolutely changed. You're getting at something else that really interests me, though, Kara, which is: I think part of being a good tech journalist is not just delivering a moral judgment on every bad thing that happens in Silicon Valley. It's also being open to new ideas. It's also believing that technology can improve people's lives.
And we've had conversations in the past where you have said to me that you think that is important, too, that kind of sense of openness. Like, how have you tried to balance those two things in your mind? Well, I think you've gotten more critical in a good way, right? And you're enthusiastic, too, by the way. I mean, and so are you, Kevin. One of the things, and let me finish that part, is that if you had to pick the person who was a slavish fanboy to tech people and an access journalist...
I don't know. I might look over the 43 covers of Fortune magazine over the many years, you know, where it was all up and to the right. And then, of course, they slapped them later. So I wouldn't be the one I would pick for access journalism, honestly. That's the thing. But I just represent things to people, I guess. I must represent them. Well, in other words, like...
Look, there is no doubt in my mind. Yes, you can like things. You've written plenty of criticism. But also, like, I think most people don't go into technology journalism if they don't think that it has the possibility to do good things for people. Correct, which I say from the beginning of the book. And one of the things is, I think everyone was too... aren't they? Look at your beautiful big brain, Mr. Gates. When I got there, that was the way it was covered, right? And I think there were sort of fanboys of the gadgets, gadget fanboys. The second part that happened was, and I think we led the way at All Things D, for sure, it got too snarky, right? It was, everything sucked. And I'm like, everything doesn't suck. And the minute you say that, you're their friend. I'm not their friend. I just think
I don't know. Some of it's cool. Like, even crypto, I was like, this seems interesting. And you have to be open. This gets to a criticism that I'm sure all three of us hear from people in the tech industry, which is that the media has become too critical of tech: that they can't see the good, that they're sort of overcorrecting for maybe a decade of probably too-positive coverage, blaming them for getting Donald Trump elected,
or ruining democracy or whatever, and that they are sort of becoming the scapegoat for all of society's problems. What do you make of that? I think...
And to an extent, that's a little bit true. But it's also true that they actually did do damage. Like, come on, stop it. They didn't cause the riot at... you know, not the riot. It's not a riot. It was the insurrection, on January 6th. But they were certainly handmaidens to sedition, weren't they? Like, come on, stop it. You can trace that so quickly. Same thing that's going on now. They don't want to take any responsibility. They resist. And now, as you know, there's the victim mentality, the industrial grievance complex among these people. You know, when Marc Andreessen wrote that ridiculous techno-optimist manifesto, it's: you're for us or against us. I'm like, oh, my God. And, you know, when Elon goes on about the man, I'm like, you're the man, you man. Man? Like, that's the kind of stuff. So, no, I think to an extent, yes. When it's instantly,
you know, Mark Zuckerberg is villainous... I don't consider him villainous. I don't. I don't. But is he responsible? And the way you do that is, say, that interview I did with him about Holocaust deniers. That's how you show it. Like, I think he's just ill-equipped in that regard. I don't think he sits in his house and pets a white cat and goes, what shall I do to end humanity now? And I do think there's a little bit of that, especially among younger reporters, that they have to get people involved.
I don't think... and there's people I like. I had a whole chapter. I think Mark Cuban's journey has been really interesting. But we all get that. We all get that because it's our fault. As we have decreasing power, it's all our fault. Like, really? Walt Mossberg used to be able to make and break companies. We cannot. None of us. Even collectively, if we put our little laser rays together, we couldn't do it. Couldn't do it.
All right, Kara. So, last question. We have to ask about this huge scandal that just broke today. Amazon has been flooded by copies of books that are pretending to be Burn Book but are not Burn Book. They're using generative AI to create versions of your face, like, wearing your signature aviators. What is your response? Not always. Did you see the femme one? Yeah. To me, I prefer more butch Kara, but all versions of Kara are beautiful. I agree.
No, these versions are not. These are the versions my mother wants to happen, right? My mother's like, this is great. There's one, this is one title: Tech Queen Bee with a Sting, by Barbara E. Frey. And then there's another one. They're crazy. So this is not a new thing with me, but 404 Media wrote about it, I think. So I was just with Savannah Guthrie, and she's written this book about faith and God, right? It's a bestseller. And they created workbooks that go with
the book. Savannah has nothing to do with these workbooks. And they're doing it with me. So there's all these Kara books. So I, of course, put them all together. And I sent Andy Jassy a note and said, what the fuck? You're costing me money. The CEO of Amazon. Yes. So I literally, I was like, what the fuck? Get these down. Like, what are you doing? It's as if I was the head of Gucci and there's all these knockoffs.
Right. Whatever. It's not dissimilar, but it's AI-generated, clearly. And just to make a very Kara Swisher point, I think it's been obvious that this was going to happen for a while, and the platforms have not taken enough steps to stop it. Nothing. Yeah, nothing. So we have some final questions for you. Sure. Go ahead. Yeah. OK. Number one, very commonly, people who know that we're friends will ask me, like, is Kara Swisher really like that? Like, is she really like that when the cameras are off, when the mics are off?
you know, what is she really like? And I always tell them, like, there is no off switch on Kara. She is Kara wherever she is in whatever context. And I think that's one thing that's really consistent throughout your entire book is that, you know, this is not an act. This is who you are, this tough persona, this very candid, very blunt person. Yeah. And I just want to know, like,
How did you get that way? I was that way from when I was a kid, honestly. Maybe my dad dying, I don't know. When I was born, I was called Tempesta. So I kind of feel like it's kind of genetic in some fashion. So I don't know. I just was one of these people, maybe because I was gay and nobody,
You know, nobody liked gay people, and I didn't understand that. I was like, I'm great. What are you talking about? I just was like this. When I was in school, I walked out of the class, I was like, I read this. I'm not going to waste my fucking time here with you people. And I think I was four. Yeah.
You know, I was like, I've already read it. Let's move along. And so, you know, I was always like that. And it's sort of my journey to becoming Larry David. Right. And now I find myself saying lots of things out loud. I'm like, no, what are you doing? What's going on here? Like, you know, what's with that?
And so I say that a lot in a lot of things I do. I don't know why. Though one of the things, I think you must stress this to people, is I'm actually not mean. That's a very sexist thing with people. I think most people, often they say two things first.
I thought you were taller, which I'm very short. And I thought you were mean, but you're very nice. And I can be very polite and, you know, I'm straightforward is what I am. The thing about you that people don't see is that you are so loyal to all the people who work for you. You truly are. You take time to mentor. You identify people who you think could be doing better than they are, and you just proactively go out and help them. I have been a huge beneficiary of that. I truly can never thank you enough for that.
But like that is the one thing that doesn't come across in the podcast and the persona is that behind the scenes you are helping a lot of people. Thank you. I'm sorry if that hurts your rep a little bit, but I did want to say that. I won't demand an apology from both of you. We didn't have anything to do with the feed for the record. I know you didn't, but you know what? You could have stood up for the kid. You could have done I am Spartacus. All right. Last question, Kara. I am Spartacus. Say it. I am Spartacus.
Just once for your overlords there at The New York Times. Let me just say one more thing about that. Yeah. One thing that does bother me, and especially around women, and it's a big issue in tech and everywhere else, is some of the questions I'm getting on the podcast, and it's always from men, I'm sorry to tell you this: How are you so confident?
Or the phrase uncommon confidence. It's ridiculous. The fact that women have to sort of excuse themselves constantly is an exhausting thing for them and everybody else. That's where I get really mad. That makes me furious, and I sort of pop off when that happens. Yeah. Excellent. Last question. Yeah.
In your book, you write about what I would consider the sort of last generation of great tech founders and entrepreneurs, the Steve Jobses, the Mark Zuckerbergs, the Bill Gateses, these people who we've been sort of living with now for decades, using the products and the services that they've
built. We're now in this sort of weird new era in Silicon Valley where a lot of those companies look sort of aging and maybe past their prime. And you have now this big AI boom, and a new crop of startups raising huge gobs of money and trying to transform entire industries, which has everyone excited and terrified. Do you think today's generation of young tech founders have learned the lessons from watching the previous one?
They'll probably disappoint me once again in this long relationship. But I do. I do think they're more thoughtful. I find a lot of them much more thoughtful and very aware. Just the way when you talk to young people about uses of social media, I think the craziest people are 30 to 50, not the younger people. My sons are like, oh, that's stupid, Mom. You know what I mean? My older sons. My younger kids just have Frozen on autoplay.
That's the whole experience with tech. But I think they're smarter than you think, right? And they are aware of the dangers. I think they're more concerned with bigger issues and more important issues. There's not the stupidity, right? There's not the sort of arrogance that you get. There seems to be a little bit of starch out of the system. Maybe I'm wrong, but I do feel that some of their businesses make sense to me. I'm like, okay, yeah, I get this insurance AI. They explain it to me and I'm not like,
oh my God, I want to poke my eye out, kind of thing. That's one thing. They're also, they will say, like a Sam Altman, who I've actually known since he was like 19, they will say there are dangers. They never did that. You know that, right? Everything is up and to the right. It's so great. We're here to stay. I don't get that. I couldn't write that same Wall Street Journal article, which is stupid things they say: We're going to change the world. You're not. And that's why the very first line of the book is, so it was capitalism after all. And
I am a firm believer that it is, and they are aware of that. And so, yeah, I have a little more hope, especially around climate change tech and some of this AI stuff. I'm not as scared of AI as everyone else is, although I'm a Terminator aficionado, so it's kind of interesting. But I don't like the techno-optimists. I really don't like them, but I really don't like the ones that are like, it's the end times, right? During the OpenAI thing, someone close to the
board, one of the decelerationists, literally called me and said, if we don't act now, humanity is doomed. And I'm like, you're just as bad as fucking Elon Musk, who said the same thing to me: if Tesla doesn't survive, humanity is doomed. You ridiculous fucking narcissists. Like, sorry, it's going to be an asteroid or the sun's going to explode, but it's not because of you. I don't know. Do you guys feel that?
I think you've hit on something important, which is that the new generation has wised up. They have taken the lesson of the past generation and they've updated their language. But at the same time, they are being quite grandiose, and they do talk in terms of existential risk. And so I feel like it always keeps us off balance, because we're never sure exactly how seriously to take these people. I want to see new leaders.
Not like the Elon Musk thing. Let me end on this. I just reread the eulogy by Mona Simpson, who was Steve Jobs' sister, whom he only met in adulthood because he hadn't known her. You've got to go back and read that. It was really a remarkable thing. He was so different. I know he was mean. But today, he looks like a really thoughtful, interesting person. He knew poetry. He knew differences. He understood risks.
He didn't shy away from that, even though he did the reality distortion field. It was about the products. It wasn't about the world. Can you imagine Tim Cook going, this is what I think of Ukraine, everybody, right? He wouldn't, because he's not an asshole, you know, kind of thing. And so I really urge people to read that eulogy that his sister, Mona Simpson, wrote. It's in The New York Times, actually. It's wonderful, because it really was a different time. And I'm hoping the young people do embrace
the more thoughtful approach versus this ridiculous, reductionist us-or-them, the-man, hateful stuff. It's hateful is what it is. That's not a vision of the future. It's dystopian. It's the guy in Total Recall who ran Mars. Fuck that guy. Right? You know, so I have hopes. I'm still in love. I'm still in love.
Not with you two, but yes. She had to get in one last burn on her way out. Yeah, exactly. Thank you for coming. Can I just say, you guys have done a nice job with my feed, and you've created a beautiful show. It's a great show. I really like your show. Thank you. Anytime you need help, boys. That means a lot. We had Demis Hassabis on our podcast last week, and I noticed he hasn't come on yours, so if you'd like any help booking guests...
Just let us know. Actually, Kevin, I wonder who broke that story when it was sold to Google. I'm just kidding. I'm just messing with you. Kara Swisher, the legend. That was Go Looking?
Kara Swisher broke that story. The book is called Burn Book. It's available everywhere you get your books. I will be there. I will be there after you. I was there before you. I am inevitable. She's a fan of some journalism. Let me just say, I'm at CNN right now. Do you know I have a show now? It's about time you got a break. Yeah, I know. Right. Exactly. Kara Swisher, thanks so much for coming. This was amazing. Thank you, Kara. Thank you, boys. I appreciate it.
When we come back, the Supreme Court takes on content moderation. ♪♪♪
Casey, you and I have written a few times over the years about the issue of content moderation on social media. Yeah, one of the biggest issues it seems like anyone wants to talk about when it comes to the social networks. And this week is a particularly big week in content moderation land because the Supreme Court of the United States heard arguments for two cases that are directly related to this issue of how social networks can and cannot moderate what's on their services.
On Monday, Supreme Court justices heard close to four hours of oral arguments over the constitutionality of two state laws. One came out of Florida. The other is in Texas. Both of these laws restrict the ability of tech companies to make decisions about what content they allow and don't allow on their platforms. They were both passed
after Donald Trump was banned from Facebook, Twitter, and YouTube following the January 6th riots at the Capitol. Florida's law limits the ability of platforms like Facebook to moderate content posted by journalistic enterprises and content, quote, by or about political candidates. It also requires that content moderation on social networks be carried out in a consistent manner.
Texas's law has some similarities, but it prohibits internet platforms from moderating content based on viewpoint, with a few exceptions. Yeah, so this is a really big deal. Right now, platforms remove a bunch of content that is not illegal. You know, you're allowed to insult people, maybe even lightly harass them. You can say racist things. You can engage in other forms of hate speech. That is not against the law. But platforms, ever since they were founded, have been removing this stuff because...
for the most part, people really don't want to see it. Well, then along come Florida and Texas, and they say, we don't like this, and we're actually going to prevent you from doing it. So if these laws were to be upheld, Kevin, you and I would be living on a very different internet. Yeah, so I think when it comes to content moderation and its legal challenges, this is the big one. This pair of lawsuits...
is what will determine whether, and how dramatically, platforms have to change the way that they moderate content. Yep. Okay, but we want to bring in some help to get through the legal issues here today. Yeah, so we've invited today an expert on these issues. This is Daphne Keller.
Tell us about Daphne. Daphne is the person that reporters call when anything involving internet regulation pops up. She is somebody who has spent decades on this issue. She's currently the director of the Program on Platform Regulation at Stanford's Cyber Policy Center. She has done a lot of great writing on these cases in particular, including a couple of incredibly helpful FAQ pages that have helped reporters like me try to make sense of them.
Yeah, so Daphne is opposed to these laws, we should say. She believes that they are unconstitutional and that the Supreme Court should strike them down.
But this is not a view she came to lightly or recently. She's been working in the field of tech and tech law for many years. We'll link to her great FAQs in the show notes. But today, for sort of a breakdown of these cases and how she thinks the Supreme Court is likely to rule, we wanted to bring her on. So let's bring in Daphne Keller. ♪♪
Daphne Keller, welcome to the show. Thank you. Good to be here. So I want to just start. Can you just help us lay out the main arguments on either side of these cases? What are the central claims that Texas and Florida are using to justify the way that they want to regulate social media companies?
So, I mean, it's not that far away from the basic political version of this fight. The rationale is these are liberal California companies or they were liberal California companies and they're censoring conservative voices. And, you know, that needs to stop.
My understanding is that this is probably the only Supreme Court case in the history of the Supreme Court that had its origins in a Star Trek subreddit. Can you explain that whole thing? So this isn't literally from that case. But so Texas and Florida passed their laws. The platforms ran as fast as they could to courts to get an injunction. So the laws couldn't be enforced.
But a couple of cases got filed in Texas. And the most interesting one, I thought there was just one. I think now there are two actually. But the most interesting one is somebody who posted on the Star Trek subreddit that Wesley Crusher is a soy boy. I had to look up what soy boy means. It's kind of like junior cuck or something. People often call us soy boys. It's kind of like a conservative slur meaning weakling, I think. Yeah. Yeah. Yeah.
As I sit here drinking my green juice. It's just, it's not soy milk. Right. Right. Yeah. So the moderator, it wasn't even Reddit, the moderators of that subreddit took that down.
Because of some rule that they have. I mean, it's deeply offensive to members of the Star Trek community. And the soy boy community. Or the soy boy community, yeah. And the person, I'm going to guess it's a guy, sued saying this violates the obligation in Texas's law to be viewpoint neutral. And it's a useful example because it's such a like total real world content moderation dispute about some dumb thing.
But the question of, like, what it means to be viewpoint neutral on the question of whether Star Trek characters are soy boys helpfully illustrates how impossible it is to figure out what platforms are supposed to do under these laws. Exactly. You take this very silly case, you extrapolate it across every platform on the internet, and you ask yourself, how are they supposed to act in every single case? And it just seems like we would be consumed with endless litigation. So you just returned from Washington, where these
cases were being argued in front of the Supreme Court. Sketch the scene for us, because I've never been. What's it like? So you start out, well, if you're me, you pay somebody to stand in line overnight for you, because I'm old. I'm not going to do that shit. But you really had someone who has to stand in line overnight for this? I had somebody there from 9 p.m., and he was number 27 in line, and they often let in about 40 people. How do you find these people to just stand in line?
Skiptheline.com. Wow. Great tip for listeners. I learned something today. Rick did a great job for me. Shout out to Rick.
Anyhow, so you stand around in the cold for a long time. Then they let you in in stages, one of which the best part definitely is you stand in this like resonant, beautiful marble staircase. And a member of the Supreme Court police force explains to you that if you engage in any kind of free speech activity, you will spend the night in jail. It's like very...
firm, polite. And it's also interesting to hear that there is effectively content moderation on everyone who is in the room before they even enter. They say, hey, you know, you open your mouth and you're out of here. Yeah. So the people making these arguments represent NetChoice, which is a trade association for the tech companies. It's sort of their lobbying group. Who else is opposed to these laws?
So I should say that CCIA, which is a different tech trade association, is also a plaintiff. And they always get short shrift because they're not the first named party. But, you know, a whole lot of individual platforms filed, or free-expression-oriented groups filed. Lots of people weighing in who are interested in different facets of the issue. I see.
For those of our listeners who may not be American or may not have much familiarity with how the Supreme Court works, my understanding is in these oral arguments, you know, the justices rain questions down on the attorneys. They try to answer them as best they can. Then they go away and deliberate and write their opinions. So we don't actually know how they're going to rule in this case. But did you hear anything during oral arguments that kind of indicated to you which way this case might be headed? No.
So there's a lot of tea leaf reading that goes on based on what happens in oral arguments. And usually that's the last set of clues you get until the opinion issues, which seems likely to be in June or something like that. In this case, there's actually another case being argued in March that's related and might give us some interesting clues. But from this week's argument, it was pretty clear that a number of the justices thought
the platforms clearly have First Amendment-protected editorial rights. And it's not like that's the end of the question, because sometimes the government can override that with a good enough reason. But it seemed like there was, I think, a majority for that. But then they all kind of got sidetracked on this question of whether they could even rule on that, because the law has some other potential applications. And they got into, like, a lawyer-procedural-rules fight that could...
you know, cause the outcome to be weird in some way. So let me ask you about that, because, you know, to go back to our soy boy example, to me, if a private business wants to have a website and they want to make a rule that says you can't call anybody a soy boy around here, that does seem like the sort of thing that would be protected under the First Amendment. You know, you write your own policies; that's sort of First Amendment 101. Why is that not the end of the story here?
Well, so what Texas or Florida would say is that these laws only apply to the biggest platforms. And they're so important that they're basically infrastructure now. And you can't be heard at all unless you're being heard on YouTube or on X or on Facebook. And so that's different. Right. Yeah. So what is the...
argument from the states about why they should be allowed to impinge on this First Amendment right that these platforms say that they have to moderate content however they want to, their private businesses. What do the states say in response to that? They say the platforms have no First Amendment rights in the first place, that that's fake, you know, that what the platforms are doing isn't speech, it's censorship, or what the platforms are doing is
conduct, or mostly they just allow all of the posts to flow, so the fact that they take down some of them shouldn't matter. A lot of arguments like that, none of which are super supported by the case law, but the court could change the case law.
I want to ask you about another conversation that came up during these oral arguments that you referenced earlier, which was, which platforms do these laws apply to? There is some confusion about this. And it seemed like the justices had questions about, okay, maybe if we want to set aside for a second the Facebooks and the Xs and the YouTubes, whatever
What about like an Uber or a Gmail? Like maybe there should be a kind of equal right of access there. So I look at that and I say, well, that's a good reason not to pass laws that affect every single platform the same way. But I'm curious how you heard that argument and maybe if you have any thought about how the justices will make sense of which law applies to what and what might be constitutional and what might not be. Yeah, so
That part of the argument, I think, caught a lot of people, including me, off guard. We did not expect it to go in that direction. But I'm a little bit glad it did. Like, I think it was the justices recognizing—
We could make a misstep here and have these consequences that we haven't even been thinking about. And so we need to look really carefully at what they might be. And in the case of the Florida law in particular, the definition of covered platforms is so broad. It explicitly includes web search, which I'm a former...
legal lead for Google Web Search, full disclosure. And it seems like it includes infrastructure providers like Cloudflare. So it's really, really broad who gets swept in. And I reluctantly must concede, I think the justices were right to pause and worry about that. Yeah, for sure. Yeah. A lot of the people I saw commenting on the oral arguments this week suggested that this was kind of going to be a slam dunk for the tech companies, that they had, you know,
done a good job of demonstrating that these laws in Texas and Florida were unconstitutional, and that it sounded after these arguments like the justices were likely to side with the tech platforms. Is that your take, too?
I think enough of them. You need five. I think at least five of them are likely to side with the platforms, saying, yes, you have a speech right, and yes, this law likely infringes it. But because of this whole back-and-forth they got into about the procedural aspect of how the challenge was brought,
It could come out some weird ways. For example, the court could reject the platform's challenge and uphold the laws but do so in an opinion that pretty clearly directs the lower courts to issue a more narrowly tailored injunction that just makes the law not apply to speech platforms.
You know, there are a lot of different ways they could do it, some of which would formally look like the states' winning, although it wouldn't in substance be the states' winning against the platforms, you know, that we're talking about most of the time, the Facebooks, the Instagrams, the TikToks. Very interesting. Yeah. So we've talked about these laws on the show before, and I think we can all agree that there are some serious issues with them. They could force
platforms operating in these states to, you know, open the floodgates of harassment and toxic speech and all these kinds of things that we can all just agree are horrible.
But there is also an argument being made that ruling against these laws, striking them down, could actually do more damage. Zephyr Teachout, who's a law professor at Fordham, recently wrote an article in The Atlantic about these social media laws, called "Texas's Social Media Law Is Dangerous. Striking It Down Could Be Worse," basically making the case that
If you strike down these laws, you basically give tech giants kind of unprecedented and unrestrained power. What do you make of that argument? So I read the brief that Zephyr filed along with Tim Wu and Larry Lessig, and it's like they're writing about a different law than the actual law that is in front of the court. And, you know, I think...
Their worry is important. If the court ruled on this in a way that precluded privacy laws and precluded consumer protection laws, that would be a problem. But there are a million ways for the court to rule on this without
stepping on the possibility of future, better federal privacy laws, for example. It's not some binary decision where the platforms' winning is going to change the ground rules for all those other laws. So you don't worry that if this case comes out in the companies' favor, they are going to be sort of massively empowered with new powers they didn't have before? Well, I mean, if the
court wanted to do it that way, if there are five of them who wanted to do it that way, then it could come out that way. But I can't imagine five of them wanting to empower platforms in particular that way. And I can't imagine the liberal justices wanting to rule in a way that prevents, like, the FTC from being able to do the regulation that it does. Mm-hmm.
A big topic that comes up in discussions of law and tech policy is Section 230. This is the part of the Communications Decency Act that basically gives broad legal immunity to platforms that host user-generated content. This is something that some conservative politicians and some
liberal politicians want to repeal or amend to kind of take that immunity away from the platforms. This is not a set of cases about Section 230, but I'm wondering if you see any ways in which the way that the Supreme Court rules on this could affect how Section 230 is applied or interpreted. Well, you might think it's not a case about 230 because they agreed to review a First Amendment question, full stop, but the states managed to make it more and more, uh,
like a case about 230, and multiple justices had questions about it. So it won't be too surprising if we get a ruling that says something about 230. I really hope not, because that wasn't briefed. This wasn't what the courts below ruled on. It hasn't really been teed up for the court. It's just they're interested in it. There are two ways that 230 runs into this. I think one will be too in the weeds for you, but the more interesting one is...
So lots of the justices have said things like, look, platforms, either this is your speech and your free expression when you decide what to leave up or it's not and you're immunized. Pick one. How can it possibly be both?
And the answer is, no, it can definitely be both. Like, that was the purpose of Section 230, was that Congress wanted platforms to go out there and have editorial control and moderate content. Literally, the goal was to have both at once. Also...
If the platforms have First Amendment rights in the first place, it's not like Congress can take that away by passing an immunity statute. That would be a really good one weird trick. And I'm glad they can't do that. So there are a lot of reasons that that argument shouldn't work. But it's very appealing, I think, in particular to people whose concept of media and information systems was shaped in about 1980. You know, where like if the rule is...
You have to be either totally passive, like a phone company, and transmit everything, or you have to be like NBC Nightly News, where there are just a couple of privileged speakers and lawyers vet every single thing they say. Then you're going to get those two kinds of communication systems. You'll get phones and you'll get broadcast, but you will never get the internet and internet
platforms and places where we can speak instantly to the whole world, but also have a relatively civil forum because they're doing some content moderation. Right. Almost sounds like there's a downside to having the median age of a Supreme Court justice being 72. I don't know what the real age is. I'm sure I'll do a pickup about that later. Now, Kevin, do you want to tell her who wrote the 230 question? No.
Wow, you're going to out me like this? I'm going to out you. So this was a great question that I unfortunately did not write, but the Perplexity search engine did, because I gave it the prompt: write 10 penetrating grad-student-level questions for a law and policy expert about the NetChoice cases. In fairness, I did think it was a pretty good question. It was a very good question. So yeah, wow, you're really doing me dirty here.
I thought I was going to get away with that. Look, we wrote the rest of the questions. We just wanted a little help to make sure we, you know, left no stone unturned. Yeah. And it was a pretty smart question. It's smarter than I would have come up with. And let's say the answer was way better than the question. Yes, it's true. A student of mine sent me a screenshot of something he got from ChatGPT. He'd asked for sources on some 230-related thing. And it cited an article that it pretended I had written, which did not exist, a
discussion of the Twitter Files and Section 230, in a non-existent journal called the Columbia Journal of Law and Technology. It looked very plausible. I'm comfortable being cited in things I didn't write, as long as they were good and in prestigious journals. You know what I mean? I loved your submission to the New England Journal of Medicine so much. It was really good. Saved a lot of lives. So, Daphne, we've talked about how the court will or may handle this case, but
I'm also curious how you think they should handle this. You and some other legal experts filed an amicus brief in this case, sort of arguing for... Actually, let's settle this once and for all. Is it uh-MEE-kus or AM-ih-kus, Daphne? It's both. Okay, great. Wow. Come on, Kevin. And some people say the plural, amici. Oh. Ooh. I ordered that at an Italian restaurant once. I think I saw him DJ in Vegas. Yeah.
Can you just articulate your position in that brief about how you think the court could and maybe should handle this? Yeah. So this is not how the parties have framed it. This is some wonks coming in and saying you framed it wrong. But I do actually think they framed it wrong. So there's...
In kind of a standard set of steps in answering a First Amendment question, you ask: did the state have a valid goal in enacting this law? Does the law actually advance that goal? And does it do unnecessary damage to speech that could have been avoided through a more narrowly tailored law? So in this case, the states say, we had to pass this law because the platforms have so much centralized control over speech. Let's assume that's a good goal.
And we say that doesn't mean the next step is the state takes over and uses that centralized control to impose the state's rules for speech. There are better next steps that would be more narrowly tailored, that would be a better sort of means-ends fit.
And in particular, steps that empower users to make their own choices using, you know, interoperability or so-called middleware tools, where users select from a competing environment of content moderation providers. What would this look like? This would be like a toggle on your, you know, your Reddit app that would say, I want soy boy content,
or I don't want soy boy content. So it could look like a lot of different things, but I know you guys have talked to Jay from Bluesky. It could look like what Bluesky is trying to do, with third parties able to come build their own ranking rules or their own speech-blocking rules, and then users can select which of those they want to turn on.
It could look like Mastodon with different interoperating nodes where the administrator of any one node sets the rules. But if you're a user there, you can still communicate with your friends on other nodes who have chosen other rules. It could look like Block Party back when Block Party was working on Twitter. You sort of download block lists that are aggregated from other people. This was an app that basically lets you block a bunch of people at once. Mm-hmm.
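To make the middleware idea concrete, here is a minimal sketch in TypeScript of the pattern Daphne is describing: third parties publish moderation rules, and the user, not the platform, chooses which ones apply to their feed. All of the types and provider names here are invented for illustration; this is not any real Bluesky, Mastodon, or Block Party API.

```typescript
// A minimal sketch of "middleware" content moderation: the platform
// hosts posts, third parties publish verdicts, and each user composes
// their own set of providers. Names and types are hypothetical.

type Post = { id: string; author: string; text: string };
type Verdict = "ok" | "warn" | "hide";

// A moderation provider labels posts; the client decides what the
// labels mean for display.
interface ModerationProvider {
  name: string;
  label(post: Post): Verdict;
}

// Example provider: hides posts from authors on a shared block list
// (roughly the Block Party model).
const blockListProvider = (blocked: Set<string>): ModerationProvider => ({
  name: "community-block-list",
  label: (post) => (blocked.has(post.author) ? "hide" : "ok"),
});

// Example provider: warns on posts containing muted keywords
// (the "I don't want soy boy content" toggle).
const keywordProvider = (words: string[]): ModerationProvider => ({
  name: "keyword-filter",
  label: (post) =>
    words.some((w) => post.text.toLowerCase().includes(w)) ? "warn" : "ok",
});

// The client combines whichever providers the user has turned on;
// "hide" beats "warn", which beats "ok".
function moderate(post: Post, providers: ModerationProvider[]): Verdict {
  let verdict: Verdict = "ok";
  for (const p of providers) {
    const label = p.label(post);
    if (label === "hide") return "hide";
    if (label === "warn") verdict = "warn";
  }
  return verdict;
}

// One user's choices; another user might enable none of these
// and see everything.
const myProviders = [
  blockListProvider(new Set(["trolluser42"])),
  keywordProvider(["soy boy"]),
];

const post: Post = {
  id: "1",
  author: "someone",
  text: "Wesley Crusher is a soy boy",
};

console.log(moderate(post, myProviders)); // "warn"
```

The design point is that the platform just carries the posts, while the speech rules become a swappable layer chosen by each user, which is roughly what the Bluesky labeler and Mastodon examples above gesture at.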
Yeah. So it could look like a lot of different things and all of them would be better than what Texas and Florida did. I wonder if you can sort of steel man the argument on the other side of this case a little bit. I was going through this exercise myself because on one hand, like, I do think that these laws are a bad idea. On the other hand, I think that the tech platforms have in some cases made their own bed here by being so opaque.
and unaccountable when it comes to how they make rules governing platforms. And frankly, spending a lot of time obfuscating about what their rules are, what their process is, doing these fake oversight boards that actually have no, you know, democratic accountability. It's a kangaroo court. Come on. And I think I'm somewhat sympathetic to the view that
these platforms have too much power to decide what goes and what doesn't go on their platforms. But I don't want it to be a binary choice between Mark Zuckerberg making all the rules for online speech, along with Elon Musk and other platform leaders, and, you know, Greg Abbott and Ron DeSantis doing it. So I like your idea of a kind of a middle path here. Are there other middle paths that you see where we could...
sort of make the process of governing social media content moderation more democratic without literally turning it over to politicians and state governments? It's actually really hard to use the law to arrive at any kind of middle path other than this kind of competition-based approach we were talking about before. The problem is what I call lawful but awful speech. A lot of people use that.
Which is this really broad category of speech that's protected by the First Amendment so the government can't prohibit it and they can't tell platforms they have to prohibit it. And that includes lots of pro-terrorist speech, lots of scary threats, you know, lots of hate speech, lots of disinformation, lots of speech that really everybody across the political spectrum does not want to see and doesn't want their kids to see when they go on the Internet. But if the government can't tell platforms they have to regulate that speech,
speech people morally disapprove of, but that's legal and First Amendment protected, then their hands are tied. You know, then that's how we wind up in this situation where instead we rely on private companies to make the rules that there's this great moral and social demand for from users and from advertisers. And that's just, it's extremely hard to get away from because of that delta between what the government can do and what private companies can do. Yeah.
Well, some people have described our podcast as lawful but awful speech. So I hope that we will not end up targeted by these laws. Daphne Keller, thank you so much for joining us. Really a pleasure to have you. Thanks for having me.
Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.
Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.
Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Diane Wong, Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. If you haven't already, check us out on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with all your sickest burns. Please invite us to your Willy Wonka-themed events, too.