
Casey Goes to the White House + The Copyright Battle Over Artificial Intelligence + HatGPT

2023/11/3

Hard Fork

People
Casey Newton
Kevin Roose
Rebecca Tushnet
Topics
Kevin Roose: This episode discusses the Biden administration's new executive order on artificial intelligence, which aims to regulate the creation of next-generation AI models and to address existing social problems that AI could exacerbate, such as discrimination, bias, fraud, and disinformation. The order requires companies to report the large AI models they train to the government and to disclose the safety testing they have conducted. This requirement has drawn strong pushback from the tech industry, with some arguing that it is the result of industry giants maneuvering to protect their own interests, a form of regulatory capture. Casey Newton: Government regulation of AI was inevitable, but the open-versus-closed-source debate is essentially each side arguing for its own interests. His own view of AI swings back and forth: sometimes AI's performance is startling, and at other times society seems to be adapting to the changes AI brings, making the next one to three years hard to predict. He believes the government should pay attention to AI's potential benefits while also regulating its development and requiring reports on the training of large AI models. Rebecca Tushnet: Professor Tushnet argues that existing copyright law principles can handle the issues raised by AI and AI image generators, and that AI outputs may not be copyrightable because they do not reflect human authorship. Whether training AI models on copyrighted material constitutes infringement can be judged by reference to existing legal principles, including the Google Books case. She also discusses how artists can protect their work from being used to train AI models, the role of moral and ethical considerations in AI development, and her view that large models, because they use so much data, may be less vulnerable to copyright litigation. Arati Prabhakar: The director of the White House Office of Science and Technology Policy says AI technology is both democratizing and proliferating. Ben Buchanan: The White House AI adviser says the government is trying to strike a balance between AI's potential benefits and risks.


Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more. Casey, I want to talk this week on the show about a technology that is dangerous and that I believe the government should intervene to regulate. What's that? Dots. Dots? Yes. The Halloween candy? Yes. I mean, from what I understand, they're made out of recycled plastic, so I don't know why they're feeding them to children. Yeah.

Have you ever tasted one of those things? Good Lord. I sure did. So as we were getting ready for trick-or-treaters this year, my wife picked up some dots. I wouldn't say it's like a top-tier candy in my estimation. Was all the other candy gone at the store? Literally, yes. It was the only thing remaining at Target. So you bring home these dots. And, you know, I'm testing the candy as one does. So I bite into a dot and a tooth comes out. Wait. Presumably not out of the dot.

It sort of jumbled in with the dot. I feel this hard thing in my mouth, and I realize that I have just broken my tooth on a dot. A dot? Yes. Is it because it's so hard and sticky? Yes. It took off the crown on my molar. And so I had to spend Halloween at the dentist's office getting emergency dental work done. That is horrible. And I went trick-or-treating with half of my face numb. Okay.

It was very spooky. You know, I could recommend actually a lot of good costumes for that. Phantom of the Opera comes to mind. Really anything with a mask that covers at least half your face. Yes, who is this strange drooling man accompanying a toddler? So yeah, that was not a pleasant way to spend my Halloween. You know what's so funny about this is that

Every year there is a panic around Halloween candy. You know, it's like, well, you better, you know, open up every single wrapper and make sure nobody stuck a razor blade in that. And we always laugh. We say, oh, you people need to calm down. You bit into candy and had to go to get emergency dental work done. Yes, yes. It was very bad. And these dots, they're too sticky. We got to do something. And I'm calling on the Biden administration to step in and outlaw dots. Where's the executive order on that? Yeah, yeah. Mr. President.

I'm Kevin Roose, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, I visit the White House to talk to the Biden administration about its new executive order on artificial intelligence. Then, copyright expert Rebecca Tushnet joins to discuss some big developments in the legal battle between artists and AI companies. And finally, an invigorating round of HatGPT. Casey, anything big happen to you this week?

Kevin, I went to Washington, D.C. this week to get some answers about what's happening in this country related to artificial intelligence. Yeah, so you got a very exciting invitation this week to go to the White House to actually talk to some officials there about this new AI executive order. And my first question, obviously, was where's my invite?

But my second question is, what was it like? Because here are the things. I went to the White House once when I was a child. It was part of a school tour. Very exciting. Remember, very little of it. But here are the things I know about the White House. I know it's where the president lives. That's right. I know there's something called the Oval Office and something called the West Wing. I also know that until recently, there was a dog at the White House named Commander who bit people. There's a portrait of Commander at the White House, and I took a picture of the portrait just because it tickled me. Did you get a bite, just like a commemorative dog bite? No.

Listen, let me tell you. From the moment I walked onto the grounds, my head was on a swivel. I'm saying, where is that dog? Because I wanted to meet him and pet him. Because what could be better for the podcast than if I'd been bitten by the president's dog? Did you bring some treats? No, but it's funny you mentioned treats. Because we went on the Monday before Halloween, so Monday of this week.

I walked down with our producer, Rachel. We kind of took in the sights and the sounds. And as we walk onto the grounds of the White House, there are children in costumes everywhere. So I do not see a dog, but I do see a Lego, a Cheeto, a Tyrannosaurus, a Transformer, a lot of Barbies. And everywhere we went throughout the executive office building, the offices of the staffers had been transformed into some sort of, you know...

Hollywood intellectual property is, I guess, what I would say. There was a Barbie room. There was a Harry Potter room. Wow. Our hosts in the White House digital office had transformed their office into something called the Multiverse of Madness. And when you took a left, you were standing in Bikini Bottom from the SpongeBob SquarePants universe. There were bubbles blowing everywhere. And I'm setting this scene because you have to understand, I am there to listen to the president talk about the most serious thing in the world,

And while we were interviewing his officials about the executive order, we're literally hearing children screaming about candy. So it was an absolute fever dream of a day at the White House. So amid all of the shrieking children and the costumes and the multiverse of madness, there was actually like a signing ceremony with the president where he did put this executive order into place. That's right. Yeah. So after we had some interviews at the executive office building, we walked over to the

East Room of the White House, which was very full of people from industry, people who work on advocacy around these issues. And not only did the president come out, but the vice president came out, Chuck Schumer, the Senate majority leader was there. Yeah, it was a big deal. So before we get into what you learned from talking with the president's advisers,

Let's actually just talk about this executive order. So, you know, I spent a long time going over it this week. It's more than 100 pages, a very long executive order. And it's also very comprehensive. It's sort of a grab bag of regulations and rules governing artificial intelligence in all of its forms.

Yeah, and we could dive in any number of places. I think the part of the order that has gotten the most attention is the aspect that attempts to regulate the creation of next-generation models.

So the stuff that we're using every day, the Bards, the GPT-4s, those are mostly left out of this order. But if there is to be a GPT-5 or a Claude 3, presumably it will fall under the rubric that the president has established here. And when it does, it will then have some new requirements. Starting with, they will have to inform the federal government that they have trained such a model.

and they will have to also disclose what safety tests they have done on it to understand what capabilities it has. So, I mean, to me, that is the big screaming bullet that came out is like, okay, we actually are going to at least put some disclosure requirements around the big AI companies. Totally. The industry, I would say, was surprised by this. The people I talked to at AI companies, they did not know that this exact thing was coming, and they were also not sure what the sort of threshold would be where these rules would kick in. Would they apply to all

models, big or small? And it turns out that one threshold for when these requirements kick in is when a model has been trained using an amount of computing power that is greater than 10 to the 26th power floating point operations, or flops. I looked this up. That is 100 septillion flops. Wow. That's more flops than we've ever had on this podcast. Yeah.
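For anyone who wants to check that arithmetic, here is a quick sketch in Python. It assumes the US short scale, on which a septillion is 10 to the 24th:

```python
# The executive order's reporting threshold, as discussed above:
# 10^26 floating-point operations.
THRESHOLD_FLOPS = 10**26

# One septillion on the US short scale is 10^24.
SEPTILLION = 10**24

print(THRESHOLD_FLOPS // SEPTILLION)  # prints 100, i.e. 100 septillion flops
```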

Well, so, right, that was the piece that I think caught the industry's attention. Another big part of the executive order addresses all of the ways that AI could basically exacerbate harms that we already have, like discrimination, bias, fraud, disinformation. There are some specific requirements in it that government agencies are supposed to sort of figure out how to prevent AI from, you know, encouraging bias or discrimination in, for example, the criminal justice system.

or whether AI can be used for processing like federal benefits applications in a way that's fair to people. And to me, the big takeaway from this, the thing that if you know nothing else about this executive order, you should know is that it basically signals to the AI industry from Washington, we are watching you.

This is not going to be another social media where you have a decade to sort of build and chase growth and spread your products all over the world before we start holding hearings and holding people accountable. We are actually going to be looking at this in the very early days of technology.

Yes, that is true. But it is also, I think, proving to be really controversial. Totally. So let's talk about some of the controversies around this executive order, because the provision that you mentioned, this sort of computing threshold over which you have to tell the government that you are training an AI model, has been getting a lot of blowback from people in the tech industry. So describe what you're hearing.

People are losing their minds, like legitimately. Like you can go on X and Threads and see Yann LeCun, who is a major proponent of open source AI, ringing a bunch of alarm bells. And there really is a huge dispute in this community right now around the idea of open source AI versus a more closed approach.

So, you know, briefly: open source technology can be analyzed and examined. You can look at the code, you can usually fork it, change it to do your bidding. And the people who love it say, this is actually the safest way to do this. Because if you get, you know, thousands and thousands of eyes on this, including people who might not have a direct profit motive, you are going to eventually build safer, better tech. You're going to democratize that tech and we're all going to be better off. Right.

And then there are other people who are taking a closed approach, and in that group I would include OpenAI, Anthropic, Google. And they're saying, well, we do see a lot of potential avenues for harm here. And so instead of just putting it up on GitHub and letting anybody download it and go nuts, we're going to build it ourselves. We're going to do a bunch of rigorous testing. We'll tell you about the test, but we are not going to let everyone play with it.

And this debate has been swirling in Silicon Valley for months now, but it really seems to have come to a head over this issue of having to report to the government if you are training a model larger than a certain size. So let's just talk about that, because to me, I don't get the backlash to this. It's not telling AI developers you can't

make a very large model. You're not allowed to. It's not even saying you can't make an open source model that is very large. All it's saying is this: if you're building a model that is bigger than a certain size, 10 to the 26th power flops,

It's just very fun to say flops. It's so fun to say flops. And the next time one of my friends has a huge failure, I'm going to say, it's giving 10 to the 26 power flops. I'm saying, you flop so hard, you're going to have to tell the federal government, bitch.

So it's just saying you have to tell the government and you have to actually tell them that you're doing safety testing and sort of if you found anything dangerous that these models can do. So I would say that people who are objecting to this are not objecting to anything specific that applies to models currently existing. They're just they're mad that at some point in the future, AI developers may be required to tell the government what they're doing, which strikes me as being very similar to what

companies in other industries have to do. You know, if you're making a new pharmaceutical drug and you're trying to sell it to millions of people, you have to tell the government; it has to be approved. So why is this any different than that? So I agree with you, but let me just sort of try to steelman the other arguments, right? Here's what I'm hearing from the folks that are in this open source community.

They believe that what we are seeing is the beginnings of regulatory capture. Just define regulatory capture. Regulatory capture is when an industry sets out to ensure that to the extent any regulations are passed, it gets those regulations passed on its own terms. And it sort of pulls the ladder up so that incumbents always maintain the power and challengers can never compete.

Right. Basically, using regulation to kind of draw a moat around yourself such that smaller competitors who don't have armies of lawyers and compliance people and people to fill out forms for the government, they can't compete with you. That's right. And, you know, just to really lay it out, people are making a really specific accusation, which is that Sam Altman from OpenAI, Dario Amodei from Anthropic, and some of the other big AI players here who are taking this

closed approach. They did this intentionally; they do not actually believe that AI poses any existential risk beyond what we have with just sort of ordinary computers. And they went to the government, they freaked them the hell out. They said, regulate us now. And oh, by the way, here's exactly how to do it. And now they are starting to get what they want.

And the result is going to be that they are the winners who take all and everyone else is left by the wayside. But this is crazy to me because it's not like these companies and the people running them started sort of hyping up the risks of AI recently, right? These are people who have been talking about this, some of them for many years. I mean, Dario Amodei, Sam Altman, these are not people who became worried about AI recently,

just as soon as they had big companies to protect and products to sell. They are people who I think are genuinely worried that AI could go wrong and are trying to put in place some common sense things to prevent that. So I just don't get this argument, this very cynical argument, that the people who are talking about the risks of AI are just doing it to enrich themselves. I agree with you. I think where I have a little bit more doubt in my own mind is which approach do I actually think will lead to...

safety over the longer term? Is it a closed approach where we put very powerful AI in relatively few hands? Or is it one where it is widely available to the public? And to be honest, that is just an issue where I am trying to learn and listen and read and talk to people. But I'm curious if you have a gut instinct on that. I mean, my gut instinct is that it was always going to be regulated somehow, right? AI is too powerful a technology not to invoke

a response from the government and from governments around the world. This is technology that is not just going to be built into chatbots. This is going to be used in defense, in the financial markets, in education. Kids are going to be using this stuff. So clearly there was going to be a point, at least to me, where the government stepped in. Now, that arrived, I think, sooner than I would have thought,

right? Because the government is usually pretty sclerotic and slow moving when it comes to- The US government in particular. Exactly. But I think I was not surprised to see that governments are taking a strong and early approach to AI because it is just such a powerful technology. Now,

I think the debate between closed and open source is basically everyone sort of arguing for their own position, right? The companies that make large models, they do see some of the risks of those models. And I think they're quite genuine in wanting the government to sort of step in and protect against some of the worst case scenarios.

The open source people, I think I struggle to understand what they believe because I don't think they're saying that AI has no risk attached to it. Some of them are. I have VCs who are texting me saying that you can already make a bioweapon just by Googling and that if you think that AI makes that any easier, then you are a fool. This is what people are telling me. I've been using Google for a long time and it has never once told me how to make a novel bioweapon. I mean...

A challenge with having really good safety discussions about this stuff is that I personally just do not try to use these tools for evil. And so it's hard to know what is the case here, but I'm with you. So, okay, this is the debate that's happening in Silicon Valley about the executive order, but let's talk about your visit to the White House, because you actually did have some conversations with some of President Biden's advisors about this. What did they say?

So on this open source point in particular, I talked to Arati Prabhakar, who directs the Office of Science and Technology Policy. And I just said, does the government have a stance on whether it wants to see more open source development or more closed development? And here's what she told me.

If I were still in venture capital, I would say the technology is democratizing. If I were still in the Defense Department, I would say it's proliferating. And they're both true. And that, I mean, this is just the story of AI over and over again, right? Bright side and dark side. And you just have to understand it and deal with it as it is. And the open source issue is one that we'll definitely continue to work on and hear from people in the community about and figure out the path ahead. Yeah.

That's interesting, because it does seem to me like if you had asked me, what is a Biden White House executive order on AI going to look like? I would probably say that it's going to be focused much more on the harms, the potential harms of AI than the potential upsides. But what really struck me about reading this executive order is just how sort of

balanced it tried to be, striking this middle ground between optimism and pessimism: kind of, AI is going to do all these great things, and AI has these potential harms associated with it. Yeah, and I actually put that question to Ben Buchanan, who is an AI advisor for President Biden, about what he was seeing, if there were any green shoots out there that were making the administration say, oh, there's potentially a lot of good that AI can do for us, for the American people. Here's what he told me about that.

I think it's even more than green shoots. I think we wouldn't be trying to so carefully calibrate the policy here if we didn't think there was substantial upside as well. So look at something like microclimate forecasting for weather prediction, reducing waste in the electricity grid and the like, accelerating renewable energy development. There's a lot of potential here, and we want to unlock that as much as we can.

So that seems like, to me, a pretty balanced view of AI. You know, on one hand, it could help us with microclimate forecasting. On the other hand, it could cause some harms, especially when it comes to things like bioweapons and cybersecurity. So is that kind of the vibe you picked up from this White House visit in general, is that this is a White House that is trying to sort of

cautiously but enthusiastically waded into AI regulation? Yes, and I would say that, honestly, this was a pleasant surprise, right? Like, I write about technology policy and proposed regulations a lot, and I don't like a lot of what I see. You know, when he was campaigning to be president, President Biden said that we should get rid of Section 230 of the Communications Decency Act, which would mean, effectively, that Google and every technology platform was responsible for everything people posted on its platform, which I just think would be bad for a lot of reasons that we'd have to get into.

But like, to me, that was the worst kind of tech policy, because you're painting with the broadest possible brush. You're ignoring any positive use cases and you're just sort of, like, legislating with a giant hammer. This is not that approach. These are people who have done the homework, who have been very thoughtful. They still have a lot to do. Again, the policy reads as very sweeping; what it means in practice, I think we'll have to see how it plays out. But there are good ideas here.

So, I guess my big question about this executive order is like, is this enough, right? This is a big sweeping executive order, touches on a lot of different parts of AI and a lot of different parts of the federal government. But I also remember a time not too long ago where you and I were talking about sort of these existential threats from AI.

these kinds of near-term scenarios whereby AI would get so powerful that it would start to, you know, displace millions of people from their jobs or, you know, improve itself recursively in a way that would allow it to kind of like take over and potentially, you know, wreak havoc on humanity. Like these things that did not seem super far-fetched to us just a couple of months ago

And now I'm hearing you talk about the need for sort of balance and trying to find the green shoots of what AI could do. So has your view changed on AI or has something in AI itself changed in a way that makes you less nervous? And do you actually think that more regulation is needed?

Well, let me take the first question first. You know, like, has something changed that has made me less nervous? I kind of go back and forth on this. It depends on the day. Sometimes I'm using GPT-4 and it does something so good that it is spooky, in a way that makes me think, oh my gosh, the future is going to look so different from today. What do we do now? But then, like, a week will go by and my everyday life looks the same as it has for a while. And I think, well, maybe society is actually just sort of adapting to this, and this isn't quite the disruptive change that I was thinking.

It's very hard to know in the moment what the one to two to three year future of all of this looks like. And so I try to just keep my eyes focused on like, well, what happened today? That's kind of the first part of it. The second part of it is like the sort of mythical future GPT-5 and all the equivalents from all the other companies. We just don't know how good they're going to be. Like what we know is that there have been massive leaps in each successive version of these models, right?

What does the next massive leap look like? As humans, we're really bad at conceiving of exponential change. Our brains think linearly. And so if we're one step away from an exponential change, I'm just telling you, it's like my brain is not good about understanding all of what that is going to mean. So I don't want the government to get so far out ahead of things that it is prevented from doing all the things that Ben Buchanan just talked about.

like helping to address climate change, for example, using the power of AI. If the government could do that, that could be a great thing. I don't think we need to slam on the brakes so hard that we don't allow for the possibility of that. But do I want the government saying, oh, if you're going to train the largest language model yet, we'd like you to tell us?

I do lean on the side of, like, yes, let's tell someone. I want someone paying attention to this. So that's kind of where I am. Where are you? Well, I think one challenge with regulating AI right now is that it is very hard to regulate against theoretical future harms. If we know one thing about the history of regulation in this country, at least, it's that often the biggest regulations are passed in the wake of truly horrendous damage, right? It took the financial markets collapsing in 2008 for Dodd-Frank to be passed to regulate the banking system. You know, a lot of our labor laws and labor protections came after things like the Triangle Shirtwaist Factory fire, when people died because there were not, you know, adequate safety protections at their workplace.

Typically, something very bad happens. People either die or are badly harmed. And then regulators and legislators step in, pass new laws, write new regulations, try to get things under control. So unfortunately, I think that's going to be true of AI as well. I think this is sort of a good stopgap measure, addressing some of these potential future harms. But I actually don't think the real, true, good, robust regulation will arrive, unfortunately, until something quite bad happens with AI.

I think that that is true, but there is still reason to hope. This executive order, for example, talks about using the Department of Commerce to try to develop content authenticity standards, for the very meaningful reason of wanting to ensure that when the government communicates to its citizens, the citizens know that the communication actually came from the government. That's kind of an existential problem for the government.

It's not a horrible problem today, but it might very well be in a few years. So the government is getting ahead of that. And the hope would be, well, maybe they're able to develop some authenticity standards so that when this stuff becomes more serious, we are prepared, right? It does similar stuff around the possibility of bioweapons. So I do think the smart thing in here is they're trying to identify like, well, what sort of seems like it might be easy to do with a much more powerful version of this thing and start to develop some mitigations today.
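For the technically curious: the core idea behind content authenticity is cryptographic verification of a message's origin. Here is a minimal illustrative sketch in Python using an HMAC with a hypothetical shared key. Real provenance standards (for example, C2PA-style approaches) use public-key signatures and embedded metadata rather than a shared secret, but the verification logic is analogous:

```python
import hashlib
import hmac

# Assumption for this sketch only: a shared secret key. In a real
# authenticity standard, a public/private key pair would be used so
# anyone can verify without being able to forge.
SECRET_KEY = b"hypothetical-signing-key"

def sign(message: bytes) -> str:
    """Produce a tag proving the message came from the key holder."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag; any tampering with the message makes this fail."""
    return hmac.compare_digest(sign(message), tag)

announcement = b"Official notice from the agency."
tag = sign(announcement)
print(verify(announcement, tag))         # True: message is unaltered
print(verify(b"Tampered notice.", tag))  # False: message was changed
```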

Right. And I think what will be interesting to see is not just how the U.S. regulates this, but also how the European Union, which is really, I think, ahead of the U.S. when it comes to actually trying to regulate AI. They have this AI Act that might get adopted as soon as next year. And then there's this big AI safety summit that happened in the U.K. this week, where a bunch of AI researchers and executives and industry people and various government officials talked about some of the more

existential risks. So I think it's quite possible that Europe gets ahead of the U.S. when it comes to regulating AI and sort of sets the de facto standard, sort of the way that it's been happening with social media. Yeah. All right. So that is the executive order on AI and your trip to the White House. I'm glad you got to go. Was it everything you hoped it would be? I mean, look, here's the thing. Not to, like, stan for the federal government, but...

But when it wants to, the government can be pretty frickin' majestic. You know, as a kid, like, you ingest so much mythology about, you know, American history and democracy and everything. It's like, okay, now you're in the room seeing it happen. So, yes, at the risk of sounding cringe, yes, I did enjoy my trip to the White House and watching democracy in action. Will you wear a damn tie next time? I will wear a tie next time. Actually, I have to say, our producer, in what was a transparent effort to get me in trouble, asked one of our minders at the White House,

don't most people wear a tie here? And the man looked very uncomfortable because I think he wanted to not embarrass me, but he was like, yeah, pretty much everybody wears a tie. Well, good for you. You've embarrassed the Hard Fork podcast in the hallowed halls of democracy. What is wrong with you? I don't know. The shirt I was wearing, I was like, I didn't really have a tie to go with that shirt. Did you have a belt? Of course I had a belt. Were you wearing shoes? I was wearing, yes. Were you dressed as the QAnon shaman? Did you have a Viking hat on?

My God. I...

I didn't think enough about it. And I do feel bad. And I want to apologize to President Biden that I was not wearing a tie. Wow, that's the last time you're getting invited back. Hey, White House, if you want someone to wear a tie next time you invite a representative from the Hard Fork podcast, invite the real journalist. When we come back, Harvard Law School professor Rebecca Tushnet on why AI image generators may be here to stay, whether artists like it or not.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai. I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.

Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

So Casey, we've been talking a lot on the show about AI models and copyright, this issue of whether artists and writers and other people whose works are sort of ingested by large AI models have any recourse when it comes to getting paid or credited, or even potentially suing the companies that make these models. Yeah, this feels like one of the big questions in AI right now. We're using these tools and thinking, on some level, I actually helped make this thing without my consent.

Where's my cut? Totally. And it's been sort of a cloud hanging over the entire AI industry. And this week, we actually got an update on how the legal battle is going: a case brought by a group of artists, including Sarah Andersen, who's a cartoonist we interviewed on the show many months ago. She and some other artists

sued Stability AI, the company that makes the Stable Diffusion image generator, along with two other companies, Midjourney and DeviantArt. And wait, by the way, I think we should say, I think this is the first known incident of two Hard Fork guests being involved in litigation. Because we did have

Stability AI CEO Emad Mostaque on here. True. So this case, Andersen et al. v. Stability AI et al., has been making its way through the courts. And this week, a judge made a pretty significant ruling. On Monday, the judge dismissed the claims against Midjourney and DeviantArt, two of the companies that have been sued, saying these claims are defective. Which is one of the harshest things a judge can say to you, by the way, is that your claims are defective.

Totally. So some of these allegations were dismissed because the artists' works weren't actually registered with the copyright office. But there was one claim that the judge did let stand, which is this direct infringement claim against Stability AI. The judge says, basically, you have 30 days to go back and clarify and sort of refile and amend this complaint. But

Basically, a big win for the AI companies because most of the claims brought by these artists were dismissed. Yes, on one hand, that is true. But on the other, the core claim, the one that you mentioned at the top of this segment, is allowed to go forward. And so we are going to see these two sides hash it out at least a little bit about whether the artists have been wronged here in a way that can get them some money.

Totally. So I have just been fascinated by this whole area of law recently because this does seem like kind of the original sin of the AI industry in the eyes of a lot of creative workers is that, you know, the way you build these models, whether they're image generators or language models or video, uh,

you know, generator models is you take a bunch of work, probably much of which is copyrighted, you feed it into these systems, you train the model, and then you can produce outputs that mimic the work of living artists. And I think, you know, understandably, a lot of people are upset about

that. And so this question of like, is this legal? Is this protected under our copyright doctrine? Or do we need some kind of change to the laws to better protect artists and creative workers? That does seem like a really central question in the world of AI right now. That's right. And so that's why we said, Kevin,

We need a lawyer. Yes. So we decided to bring in Rebecca Tushnet. She is a professor at Harvard Law School. She specializes in the First Amendment, intellectual property, and copyright law. I also read, according to her bio, that she is an expert on the law of engagement rings. Which, unfortunately, we ran out of time before I could ask her all my questions about that. But maybe for a future segment. Yeah, we'll have her back to talk about the engagement ring legal issues. I don't even know where you'd start on that.

Well, I recently went through a messy divorce. That's a joke. Okay, so let's bring on Rebecca Tushnet. Rebecca Tushnet, welcome to Hard Fork. Thanks for having me. So before we get into talking about this specific case, I want to just understand how a copyright law expert thinks about AI and these AI image generators and also these language models we've been hearing so much about and all of the copyright questions that have come up

around them. So when you saw things like ChatGPT, Stable Diffusion, Midjourney, and DALL-E start to sort of rise to prominence last year, what did you think?

So I thought that copyright had the tools to handle this, that they're pretty conventional questions. On the other hand, you know, if people decide that we need something new, we've changed copyright laws before. So it's quite possible that we could fruitfully get a new law. But right now we do have established principles.

And I don't think that they break when confronted with AI. So that totally surprises me, right? I feel like when we've talked about this on the show, it has been in the context of, wow, this seems really new. But what about it struck you as conventional? So in terms of whether you can get a copyright for the output, we do have a history of saying, okay, at what point does a human being's use of a machine, you know, break the connection between the human and the output? And my view is that

that a lot of AI output should be uncopyrightable because it doesn't reflect human authorship, which, you know, we've rarely considered before, but, you know, have sometimes had to decide. For example, what about a photograph? And, you know, if you're giving a copyright in a selfie, is that the same thing as giving a copyright in the footage from a security camera that's running 24-7, right?

Although you sometimes do have to draw lines, that's not unknown to the law. And we can just decide what our rules are going to be without really disrupting anything, in part because, you know, most of the time it doesn't come up,

whether a human is involved enough. So at the risk of derailing, I am just super fascinated by this question. So I can sort of see your point of view. If I just type the word banana into DALL-E and it produces a banana, I can see the argument that I didn't really have a lot to do with any of that and probably shouldn't be granted a copyright. But these

days, people are writing these meticulous prompts. You know, it's a banana that is dressed like a detective in a 1940s noir movie, but he's at Disneyland, right? And the output of that actually feels like it did have a little bit more human authorship in it to me, but I'm not a lawyer. Like, in your view, is that also the same thing?

So I guess what I would say is I'm still mostly of the opinion that the prompt alone shouldn't count, although you can find people who disagree. But here's my pitch, which is you often get a choice of multiple outputs that look quite different from each other. And so I have

Two questions. First, you know, are all of them the same thing or does the fact that they look different show that, in fact, the prompt just didn't specify enough to be firmly connected as a human creation to the output? And then the second question I have for this point of view that the prompt should be enough to get copyright is, OK, so what about the ones you reject? You're like, no, that's not what I wanted. Are they still yours? Yeah.

If it wasn't within your contemplation, like there's room for accident and serendipity in human creation. But there's also a point at which the serendipity is no longer yours. Right, right. And to me, the fact that you get, you know, three very different looking people

suggests that the serendipity is on the machine side. That's interesting. Super interesting, but not what this case is about. Yeah, so that's the copyright issue with the outputs of these models. But this case, the Stability AI case, which also looks at

tools like Midjourney and DeviantArt, is about the inputs to these models, the data that they are trained on. And the core question of this lawsuit is basically, does training an AI model on copyrighted material, whether that's images or something else, count as infringement?

And I'm curious what you make of that argument, because that's something that I've heard from artists, from writers who are mad that their books were used to train AI language models. What are the copyright implications of how these models are trained, as far as we know? Again, you know, my view is we actually have a set of tools for dealing with this. And of course, you can disagree with them. But the background is, of course, the rise of the Internet, and

Google looming large over everything. So Google, of course, made massive copies of lots of stuff, including things that weren't put online. So that's the Google Books Project. And the courts came around to the conclusion that this is basically all fair use.

Now, there are things you can do that are not fair, just to be very clear, right? But Google, for example, with the book project, doesn't give you the full text and is very careful about not giving you the full text. And the court said that

the snippet production, which helps people figure out what the book is about but doesn't substitute for the book, is a fair use. So the idea of ingesting large amounts of existing works and then doing something new with them, I think, is reasonably well established. The question is, of course, whether we think that there's something uniquely different about LLMs that

justifies treating them differently. So that's where I end. So I think this is an interesting analogy to think about for a minute. Like if I'm hearing you right, you're saying when you think about what Google does, it creates this index of the web, right? It looks at every single page. And in many cases, it is making copies of those pages. It is caching those pages so that it can serve them up faster. That is all intellectual property of one sort or another. And then you enter a

query into Google, and it spits out a result which takes advantage of that intellectual property without reproducing it exactly. I think the question for me is, is that truly analogous to a situation where I'm a very popular artist, people love to type my name into Stable Diffusion, you get images that look like my life's work, and I get zero dollars for that? And so part of the answer is,

well, is the output actually infringing? So if it's not, then no. And if it is, then actually I want to start asking questions, why and who's responsible for it? So there's lots of circumstances where, for example, people can use Google and say, I want to watch Barbie. And

Although Google has made reasonable efforts to make that not the first thing that you get, you know, it's not impossible to figure out how to use Google to watch Barbie without authorization. To find like a bootlegged copy that I'm not paying for. Yeah. We have a robust system for attributing responsibility to the person who, you know, tried really hard.

to find the infringing copy on Google. So there are definitely some principles of kind of safe design, but the fact that they aren't perfect really shouldn't be the end of the question of who's responsible. And the more you get someone saying, like, I tried really hard and I was able to create something that looked like, you know, Sarah Anderson's cartoons after, you know, a 1,500-word prompt, I'm thinking that's on you.

So let's get to some of the specifics in this case. So there were a number of different claims made by the artists who are suing these AI companies. One of them is this argument that these models are basically collage tools,

that their images, their copyrighted works get sort of stored in the model in some compressed form, and that this actually is a violation of their copyright because they're not truly being transformed. They're just sort of being turned into these sort of mosaic collage things on the other end. Now, the companies and people who work in AI research have said, like, this is not actually how these models work. But this is the argument that the artists in this case are making. What do you make of that argument?

It's a little perplexing. I am also not a programmer, but it does sound fairly consistent when you talk to them that, no, there aren't pictures in the model. All right. There's a whole bunch of data. And, you know, there are sort of these unusual occurrences, usually when the data set contains like 500 versions of Starry Night, where, you know, it might get pretty good at producing something that is a lot like Starry Night. But

for, you know, the average image, it's not in there and can't be gotten out no matter how hard you try. So I would say, you know, in some sense, though, it doesn't really matter in the traditional fair use analysis, because courts have generally said, you know, if you're doing something internally that involves a lot of copying, but

your output is non-infringing, then that's a strong case for fair use. It strikes me we've been talking a lot so far about what is not a copyright violation. It might help me just to remind myself: what is a copyright violation? Like, give me some cut-and-dried cases of, oh yeah, that's against the law.

So when somebody hosts a copy of Barbie and streams it to all comers, if they do that without permission, there's going to be a problem. Right. Copyright, at least when it was first conceived, is about literal, identical copies of something that you do not own that you are directly profiting from. Right.

And then we've expanded it as well to cover the idea of derivative works, which is a contested category. But the basic idea is, you know, if you're the author of a book, you should have the right to make a movie or a translation of the book; that's your right. A lot of Kevin's articles have been described as derivative works. I'm not sure if that's legally true, but I just read that online.

So, Rebecca, in this case against Stability AI, the court dismissed a bunch of the claims from the plaintiffs just based on kind of procedural standing grounds. Some of the works that they said were copyrighted actually weren't registered. But the one claim that the court did not dismiss was this direct infringement claim against Stability AI. And that really goes to this question of fair use, which is the legal doctrine that allows

people to use copyrighted material without a license in some circumstances. The AI companies have argued that basically what they are doing is protected under fair use, and the artists have disputed that, and that part of the lawsuit is being allowed to move forward. So notwithstanding everything else that the court ordered here, isn't one takeaway that artists

can still contest fair use, that they can still pursue copyright claims based on the use of their art as training data for these AI models?

So this is the classic thing, you know, can you sue over this? Well, it's America. You can always sue. Can you win? That's a very different question. And can you afford to litigate? A completely different question. But also, this is still very early days. The direct infringement training part of the claim just requires a different fair use analysis than the other claims, which in general were about the outputs.

And so I would say nobody should really rest on their laurels right now. I was really struck a few weeks back when OpenAI licensed some old articles from the Associated Press. Presumably, many of these articles were already online and could have been scraped by OpenAI for free and used to train their future models. If you're a lawyer for OpenAI and they say, we want to license that data,

As a lawyer, are you thinking, hmm, this could create a perception that this work has value and that we should be paying to license all of it? Or are the laws robust enough that it can do that as a goodwill gesture without incurring any more liability?

Look, people will definitely say, oh, you licensed this, this means you have to license everything. But the law has historically not been receptive to that argument, because litigation is expensive. So what courts in other fair use cases have said is, just because you were willing to negotiate to avoid a really expensive lawsuit

doesn't mean that it isn't fair use. It's just that fair use can be expensive to litigate. And so it's reasonable to license even if you didn't have to. The question is still for the people who won't license or, you know, who you can't find, is that fair use? And if you are an artist who's following along with these cases involving generative AI systems,

And you're thinking, well, I want to keep my work out of these systems or at least be paid some compensation when my work is used to train these systems. Do I have, in your view, any legal protections or would we need to sort of pass new laws and amend some of these fair use provisions for me to have any recourse?

Well, what I would say is you're seeing this rise of voluntary opt-outs, and that's very similar to what developed with Google. So Google respects what are called robot exclusion headers, although it's probably fair use to scrape for many purposes. They still won't do it.
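For reference, the opt-out mechanism Tushnet describes here is the robots.txt convention: a plain-text file at a site's root that compliant crawlers check before scraping. A minimal sketch of what an AI-training opt-out can look like (GPTBot is the user agent OpenAI has published for its training crawler; the example domain is hypothetical, and compliance is voluntary rather than legally required):

```text
# robots.txt, served at https://example.com/robots.txt
# Block OpenAI's published training crawler from the whole site
User-agent: GPTBot
Disallow: /

# Leave ordinary crawling (e.g., search indexing) unaffected
User-agent: *
Allow: /
```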

And so, you know, I think a development like that is really powerful, even though it's not based in any legal requirements. So I would say there's definitely things you can do in terms of getting paid. I mean,

The classic thing about this is only publishers with big piles of works can ever hope to get paid because it's just not worth it to license on an individual basis. You know, at the same time, we're starting to see companies like Adobe put out models that do compensate artists. I think that...

Right now, even if there isn't a strong legal case to sort of have to use a tool like that, it does seem like there is a moral and ethical case to use tools that essentially have the permission of everyone involved. And so I wonder if maybe the long-term future here is just that we have to rely more on moral arguments and shame to get the world we want than these copyright laws that are less well-suited to the purpose. Yeah.

Here's the thing. I'm extremely skeptical about these models because, again, if they're done by the big publishers, they are not in the business of actually delivering most of the money to the authors or the artists. Because the fact of the matter is, a lot of the time, the image will not look like anything in the data set. So you could sort of randomly attribute, I suppose, or you could pass it through, you know, the fraction of the time that it looks close to a particular image. And I would just say, you know,

Are you going to be able to go to Starbucks on that money? I wouldn't place too many bets. There are situations where, for example, if you just train entirely on one artist, that might well be different. And...

That's a design choice. And right now there's a case proceeding brought by Westlaw over the copying of its headnotes, where they write their own summaries of a court decision. And the court said, you know, we're going to go to a jury on that.

And the reason is Westlaw owns the set on which things are trained. But that's also kind of to make my point that these licensing deals are not going to help individual authors. The people who wrote the summaries at Westlaw do not see any more money, even if Westlaw prevails on this. So in some sense, the bigger your model is, the more data it was trained on, the more

potentially protected you are from some of these claims. It's sort of a strange incentive that it sets up where if you want to win lawsuits brought by individual creators or publishers, you should just make your model as big as possible and slurp up as much data as you can because then they can't come back and say, hey, that looks a lot like the specific thing that I made that is protected. So, you know, I see why you say that's strange, but in fact, it's exactly how you would make a general purpose tool.

So Photoshop being useful for lots of different things is, you know, more clearly a neutral tool than something that's like, well, here's a program that will draw Disney characters. Right. Or counterfeit money or something like that. That would be less protected. Whereas you can use Photoshop to draw Disney characters and try to counterfeit money. But because it can also do all these other things, the courts are less likely to see that as an infringement. Is that what you're saying?

Yes. Okay. Got it. And we will be trying to counterfeit money later in the show, so stay tuned for that. Curious to see how that works out. Now, I'm not a lawyer, but I feel like I have a pretty good grasp of one of the issues that is at stake here, which is, you know, who does the liability fall on? So if I'm

using Photoshop and I create a counterfeit picture of money and I print it out and I try to use it at a store. That's not on Adobe for making Photoshop. That's on me. And that is one of the arguments that you hear from these AI companies: we just make the tools. How users use them can be illegal or not. But either way, we are shielded. Is that a sound legal argument?

In general, yes. And so, you know, some of my questions are about the tweaked models that create infringing material or people are making, say, to, you know, generate porn. But in general, they are taking the models and then tweaking them themselves to do that. And, you know, that's on them. Well, what I'm hearing is that for so long in our society, the artists and the writers have been living on easy street. But now finally...

Along come these new technologies to take them down a peg, and they're actually going to have to work for a living. So sorry to the artists and the writers out there. So can I just say one thing, which is that Cory Doctorow has this line about, you know, the problem is capitalism. That is, giving individual artists, you know, more copyright rights is like giving your kid more lunch money when the bullies take it at lunch, because the bullies are just going to take

all of the money you give, right? You can't solve a problem of economic structure by handing out rights to somebody who doesn't actually have market power to exercise because the publisher is still going to say, well, you know, if you want to publish with me, you got to give me all the rights. And you will say, I would love to be in print. So you'll do that, which

Which is why I think we need to talk about, you know, how we pay artists generally rather than thinking that, you know, we can fix it with AI. Right. Right. Well, fascinating. And I hope we can have you back if the courts do upend our entire fair use doctrine and push these companies out of business. Or if we get into any sort of legal trouble. Yeah.

Yeah, any copyright issues, we'll have you on speed dial. I'm a lawyer. I'm not your lawyer. Okay. Not yet. Although I did just Venmo you a dollar. So I think now, officially, you are my lawyer. This conversation is privileged. Privileged, yes. Rebecca Tushnet, thank you so much for joining us. Thank you so much. Thank you for having me. Motion to adjourn. Motion to adjourn. Is that a good joke?

When we come back, it's time for HatGPT. This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us slash AI. Casey, what's your favorite Halloween candy? Um, I think at the risk of being a little controversial, I really love a York Peppermint Patty.

Wow. How old are you? Wait, is that considered an old- I like a Werther's. I like a nice, hard Werther's candy. Is that considered an old person candy? I think so. Look, it's chocolate and it's creamy and it's minty. I mean, that's- I've never been offered a York peppermint patty by anyone under the age of 70. I'll just say that. You know, at the old Facebook offices, they had a big jar of them. And so whenever I would go down there, on the way in and out, I was always like grabbing a couple of peppermint patties. Wow. And that's why you're captured by industry. Yeah.

Supply to you. Never bite the hand where the peppermint patties come from. Do you think they had like a secret dossier on you that was like Casey Newton from Platformer loves peppermint patties. Let's get a big bowl out so it'll be more favorable to us. Those places buy so many candies and so many foods. They don't need to bother having a dossier. You walk in, they're like, oh, what's your favorite food? Lobster bisque? Yeah, we have that. You know.

Speaking of candy, Casey, it is time once again for our favorite game. It's time for HatGPT. Pass the hat.

You know, we're on YouTube now, Kevin, and one of our wonderful listeners commented, I'm so excited because I want to see if there's actually a hat for HatGPT. And now we can actually just show, indeed, that there is a hat. There is a hat, GPT. We did also get some YouTube comments saying that this looked like a budget hat that was not professionally designed. And to that, I would like to say you are correct. This is something I made in about five minutes on Vistaprint.com.

And I think I paid like $22 for it. So if anyone wants to make us a better hat GPT hat, our inboxes are open. Absolutely. And, you know, hopefully the hat will become more and more elaborate and ornate over time. And that's how you'll know that the show is healthy and thriving. Yeah. Eventually it'll be like a 10 gallon Stetson. That's what I mean. That's what I want.

HatGPT, of course, is the game where we draw news stories about technology out of a hat and we generate plausible-sounding language about them until one of us gets sick of the other one talking and says, stop generating. That's correct. All right.

Oh, this one is sad. Okay. AI Seinfeld is broken. Oh, no. Maybe forever. This one's from 404 Media, and this is about Nothing, Forever, the 24-7 endless AI-generated episode of Seinfeld that has been running on a Twitch livestream for many months. Captivated the nation when it first came out. One of my favorite AI projects of all time, I gotta say.

So this is a report that says that for the last five days or so, one of the main characters of the AI-generated Seinfeld show has been endlessly walking directly into a closed refrigerator. Nothing, Forever is very broken, stuck on a short repeating loop for days. It's also more popular than it's been

in months. So people are tuning in to watch what may be the end of the endless AI-generated Seinfeld. And I just wanna ask, what is the deal with walking into the refrigerator? But you know, there's something beautiful about a show that was famously about nothing being recreated as an AI project that over time just evolved into almost literally nothing

and then got more popular when it did. Yeah, it's a good metaphor. I can't wait until we start just really phoning it in and get mysteriously more popular as the show goes on. Next week on Hard Fork, we walk into a refrigerator. Tune in to see the live stream. Stop generating. Okay, you're up.

All right, Kevin, this next story is a tweet from something called Del Complex, which describes something called the Blue Sea Frontier Compute Cluster, which is a barge. Are you familiar with a barge-based compute platform? So I saw this going around on social media the other day, and it is sort of what they call an augmented reality corporation.

I think it's an art project, but it's basically a bit these people are doing, saying: We are so mad about the Biden administration's draconian executive order mandating that big AI developers report their models to the government that we are going to build

essentially a floating AI computing cluster on a barge in international waters so that we're not subject to any regulations. So, and it says here that there are going to be more than 10,000 NVIDIA H100 GPUs on every platform. So this is literally seasteading for AI. Yes. Yeah. Yes. Well, look, I'm very sympathetic to barge-based projects in general. I don't know if you remember the Google barge. Remember the Google barge? Not really. The Google

Barge was a project in the early 2010s where Google was considering building retail stores on floating barges that would travel from port to port. Yeah.

I'm just picturing like old timey movies where like people are waving at the ships as they come in, but it's just like a giant Google store pulling up with new pixels. I mean, it would have been the thrill of a lifetime if this happened. The project got canceled. I can't imagine why, but for about a year or so, I would just think the words Google barge and would just smile because it made me so happy. You could say it was a sunk cost. Yeah.

Sorry. Well, now I don't want to talk about this anymore. Stop generating. All right. This one says, Joe Biden grew more worried about AI after watching Mission Impossible Dead Reckoning, says White House deputy. This is from Variety. And this is apparently from Bruce Reed, the deputy White House chief of staff, who told the Associated Press that

that Joe Biden had grown, quote, "impressed and alarmed" after seeing fake AI images of himself and learning about the terrifying technology of voice cloning. According to Reed, Biden's concerns about AI also grew after watching Mission: Impossible - Dead Reckoning Part One at Camp David,

which is a movie where there's this like sort of mysterious AI entity that wreaks havoc on the world. Casey, what do you think about this? Did Mission Impossible come up in your conversations with President Biden's advisors? You know, it didn't, although he appeared to deviate from the script when he was giving his remarks because it was supposed to say something like, with just a three second clip of your voice, it could fool your family. And he stopped and was basically like, forget your family, it can fool you. He's like, you know, he says, I look at these things and I think,

When the hell did I say that? That's actually a direct quote. Jack. Yeah. He didn't say Jack, but it was implied. There was an implied Jack.

And everybody laughed. The silent Jack. The silent Jack. Yeah, everybody laughed. This is fascinating to me because it actually like does appear that he grew more alarmed about AI after watching a fictional Hollywood movie about a non-existent AI program. And so I sort of get why people in Silicon Valley want Hollywood to like make more positive movies about AI because if like the president is watching a movie and then all of a sudden decides to start writing some regulations, that feels weird.

Yeah. Here's what I'm going to say. I hope the next Mission Impossible movie is about how Congress managed to pass a law and just really inspire a lot of our lawmakers to do literally anything. It could be a really great thing for this country. Mission Impossible. Privacy regulation. Coming to a theater near you. Stop generating. Okay.

I love this story. Microsoft accused of damaging The Guardian's reputation with an AI-generated poll speculating on the cause of a woman's death next to an article by the news publisher. So this is very sad. The Guardian wrote a story about the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week. This went up on the Microsoft News aggregator.

But because it's Microsoft, and you know it's got that AI now, Kevin, they created a poll. No. And it put it next to this article, and the poll asked, what do you think is the reason behind the woman's death? Readers were then asked to choose from three options: murder, accident,

or suicide. Oh, God. This sucks so much. Like, I sort of vaguely have a sense of how this could have happened, right? Like, Microsoft runs MSN.com and maybe some other news aggregators. It pulls in stories from all over the place. And we know that they are very big on AI right now, so maybe they're slapping, like, AI sort of things around the stories that they're aggregating. But, like, don't do this for stories about people dying. That should be, like, a very easy no.

Yeah, it really should. But, you know, I think we just sort of see this thing over and over again, which is that when newsrooms play around with generative AI and they don't keep a very close eye on its output, then they just find themselves in this ridiculous amount of trouble. So my hope is that this will be the last that we see of these silly AI-generated polls.

Kevin, when you die, do you want me to poll our listeners on how we think it happened? No, no, I don't. That's terrible. I have this theory that like the use of generative AI in news, it just, it always trends toward crap. You know what I mean? Like you have this idea and you think, oh, this is so cheap and it's so futuristic and let's put it into practice and we'll show how innovative we are. And in practice, it always just trends toward crap. So this is a- This is so disgusting.

Oh my God. Imagine you live a dignified life. You accomplish some things. Your obituary gets written up in a major newspaper article.

And then they attach some poll to it generated by AI. Was Casey a good person? Sound off in the comments. I know. Oh, I mean, you know, a Microsoft spokesperson told the Guardian, we have deactivated Microsoft-generated polls for all news articles, and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature. And we are taking steps to prevent this kind of error from reoccurring in the future. Of course, raising the question: what kind of content is appropriate to have a stupid poll next to it?

No, no, no, no, no, no, no. Do not let the humans off the hook for this. Because someone at Microsoft decided, you know what would boost our engagement on these news articles? Slapping AI-generated polls.

It is not the AI's fault that these polls ran. It is the Microsoft person who decided to implement these polls, and we should not let them off the hook for that. All right, and now we actually want to poll our listeners. Who do you think is more at fault? Do you think it was the humans or the AI? Please vote on the AI-generated poll that will be underneath the article. All right.

Last one. Last one. Cruise stops all driverless taxi operations in the United States. This is from The New York Times. Cruise, the driverless car company, said last week that it would pause all driverless operations in the United States two days after California regulators told the General Motors subsidiary to take its autonomous cars off the state's roads. The decision affects Cruise's robot taxi services in Austin, Texas, and Phoenix.

It's also pausing non-commercial operations in Dallas, Houston, and Miami. Now, this came after Cruise's license to operate driverless fleets was suspended by the California DMV, citing an October 2nd incident in which a Cruise vehicle dragged a San Francisco pedestrian for 20 feet after a collision. So Cruise cars, which we have ridden in together, are now off the roads in the entire United States.

What do you make of this story? The Safe Street Rebels have won. Like, this was the future liberals want. And we're now left without these cars. This particular accident is very controversial. My understanding is that the victim of this incident was hit by another car first. By a human driver. Yes. Yes. And so that was sort of the initial problem was this person was hit by a human driver and then... Was sort of dragged under a Cruise car, which was trying to pull over on the side of the road, but ended up dragging this poor person. Horrible story. Horrible story. I think in general, regulators are just very... on high alert for anything dangerous involving self-driving cars.

But this is a big blow to Cruise, I would say, which has struggled to convince people that its rides are safe. There have been a lot of documented incidents of traffic jams caused by Cruise vehicles. I will say that Waymo vehicles that are operating in San Francisco have not been affected by this. They are still out on the roads. I actually took one this week.

And it felt quite safe to me, but I would say there are still a lot of questions about driverless cars. Do you think we are in a sort of moment where regulators are kind of getting nervous enough to shut all this stuff down? Or is this just kind of a speed bump on the way to these cars being more widely adopted?

Is that a traffic pun? Speed bump? Oh, God. No, I didn't. So, look, here's the thing. I haven't talked to the regulators. I don't know how they're thinking about this. I think it's clear that they are enforcing much stricter scrutiny on the self-driving cars than they ever would on these terrifying murder machines that everybody drives around in all day. And honestly, I just hope that it gets sorted out quickly. If for no other reason than, when are San Franciscans supposed to have sex now, Kevin? I mean, this had become such a beloved pastime of citizens of this fair city.

And now, well, if you can't find a Waymo, you're out of luck. It's true. Well, I did take a Waymo this week and I noticed something new in the Waymo, which is that they now come with barf bags. Is there typically a lot of turbulence in these Waymo rides? No, I don't think it's for turbulence. I think it's for drunk people. I think it was a trick-or-treat special. There must be a story behind this. If you have...

Because, you know, if you vomit in an Uber, the driver has to clean it up and they can charge you a cleaning fee. If you're in a Waymo, there's no one to clean up after you. So they got to put the barf bag in there. And if you are the person who vomited in a Waymo causing them to make this policy change, we do actually want to hear from you. That story is just really a rich, rich canvas to discuss so many aspects of society, isn't it? Yeah, but the ride was very smooth. And so I was confused for a minute. I was like, should I be expecting turbulence? Should I be buckling up extra tight? Well, what's going on here?

All right. That's it for Hat GPT. Close up the hat. Casey, do you want to put on the hat? I don't look... Well, you know, I have famously spiky hair, so hats are kind of not really for me. Oh, it looks good. Also, I'm wearing headphones. Yeah. But, you know, I don't know. We do need a bit. We got to up the hat budget for this show. We got to up the hat... What is the hat budget on this show? It's $22 and, like, some cents from Vistaprint.com. Platformer will chip in a few bucks. We'll see if we can get you a decent hat. Yeah. Yeah.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai. What are we doing? What are we doing? Clap. One, two, three. That was, you didn't clap. Because I had a fidget spinner. Clap. One, two, three.

fidget spinner. Guy goes to the White House one time. I've always had a fidget spinner! He's exempt from the clapping rule. Oh my God. Show some respect. Do I have to call you Mr. Newton now? It would be nice. The Biden people sure did. It's not true, actually. They call me Casey.

Hard Fork is produced by Rachel Cohn and Davis Land. We had help this week from Emily Lang. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Rowan Niemisto. Original music by Elisheba Ittoop, Sophia Lanman, Rowan Niemisto, and Dan Powell.

Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Schumann, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

Hi, Kips listeners. Today I'm sharing everyone's favorite lunchtime indulgence, the Double Quarter Pounder with Cheese from McDonald's. It's the go-to that keeps you full and energized for the rest of the day. It's not just a meal, it's a whole experience. You know it's fresh when you feel that heat through the bag. For those of us who know burgers, the McDonald's drive-thru is all about the double QPC. When those burger cravings hit, nothing comes even close.

Get a drip that's as far as your drip when you order a Double Quarter Pounder with Cheese at McDonald's. Fresh beef at participating U.S. McDonald's; excludes Alaska, Hawaii, and U.S. territories.