
S.B.F Goes to Jail + Back to School with A.I. + Self-Driving Car Update

2023/8/18

Hard Fork

People

David Yaffe-Bellany
David Yaffe-Bellany is a technology reporter at The New York Times who covers the cryptocurrency industry. He reported on the collapse of Sam Bankman-Fried and FTX and landed the first interview with Bankman-Fried after FTX's implosion.

Ethan Mollick
Ethan Mollick is an associate professor at the Wharton School of the University of Pennsylvania who teaches innovation and entrepreneurship and writes about generative AI in education.
Topics
David Yaffe-Bellany: Sam Bankman-Fried was jailed for violating his bail conditions (contacting a witness and using a VPN to access the internet). Prosecutors believe he leaked Caroline Ellison's diary to The New York Times in an attempt to influence witness testimony and the outcome of the trial. The episode highlights SBF's pattern of testing the limits of his bail conditions and how prosecutors and the judge have responded. The case involves a large volume of evidence, including emails, messages, Google Docs, Slack histories, and audio recordings, all of which will bear on the trial. SBF's defense lawyers argue that he was simply exercising his right to defend himself and that he believes he can win at trial. Kevin Roose: Discussion of SBF's jailing, the evidence in the case, and what to expect at the upcoming trial. Casey Newton: Discussion of SBF's jailing, the evidence in the case, and what to expect at the upcoming trial, along with his personal view of SBF's conduct.


Chapters
Sam Bankman-Fried's bail was revoked due to alleged violations, including using a VPN and potentially leaking sensitive documents, leading to his imprisonment.

Transcript


This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

It's funny, if you go to like the top podcasts on Spotify, most days it's like, you know, a mixture of like Joe Rogan and his friends, but then it's like sleep podcasts. Have you seen these ones? You're right. We are now below relaxing white noise. It's unfortunate. They've had some really good guests recently. They've had some great guests. Should we do some white noise to open the show today, just for anyone who's having trouble falling asleep and may use hard fork as a sleep aid? Okay. Ready? Shh.

And then can you just loop that for 40 minutes? Perfect. Podcast done.

I'm Kevin Roose. I'm a tech columnist at The New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week on the show, Sam Bankman-Fried goes to jail. We'll tell you why and what's next. Then we go back to school with Wharton professor Ethan Mollick, who tells us what teachers and students need to know about this year in AI. And finally, another wild week in the world of robo-taxis. What happened, and what you told us about our autonomous vehicle episode last week.

Casey, you know on this show, we love to talk about Sam Bankman-Fried. We do. And you know what he's having this week? What? I mean, look, I've had a bad month. Another one? Yeah.

He's been through a bit of a rough patch. So Sam Bankman-Fried is having one of the worst weeks of a string of very bad weeks because this week he went to jail. And here to talk with us about how that happened, what is going on in the ongoing saga of FTX, is my colleague and friend of the pod, David Yaffe-Bellany. David, welcome to Hard Fork. Thanks for having me. Hi, David. So it has been a very active...

few weeks in the case of Sam Bankman-Fried. Sam Bankman-Fried, of course, is the former CEO of FTX, the crypto exchange that collapsed late last year after allegedly misusing a bunch of customer funds. You reported this week that Sam Bankman-Fried went to jail because a federal judge in New York had revoked his bail. So just catch us up on why Sam Bankman-Fried is sitting in a jail cell in New York now and not at

his parents' home in California, where he's been. So yeah, the important context here is that basically, from the moment that SBF was released into home confinement,

he's been pushing the boundaries of what you're allowed to do when you're out on bail. So first, in January, he sends an email and a message to a former FTX employee basically saying, hey, let's talk. Let's, you know, see if we can get on the same page about a few things. The prosecutors were really unhappy about that. They said this kind of reeked of potential witness tampering, that this was somebody who might testify in the case against Sam Bankman-Fried, and that therefore this was totally improper.

Then, like, while the negotiations over whether to tighten his bail terms as a result of that were going on, he was caught using a VPN to access the Internet, which the prosecutor said was a sign that he was trying to kind of evade detection in his online activities. So that was sort of the context of what led up to the last couple of weeks. And can I just do a sort of a clarifying aside here?

When you are arrested and you get out on bail, whether you are a white-collar, very wealthy defendant like Sam Bankman-Fried or someone who is not wealthy or powerful or the former CEO of a company, you have some conditions, right? The judge says, I will let you out on bail, but you have to, in some cases, pay some money or put up some collateral. And, you know, you can't commit any more

crimes, and there are certain conditions for you remaining free on bail. And in this case, my assumption is that because Sam Bankman-Fried was so online, for lack of a better word, like he was messaging people on Signal and playing video games, that like one of the things the judge had said was that in order to remain free on bail, you actually can't

like do any of that stuff. Is that correct? That's essentially what happened. I mean, it came from this sort of interesting contradiction, which is that, you know, his original bail term said you have to stay in your parents' house and you have to wear an ankle monitor. So physically he was confined, but in the world of cyberspace, he could kind of like roam free and do whatever he wanted. And, you know, most criminal defendants would just stay quiet,

you know, try not to upset anybody while they're awaiting trial. But Sam's not a normal criminal defendant. And he was messaging people on Signal, he was tweeting, he was writing Substack posts. And this was clearly really ticking off the prosecutors, and ticking off the judge too. And so what ended up happening is that a whole new set of bail terms was imposed on him that limited which websites he could access. So there was actually a list of maybe 25 or so websites that he was allowed to access, including nytimes.com, of course.

And that was what he was confined to doing. And he was also prevented from reaching out to people who were involved in the case, like, you know, former employees who might be involved in the trial. And so those were the limitations on him starting around last spring. Now, when I hear that I would have to live in my parents' house and could only visit 25 websites, that sounds like a cruel and unusual punishment to me. But that's apparently not the case here?

Well, it's a better alternative than sitting in jail. I think most people would agree. Though maybe you don't, Casey. I don't know what your relationship is. Here's what I'll say. I either want to have all of the internet or none of the internet. If you give me like 1% of the internet, that's going to give me an aneurysm. I'm also not sure if Platformer was on the list. No offense. And by the way, my lawyers are looking into this.

My favorite detail from your reporting on this bail violation is that he was apparently using a VPN to watch a football game. Is that correct? That is true, or at least that's what his defense has claimed. I mean, the prosecution sort of cited the fact that he'd used VPNs on certain dates as, like, evidence of

potentially nefarious activity. And the defense came back and said, no, that was the date of the conference championships in the NFL and then the Super Bowl. And he was just trying to watch football. The reason he needed a VPN is because he'd gotten like an international NFL streaming account when he lived in the Bahamas and he needed to access it from his home in the US. If I was out on bail and one of my conditions of my bail was that I not use a VPN, like

maybe that's one where you just, like, skip the game and catch it on SportsCenter. Like, that's an option too. Maybe you just read the postgame report at nytimes.com, one of your approved websites. Right. So one of the most interesting and controversial wrinkles in this latest twist in the SBF case

involves a story that you published actually in late July that was about basically the diary of Caroline Ellison, who ran the Alameda hedge fund and was also SBF's ex-girlfriend.

And this was a story that reported on the existence of these sort of Google Docs that Caroline Ellison had written about her sort of stress and anxiety running this hedge fund and some details of the relationship between her and SBF.

This story came out and then was subsequently cited by prosecutors as evidence that Sam Bankman-Fried had violated the terms of his bail in talking with the media. So,

David, what can you tell us about this wrinkle in the case and this diary of Caroline Ellison's? You know, we did a story a few months ago noting that people involved in the case had a handwritten diary that Caroline had kept.

and that she had also kind of recorded her feelings and thoughts about various things on Google documents. So we'd known that for a while, and then this more recent story that we did actually had detailed excerpts from the Google documents that she had written. And, you know, this included accounts of her kind of insecurity about her position running Alameda. Was she a good enough leader? Was she really suited to doing this type of job? You know, it sort of reflected a lot of the stress that she was feeling throughout the year.

And it also got into some sort of intimate details of her romantic relationship with Sam Bankman-Fried. You know, they had a kind of on-and-off relationship. And so, you know, reflections on what it was like to be around him after they broke up and that sort of thing. And, you know, all of this is potentially relevant to the case because their relationship is at the heart of the relationship between these two companies. She's agreed to testify against him at trial. So any kind of insight into what she's thinking and feeling is helpful.

and interesting and sort of sheds new light on the case. So that's the sort of background on her writings and what was in them. And prosecutors have said that Sam Bankman-Fried leaked these documents to the New York Times and...

And that was apparently part of the judge's decision to revoke his bail and send him to jail. So what can you tell us about that? Sure. So within 24 hours of our story coming out, the prosecutors submitted a filing that basically said, we've confirmed with Sam's lawyers, they've admitted to us that Sam was a source for documents that were used in this story. And, you know, therefore, we want to put a gag order on him. We want to stop him from talking.

He's already limited in what websites he can access, but we want to prevent him from talking to the media because this is a sort of improper intervention in the case that could sort of interfere with a fair trial, intimidate witnesses, that sort of thing. So that was kind of their first salvo.

So help me understand this, because I'm a little confused here, because on one hand, you have prosecutors who are saying that Sam Bankman-Fried leaked these documents from Caroline Ellison to the media in order to engage in kind of character assassination or witness tampering or, you know, make her look bad ahead of her potential testimony at this trial this fall.

And, you know, I read the story that you wrote. I read the excerpts from these writings. They were kind of embarrassing in the same way that, you know, many of our private writings would be embarrassing if they were, you know, leaked to the New York Times. But they didn't strike me as particularly damning. She wasn't saying, you know, actually, Sam's innocent and I did all the crimes. So if prosecutors are correct in alleging that he did leak these diaries,

why do you think he did that? Was there more to it than just trying to make her look bad? Yeah. So obviously I'm a little limited in what I can say, because this involves, you know, confidential sourcing, but I can tell you what the prosecutors are claiming that he did and their rationale for why he did it. And what they're saying is that these were very kind of sensitive writings, very personal, not the sort of thing that anybody would want to come out publicly.

And that the sort of prospect of similar material coming out, you know, regarding other witnesses could have this sort of like effect of intimidation or cause people to back off. It could also have that effect on Caroline they're claiming because she might think, oh, maybe Sam's got other stuff and I better be careful what I say at trial. I mean, that actually makes a lot of sense to me. It's like nice diary you got there. Be a shame if something happened to it.

So, Sam Bankman-Fried had his bail revoked and was subsequently put in jail. What do we know about the hearing where this was announced? Were you there in the room? Can you tell us a little bit more about what that was like?

So, unfortunately, I was not at either of the hearings that have happened on this issue because I'm stuck out here on the West Coast. But I had colleagues at both hearings. So there's an initial hearing, which everyone thought was just about this gag order that the prosecutors were asking for. But the prosecution comes into the room and immediately changes tack. You know, seconds before the hearing starts, they tell the defense we're going to be asking for his bail to be revoked and for him to be sent to jail immediately.

So there's an initial hearing on that where Sam's lawyers basically say, we haven't had any time to prepare. You need to give us some more time. And so the judge calls a second hearing a couple of weeks later. That was this past Friday. And at that hearing, there's about an hour of argument, both sides kind of making their case. The judge says, all things considered, I'm revoking bail. It's not just because of this recent story. It's because of the initial outreach to an FTX employee. It's

partly this VPN thing, which the judge said kind of showed a mindset of trying to evade limitations. And basically, it was the last straw for the judge. All right. So that's why SBF is in jail. You've also reported that there's been a new filing by prosecutors in this case detailing a bunch of new evidence against him. So tell us about what was in this filing and what that might mean for the trial. Yeah. So one of the things that was surprising about

Sam's bail getting revoked is we're really in the homestretch before the trial. There's not a lot of time left. Trial starts October 2nd, and we're kind of in the final stage of sort of pre-trial wrangling in court where both sides say, you know, this is the evidence we've got, and here's why we think it should be admissible in court. And so there's going to be arguments back and forth about that. And

On Monday, the prosecution filed a long document basically detailing, with way more color than we've ever seen before, what they've got and how they plan to use it. So they said, you know, we've got these three close associates of SBF, including Caroline, who've all pleaded guilty already, and we expect them to testify.

And we've got contemporaneous notes that they kept after conversations with Sam. That includes some of Caroline's writings, including a document titled Things Sam is Freaking Out About. So I'll look forward to learning more about that. So you haven't seen it? We don't know what Sam was freaking out about?

I have not seen the thing Sam is freaking out about. However, it did say in the filing on Monday that the list included things like the bad press around the connections between FTX and Alameda. So we have sort of a vague sense. By the way, if it were me, something I'd be freaking out about, using customer funds to fund my crypto empire. But that would be things Casey was freaking out about. But go on. Yeah.

Well, maybe that's why you're not the one in MDC right now. Well, not yet. Yeah, stay tuned. Anyway, so there are a bunch of other interesting things too. I mean, there are text messages that another high-ranking FTX employee named Ryan Salame sent. He was the guy who donated tens of millions to Republican politicians, and

prosecutors have kind of been circling him for a while. He hasn't been charged, but he's clearly going to come up to some extent. There's also an audio recording of a meeting that Caroline held with Alameda employees right as the companies were collapsing last November, where she essentially confessed and said, I worked with Gary and Nishad, the two other executives who pleaded guilty, and with Sam, and Sam decided that we were going to take customer funds. She said that

pretty explicitly. The fact that that meeting happened, we've known for a long time. I reported on it back then, as did others, but the prosecution has a full recording of it,

with a transcript that was included in this filing. Who recorded it? Was someone wearing a wire? Any of us here would have recorded this. You want to have that for the historical record. It just seems like everything at this firm was being recorded and put into Google documents and voice memos and whatnot. Well, that's the funny thing, right, is that they actually didn't do any record keeping for many, many of the important things that they should have. But when it came down to writing down the crimes, they were all over it.

It was an Alameda employee who recorded it, apparently. I mean, this was a staff meeting. The staff was pretty small, but there were enough people in the room that someone got the voice memos out, presumably. And so, yeah, I mean, it was a pretty damning bit of evidence when it kind of first emerged that she said these things at the meeting back in November when we were all writing about it. And now that they're planning to play that recording in court, it really doesn't look great for Sam. So...

In one of your stories, there was a line that said that so far there have been millions of pages of evidence produced. I don't think you were being figurative. Are there literally millions of pages of documents? And if so, like what is in those millions of pages and how can one man's lawyers go through all of that? And please describe all million. Yeah.

There literally are millions of pages. But it's like, you know, at some point, the prosecution subpoenaed the whole contents of SBF's personal Google Drive from Google. So that's, you know, however many hundreds of thousands of pages. And, you know, it's like all the documentation that

any of the key people in this case ever had on their computers is in play. Their financial documents, spreadsheets, text message histories, Slack histories. They've got some Signal chats, I think, though the fact that SBF was on auto-delete and advised employees to go on auto-delete has been a factor in the case as well. So there could have been even more potentially. And yeah, it's just a...

I know it is in the nature of federal prosecutors to just drown people in charges in hopes that they will reach a plea agreement and just end the whole thing without forcing a trial. And yet that, in combination with all of the evidence that you've been describing for us, David, makes me wonder, why hasn't Sam Bankman-Fried pled guilty? Any thoughts on that?

I mean, look, it's hard to, you know, read his mind on this. And of course, you know, I think probably this is all speculation. But one thing the prosecutors might be hoping is that by forcing him to spend a little time in jail, they might change his thinking about this.

this. But, you know, all the evidence suggests that he's convinced that he has a chance of winning a trial and avoiding any prison time. And, you know, if you believe that, then our system allows you to fight the charges as it should. And so, you know, we'll see what comes of it. I mean, at various points throughout this process, people have pointed out that he seemed deluded about

things. He thought that he could save the company in the days before he was arrested, that he could just raise new money and make the hole disappear. And it could be that some of that optimism is playing into his thinking now. Yeah. Sometimes when you have these very high-profile trials, the defendant becomes a kind of cause célèbre. And

Perhaps they have a fan base that rallies to their side and sort of lobbies in the court of public opinion in the hopes that that might change the outcome. Is there an SBF fandom or constituency left? Are there folks out there saying, this guy's getting a raw deal?

Not really. A lot of people have made up their mind about him. This is something that has sort of come up a little bit in the back and forth over the story we did on Caroline, where SBF's lawyers have argued that one of the reasons that he wanted to provide these documents was because he felt like he's been unfairly maligned and that he should have a right to defend his reputation. That's partly because, yeah, like...

there really isn't like a pro-SBF contingent out there arguing for his innocence. I mean, crypto Twitter has certainly made up its mind that he's the worst villain in the history of the world and that he should be drawn and quartered, essentially. And I think most sort of mainstream legal analysts are pretty convinced that he's guilty as well. So this is shaping up to be quite a trial in October. I imagine that you will be there, as will some of our other

colleagues. This is shaping up to be quite a trial, in the way that, like, the Harlem Globetrotters' next match is shaping up to be quite a basketball game. Like, it's not looking good for the Washington Generals this time. I didn't say it was going to be a close trial. I just said it was going to be quite a trial. So what are you looking for, or what are you most interested to see, in the lead-up to this trial?

Well, one thing I would say, you know, just to caution you on the Harlem Globetrotters comparison, you never know. I mean, this is why we have the system that we do. It only takes one juror to sort of swing the outcome. And, you know, like the prosecution has made a lot of claims about Sam, but they've been sort of claims that haven't been fully kind of fleshed out and tested. And so that's what we'll see in court in October. And it should be incredibly interesting. But, you know, what am I watching for? I mean, look, it's...

I mean, there's just the inherent drama of seeing, you know, three people as like top advisors at his company who were not just his top advisors. One was his girlfriend. The two others were two of his closest friends who lived with him, who'd been by his side for years. You know, that group testifying against him, there's a certain, there's a drama in that that's pretty undeniable and that

will be kind of fascinating to see play out. I mean, also, for someone like me who's been obsessed with this case for almost a year, following every twist and turn, any new revelation of

some new detail and, you know, some document that Caroline had that we didn't know about is kind of thrilling as well. But it's also, you know, a test of, like, can this crackdown that the US government is doing on the crypto industry actually yield results? So that's probably the more important thing. Yeah, I am looking forward to that moment when they all testify against him, though, because I think a very relatable feeling is wanting to see your boss go to prison. And those three are actually going to get to live

that out. Well, Casey, would you flip on Kevin if it turned out that he was embezzling money from Hard Fork or something? I would. I've told Kevin, buddy, you better be walking the straight and narrow because when the cops come knocking at my door, let's just say I have a few Google Docs of my own. Okay? Yeah, David, last question. If I were in possession of like a recording of a secret meeting at which like a

podcast host and newsletter writer confessed to certain federal crimes, do you know any reporters I could send that to? You hit me up on Signal and we can discuss it. Okay. We'll just auto-delete our conversation afterward. David Yaffe-Bellany, thanks so much for coming back. Thank you, David. Thanks for having me. Anytime.

When we come back, we're going back to school with AI. ♪

Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com slash snapdragonhardfork. Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. That tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

Casey, it's mid-August. You know what that means. What does it mean, Kevin? It's back-to-school time, baby. Oh my God, Kevin, I haven't done the assigned reading. What happens at the end of Old Yeller? Nothing good. Oh no. So this year, one of the biggest questions facing schools as they reopen for the fall semester is what the heck do we do about generative AI?

I think this is one of the biggest questions that schools have been wrestling with over the past year. You know, ChatGPT came out when the semester was already underway. It sort of landed like an asteroid out of the sky. And schools really just sort of scrambled to get through the year. And I really thought that this summer was going to be sort of when schools and universities kind of regrouped and put their heads together and figured out how do we actually –

educate people in a world where this generative AI stuff exists? What does homework look like? What does admissions look like? What is the role of the faculty member anymore? And that just seems not to have happened in any big organized way. A lot of schools are still having meetings and organizing committees and task forces to try to figure out what to do about generative AI. And it just seems like there's an area where there are a lot more questions than answers. Yeah.

Sure. And at the same time, I think that that might be okay. The technology is new. We don't totally know what we ought to do about it. And so I think a world where different teachers are taking different approaches and schools are being a little slow in how they craft policies might be to everyone's ultimate benefit.

Right. But I do think this is an urgent issue for schools, especially going into this new school year. And so I wanted to kind of spend some time talking about that. And I wanted to talk to someone who actually does have a clear vision of how AI can and should transform education. And one person who stood out to me was Ethan Mollick.

Ethan is an associate professor at the Wharton School of the University of Pennsylvania. He teaches innovation and entrepreneurship and also writes and thinks a lot about generative AI in the classroom. He's been documenting his experiments with AI. He has a Substack called One Useful Thing, and he has actually come up with a strategy that he thinks could help other schools adapt to the post-ChatGPT world. So I wanted to bring him on today to talk about how schools should be thinking

about generative AI and what they should be doing about it. All right, let's hear what he has to say. Ethan Mollick, welcome to Hard Fork. Thanks for having me. So I first came across your work last year when you and I were both writing a lot about generative AI, and you have sort of become, like, I would say, an AI guru inside the world of higher education. I know you've been talking with a lot of

faculty members and administrators at schools across the country who, it's fair to say, I think are confused and disoriented about what to do about all of this new technology. So I wanted to start with kind of a vibe check. Like, can you just paint a picture for us of what is going on with generative AI at schools right now as we head into the fall semester?

I think the word I'd probably use would be chaos or apocalypse. I think it's just starting to dawn on people what this means. And when we talk about what this means, I think level one, which is what's dawning on them right now, is: oh, God, all my homework assignments don't work anymore. And people haven't started to think about the other implications fully yet. There are a lot of exceptions out there, but generally that's the vibe I'm getting.

Now, this technology has been available since last November. So what has happened between November and now such that some folks in higher ed, or maybe secondary ed, still aren't thinking about this? So I think, for one thing, ChatGPT, the free version, GPT-3.5, still made enough mistakes that it was a little bit easier to ignore. I think the second set of things was that people are used to ignoring technology hype.

So usually hype happens five years before technology comes out, right? So we're talking about Web 3 and you could just safely sit back and be a late adopter. I don't think people realize this was hype for technology that already hit, which is a little bit unusual. And people had to catch up to that. And then I think the third thing is enough alarm at the institutional level inside organizations that people who've ignored this have had to pay attention. So between all of those things, I think it's kind of created this bubble of anxiety and expectations.

Right. Now, one thing we should say is that if you're a teacher, something you can relax about is Web3. You can actually continue to ignore that one. But the AI stuff, you should probably pay attention to. Right. Well, and I think there's also this additional layer in education of like, well, these tech companies have been showing up for decades now and telling us that their tools are going to transform the way we teach in the classroom, and these Chromebooks and personalized learning software. And a lot of that has just been sort of empty hype. And so I think there was a reluctance on a lot of educators' part to take this one seriously. So can you actually tell when a student has used AI to do their work?

No.

I mean, there's some nuance, right? So the short answer is no. The long answer is AI use is undetectable, and the detectors falsely flag people who don't speak English very well. It's terrible. So you can't use detectors. You can't ask AI to detect AI. It's just going to lie to you. Like, every instinct we have about how to stop plagiarism doesn't work.

So you can change how you teach, right? You could do blue book assignments. You could have people do oral exams. There are other ways of checking, but the old homework assignment is basically cracked by AI. I wonder if you could just take us inside one of these kinds of faculty meetings where professors and school administrators are trying to figure out how to adapt to the post-ChatGPT world. What are some of the common sort of things that you hear brought up

in this world? And what are some of the sort of common objections, like, why shouldn't we be changing our policies and procedures? So the first thing is the same problem everybody is suffering from with generative AI, which is that there's no instruction manual. We're all figuring it out as we go.

There's literally papers coming out regularly about what kind of questions should you ask AI to get the best answers. We don't know whether it's really good at these tests or whether it's faking being good at these tests. So you come to a faculty meeting and the first 20 minutes are debunking rumors and then supporting others about how it learns and what it knows and is it stealing information and what's ethical. So there's a lot of that kind of discussion you'd expect among academics.

Then there is sort of a discussion, usually a punitive one, about how you stop plagiarism. And then there is this sort of more advanced discussion about what do we do now, right? What do we tell our students? What does good instructional design look like? And I think that's the more profitable part of the conversation, but you have to get through the fact that nobody knows anything, you know, including, I hate to say, all of us in the room. We're still kind of making it up and learning by experience, right? And trying to tell other people based on that experience, which is quite challenging.

So, okay, let's talk about not just why schools are tying themselves up in knots about this, but what they should actually do. So you, me, and Casey are starting a university tomorrow, Hard Fork University. It's a great school. Not accredited, but a great school. Yeah.

And we get to craft the policy about generative AI and how it should be used. And we not only get to craft the policy, but we get to tell every instructor how to teach their class using this stuff in the best way possible. How would you run a school if you could make all the decisions about generative AI?

So the cool thing about education is we're in for a couple rough years, but I actually kind of have a sense of what the future looks like because we actually have a lot of research on how to teach. And it happens to align really well with AI. And the secret is pretty simple. It's two concepts called the flipped classroom and active learning.

So the idea of a flipped classroom is, rather than learning the material inside of class and practicing it outside of class, you learn the material outside of class and practice it inside. So the basic version that you might have seen is people watching videos of a math lecture or Khan Academy outside of class. And then in class, you work on problem sets together. When you have trouble, the teacher comes around and helps. Some people present to the class about that. It's all about

putting knowledge to use. It's all about challenging yourself, pushing yourself into an active learning environment. So it flips the classroom experience: instead of focusing on lecturing inside class and doing assignments outside, you do the reverse. We've known this is a really great way to teach for a long time, but the two problems have been, what do we do in a flipped classroom setting? Do we just give people textbooks to read? Do they watch videos? That's never worked

particularly well. Now we have AI tutors that will be able to fill in some of that basic instructional piece. And then in class, how do we design enough engaging experiences for people? Well, it turns out AI is really good at helping us create engaging experiences. So I actually think the classroom of the future looks remarkably like the classroom today, but you reverse what you're doing in it.

And I think we could get a lot of the way there. Tell us what an engaging experience designed by an AI looks like, because I don't know that I would say I've had a lot of engaging experiences interacting with the bot so far.

So I don't think it's about an engaging experience with the bot. I think that it is about putting knowledge to use. The AI is remarkably good at taking theory and giving you practice from it. So if you say, I want to have an in-class activity for fifth graders to teach about entropy, and I've actually done this, it came up with a really cool idea of an in-class entropy activity involving balls and people standing still and running around. It actually tied really nicely into entropy. And then it suggested a good classroom discussion. Then it actually built a game,

coded the whole game for me, that students can play with that activity. Now, I would want teachers to have input into that, but that's an active way of learning about material that you learned outside of class and putting it to use in class, and checking whether people understand the behavior, having people discuss it, having people act on it. So it's not about engaging with the AI. It's about having the AI help you figure out ways, as an instructor, to engage with the students. Yeah. Now, you have been using

generative AI in your classrooms at Wharton since ChatGPT came out last year. What are some findings from that experience that might help other professors who are thinking about how to use this stuff in their classes? And I was actually using it a little bit before. So I was impressed enough by GPT-3, which sort of wrote like a fifth grader, to have an assignment to sort of show them the future where they had to cheat with AI. And it was very funny because halfway through the cheating assignment, when half the students had turned it in, GPT-3.5 came out and it

definitely changed the game there. So I have all kinds of things that people do. I make AI mandatory. I teach an entrepreneurship class, so it's a little easier for me. If I was teaching English composition, I would probably be thinking a lot more about how do I do more in-class work where people are writing in blue books because you still need to learn to write and practice, and I can't flip the classroom that way. But in terms of my classes, it's been great. I actually literally require people to do at least one impossible assignment

that's in the syllabus now. So if you can't- What does that mean? So if you can't code, you have to write working software. If you've never done design work, I expect a fully built out graphic design product. It is literally, you have to do something that you think you cannot do. Every assignment has to be critiqued by at least three famous entrepreneurs through history. And that might sound like fun because it teaches them how the AI works, but it also is important because one of the defining characteristics of entrepreneurship is hubris. It is actually one of the number one predictors of entrepreneurial entry. And so-

things that break your hubris are actually quite useful. And the easiest way to do that is to have strong perspectives that do that. So it's let me increase the amount of work people do. It's let me push the kind of assignments they do. It's let me help adjust in a lot of ways. So AI mandatory works very well for the kind of classes I teach.

And we should say that when you say that you're having your students talk with great entrepreneurs through history, they are not actually speaking with the dead. They are going to the chatbot and saying, sort of, critique this in the voice of Steve Jobs or something like that, right? Exactly. And it's one of several kinds of prompts that I give them, right? So I give them tutor prompts, because otherwise they ask it to explain something like they're 10, and that's fine, but that's not

actually how good teaching works. The way good teaching works is you solicit explanations from people and you critique the explanations. You don't explain things to them and ask if they get it. So I give them prompts that we've created that actually do that kind of interaction. And I give them prompts that act as a mentor and help them with classroom problems, and give them prompts that they can use for their teams to help do

you know, actual good team activity. So there is this kind of interesting role where I'm designing material that then interacts with them like a TA or a really good research assistant.

In your experience, has the introduction of generative AI into your classroom kind of changed the culture of your classroom at all? I mean, I was recalling an episode that my colleagues at The Daily did earlier this year about basically ChatGPT in education. And they interviewed a professor who said that basically...

Since ChatGPT came out, his whole attitude toward his students had changed, where it used to be that when he saw a surprising and skilled piece of student writing, he was delighted. It was like a cause for celebration. And now, you know, when he sees something that feels maybe a little too good for the student's ability level, he gets suspicious and defensive and starts sort of saying, well, did they cheat on this using ChatGPT? It just seems to have changed the teacher-student relationship in that case.

Are you finding that at all? Or maybe are students participating less in class because they know they can go, you know, ask a chat bot for clarification on something after they get back to their room? Or how has the culture of your classroom changed?

I mean, it broke the old-style culture, right? So I'm on the edge of a new technology, so I'm excited about that. I can understand why that's worrying. People don't raise their hands as much, because one of the things we trusted people to do in the classroom was show ignorance so we could explain things to them. Now it's much better to not show ignorance in front of 80 people. You'll ask the chatbot later about how to answer the question. It means that, you know, people always cheat, right? It's not a new thing. There were 20,000 people, at least prior to November, in Kenya whose job was writing essays

for people. This is not a new phenomenon, but now it's much more obvious. Wait, what is this? Is this like a company? What is this company? It's not a single company. There's a paper that estimates the number of people who are getting jobs writing essays, mostly for college students. That's amazing. Yeah, it's pretty incredible. So cheating was pretty ubiquitous. Actually, it's been fascinating. Since the birth of the internet, the value of homework has dropped precipitously. I think cheating was already happening; we could just ignore it, right? So this is,

again, another forcing issue. It forces us to grapple with actual changes that have already been happening in classroom environments that we didn't have to worry about before. So it has changed the attitude. I would say that suspicion of writing is probably right, but I also no longer accept badly written stuff in my classes. Why would I?

And for every student that was a brilliant writer before, I had 18 students who were not brilliant writers. And some of them, English was their seventh language. Like, why should I expect them to write a beautiful essay and punish them or not based on their prose? So it does change things. We haven't figured it all out yet, but it is some positive along with a negative.

Well, and I sort of want to pause on that because that's a very interesting point. What you're saying is that you used to get essays from kids that weren't particularly well written, and you kind of gave them a pass based on their individual circumstances. But now there is a tool that will instantly improve the quality of their prose. And so you just expect that they will use it. And to not do that is bad form.

Well, not just that. I expect them to use it well. And so it turns out a little bit of prompting knowledge goes a long way. And I require at least five prompts for everything they turn in. And they have to give me a paragraph reflection on the prompt. And if they want to use the AI for the paragraph reflection, they can. And then they have to tell me the prompts and reflect on that paragraph. But it's hard to cheat on that piece. So the result is I want the writing to be different. It has to reflect their own writing. If it has that ChatGPT style where it says, and then in conclusion, I'm like, oh, come on, you didn't even cheat well.

Now, you mentioned earlier that part of the reason you've been able to run all these experiments in your classes is because you teach entrepreneurship, which is sort of adjacent to AI in many ways. And you also have a very personal interest and affinity for this technology. Right.

So what do you say to the, you know, 20-year English professor or the organic chemistry professor or the anthropologist who says, I actually don't want to make my class all about AI. I want to teach and I want my students to learn and I want them to show me their work even if it's imperfect and know that that reflects what's actually going on in their head and not some chatbot somewhere. What do you say to the professor who just says, you know, I don't really want to turn my classroom into an AI lab?

And I think that's a huge number of people, right? And it makes complete sense. I think that stage one is recognizing that your homework broke. And that means that, you know, you may have to flip classrooms. You may have to hold people accountable with in-class exams, with having the Wi-Fi turned off, you know, your Chromebook in demo mode. There are ways of solving this problem in the short term. I think the bigger, longer-term problem is what does this all mean? What does this change about education? Now, I would actually argue in some ways, I think the only thing that...

carries us forward is expertise. And building expertise actually requires a lot of tedious fact learning and other material to get started. So I think that we're going to

be able to justify some of these returns. But I think in the short term, it's acknowledging this thing is real and then providing some guidance: hey, I've been running all of our assignments through AI, and here are the things it gets right and wrong. So just as you start to do it, you should recognize that it will give you wrong answers on these kinds of chemistry problems. And then you flip the classroom a little bit. And I think you could be okay in the short term. Right. We've talked a lot about

how teachers and professors should be thinking and feeling about generative AI in the classroom. What about students? I mean, students, millions of them are going back to school right now for the fall semester. I'm sure, you know, many of them have already been playing around with this stuff, but now they're confronting, you know, policies and restrictions. How should students be treating generative AI? Yeah.

I would demand clarity. I would demand clarity about what it means when AI is banned or allowed. Does this mean that I'm allowed to use AI to generate ideas? Could AI come up with an outline that I work on? Can I ask for feedback from AI on my work? Because getting feedback is incredibly useful, and it's very good at providing feedback along the way. Am I allowed to use AI as a teammate? Can I ask the AI for advice on something? Can I ask it to explain why I got a question right or wrong? I think there is a request for clarity that's useful.

And I also think that the future AI that our students are going to graduate into is going to look very different than AI today. So I think the idea that we're teaching kids how to use AI is actually not that useful in and of itself. It's going to be much more self-prompting. It's going to take away parts of work that we used to do before. So I think you are allowed as a student to ask what this means, while being patient with your teachers, because they haven't figured it out either. Nobody knows the answer. I'm curious, Ethan.

One thing I hear a lot from educators when it comes to generative AI is this worry that it kind of flattens student creativity and output and effort. That, you know, when everyone is a B-plus writer, it's sort of producing kind of generic prose that sounds a little bit like Wikipedia, almost.

It's like you sort of sand down some of the edges of, you know, one student having a very different writing style or another student communicating in a very different way. So I think there are a lot of worries, not just about what this is going to do to the classroom experience, but actually what it's going to do to the minds of students who are relying on this technology to help them think and write and work. Do you share any of that worry or sort of what are some of your worries about the long-term effects of generative AI on students? Yeah.

I think we're going to lose a lot. And I think we're going to need to figure out how to reconstruct that. I certainly think in the short term, flattening is real. Another phenomenon, falling asleep at the wheel, is real. We've had papers showing that when people use AI that's very good, they tend to not pay enough attention anymore and kind of disengage from the work itself. That's a real phenomenon. I mean,

essays were useful, right? It's a shame to lose them as a homework assignment. Things are going to be lost. And I think some of this flattening effect is very true. So we need to teach people how to write with their own voice while still being able to use AI. And I think you've probably found as users, you can make AI do much more interesting things if you don't just do a generic prompt, right? The first prompt is always Wikipedia-style, perfect English, but you can get it to do some kind of neat things with some time.

But I think it's part of that bigger issue. For students in high school today, we have six or seven years before they're in the workforce, for those who are going to college. And so what that means for a moving target like the future is a bigger question. It does outsource some of our thinking and some of our abilities, in a way that people have worried about with Google and other phenomena as well.

It's true, and yet I do want to say, as somebody who wrote essays in college at one point, a good number of those essays were written between the hours of, let's say, 3 and 9 a.m. the morning they were due while I was, you know, hopped up on, you know, NoDoz pills. And I'm not sure how much I learned during the—now, on the whole, yes, essay writing was a huge part of my education, but, you know, not every student puts everything they have into every writing assignment. Yeah.

Absolutely. And I also want to apologize to the TA in my English literature course in college. I took it pass-fail and never revealed to them it was pass-fail, so I could get solid Cs. And every time, they'd write these elaborate notes and apologize to me for giving me a bad grade, and I'd be like, I wrote that in 25 minutes. So I totally get that, right?

And I think that's another piece. It's like we have to not be delusional about what has actually happened in education. And to go back to the issue, that's what the – I hope the gift of the AI piece is less the AI itself than this act of being deliberate, in what we've reconstructed at Hard Fork University, about what parts of education matter. And we actually know a lot about that, and we have to figure out how to reconstruct those pieces.

So back to Hard Fork University, our university that the three of us are starting. Which, by the way, was recently voted to have the worst football team in the entire country. But the parties are amazing. Yeah.

Should we have an admissions essay? I mean, is that also one of the things that is on its way out? Oh, absolutely. That's done for. I have been enjoying playing with this. If you have not had the experience of asking it to write admissions essays, it's amazing at it, right? So one of my recent favorites was: explain how stubbing my toe in the fourth grade taught me about adversity and why I want to be a neurosurgeon.

And it was just amazing. It filled in the details about how I realized that something as small as nerves could make such a difference in my life. And it was just like, wow, this is great. I mean, a lot of stuff is going to break, right? And so we have to decide what we're doing rather than fighting a fighting retreat against the AI as it takes thing after thing. We have to really rethink what we want to do with admissions and other policies.

Well, I mean, what I would say is that you have to have at least one TikTok go viral if you want to compete in today's economy. So that's what I would be looking for. It's time to dance. Okay, so we won't have take-home essays and we won't have, you know, assignments that students can easily plagiarize using ChatGPT. Are all of our assignments going to be in-class Blue Book essays? Like, what other types of work will we assign at Hard Fork University? So...

I think we need to divide the kind of work that we want to do. Learning English composition for our English 101 course, that's going to be a lot of, you know,

maybe reading stuff, getting critical feedback outside of class with a combination of AI help and human help. And then in class, you'll be doing a lot of writing. It's going to be writing workshops, right? Those are our intro courses. I'm hoping that as our classes get more advanced, you're taking your 201-level classes, you're taking more applied classes, that the in-class activities start to become very interesting. So the ability of students to get things done, to push past what used to be the 101 frontier, is fascinating, right? So we're going to push the

the power of AI to get people to do more than before. They're not just gonna do basic stuff, they're gonna do advanced stuff. They're gonna be 10 times more productive by the time they get out of the program. Sounds compelling, sign me up. Last question, Ethan. What would it take to get you to leave your job at Wharton and join the faculty of Hard Fork University? So, do you offer tenure?

We're working on it. Unaccredited, untenured university. I feel like, you know, these are my colleagues who joined startup companies here. I've done that already. So I'm willing to be visiting faculty, though, if the location's nice. It's a dingy podcast studio for now, but we are going to be expanding to a beautiful campus soon. Not too late for the metaverse for this one. All right. Ethan Mollick, thank you so much for coming on. Thanks for having me.

After the break, another wild week in the world of self-driving cars. And some thoughts from you, our listeners, on our interview from last week. BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico.

It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

Well, Kevin, if I learned one thing from last week's episode, it is that people have strong opinions about autonomous vehicles. They really do. I would say that never before in the history of the show have we gotten as strong a reaction to an interview that we have done. It's been really wild. I mean, we've gotten like

so many emails and DMs and replies on social media about this episode. Threads, skeets, Instagram stories. We got it all, baby. I did not realize that this is the most polarizing issue in America. It really feels like we accidentally stumbled into, like, you know, a K-pop fan army. Like...

Yeah, and look, I think, you know, we found our way into this story because there is this group, Safe Street Rebel, that is going around and they're putting traffic cones on the hoods of autonomous vehicles, which has the effect of disabling them. And they're trying to do that to draw attention to concerns they have around autonomous vehicles and cars in general. And we thought,

well, that's interesting. Let's have a conversation with them about that. I think what we heard back from listeners, though, is that the issues here run deeper, and our listeners really wanted to get into it in much more detail on a bunch of subjects related to AVs. Yeah, it was a very polarizing segment. We got a lot of people saying, like,

They loved the segment. We got a lot of people saying they hated the segment. We got people saying your guests were correct and you guys are wrong and you're too pro-AV and anti-transit. We also got people saying your guests did not make their points clearly and you should have picked someone better to make the case. So it was just kind of all over the map, but very, very strong feelings on all sides. So we want to talk about listeners' opinions about the segment because a lot of listeners raised some really great points. But first...

a really extraordinary amount of news happened in the AV realm, including in San Francisco, just over the past seven days. And so we thought, let's take a quick beat and just talk about what's been going on. So the first thing that happened, and we did manage to get it into last week's episode because it happened just before we published, is that the CPUC, the California state agency that was voting on the fate of these pilot projects for driverless cars, voted to allow...

Cruise and Waymo to expand in San Francisco. They can now run these AVs 24-7 and charge money for them. And that's a big deal, right? Huge deal. Because in order to hail one of these taxis before, you sort of had to have special access. You could only access it at night primarily. But now these are about to become just a fact of daily life in San Francisco in a way they were not before.

Right. And these companies are now going to start expanding into many more cities. Cruise just announced it is starting to offer driverless rides in Charlotte, North Carolina. They're also expanding to cities in Texas. So this is going to be coming to, if not a city near you, then at least, you know, a city within driving distance of you very soon. That's right. But there's another story we should talk about, which is that over the weekend, a group of 10

Cruise cars essentially came to a halt, blocking traffic in the North Beach neighborhood of San Francisco. Yeah, this was an amazing story. So right after this big vote by the CPUC, this pileup sort of happens in North Beach. And at first, it sounded like maybe it was related to this music festival, Outside Lands. That's sort of one of the big festivals in San Francisco. Cruise initially said, well, this was due to wireless connectivity issues. But then...

as the week went on, it looked more and more like this was a case of pedestrian interference. And in fact, that's the new explanation that Cruise has for why all these cars stopped in the middle of the street. What I love about these explanations is that neither one of them makes any sense to me. Okay? Okay.

When it comes to wireless connectivity, these Cruise cars are miles away from the Outside Lands music festival. So I'm sure a lot of people were posting their skeets and their threads at Outside Lands, but I don't understand how they were doing it in large enough volume to stop a car miles away, okay? Cruise eventually comes to the same conclusion, right? It says, oh yeah, I guess it wasn't wireless connectivity, which doesn't exactly inspire a lot of confidence. But then they come up with explanation number two, which is pedestrian interference. And Kevin, I would just like to ask you,

what do you think pedestrian interference is? And how does it stop 10 cars from moving? Well, we don't know, because Cruise has not said a ton more and we don't have, like, the footage from the cars themselves. They did tell us that it was not a cone-related stoppage. This was not the fault of our organizer friends from last week.

Let me tell you something. That we know of. As a pedestrian, I've interfered with traffic. And here's how I've done it. I've stepped in front of a car because I wanted to cross the street. And at most, this has affected one car. Okay? I mean, maybe the car behind it has to slow down too. But when I interfere as a pedestrian, I'm stopping one car for five seconds.

So I would just like to know a lot more from Cruise about how this alleged pedestrian managed to stop 10 cars from moving for many minutes on end. Right. But whatever the reason for the traffic jam, I think it's fair to say this was not one of the worst things that could happen in an autonomous vehicle. And it was actually cleared pretty quickly. In fact, it lasted only about 15 minutes. Well, I do think this is one of the worst things that could happen to our argument that autonomous vehicles are good. You know, we...

We've been trying to make the case. And then 10 of these things come to a dead standstill. And I found the whole thing very inconvenient. Yeah, I think if we thought we had road rage problems before this, we haven't seen anything yet. Like, if you're angry at someone for driving too slowly, you're going to be 10 times angrier if it's a robot driving too slowly or coming to a stop in front of you. But.

To my mind, the biggest story that has been published since we came out with our last episode was a story that ran in the San Francisco Standard with the title "San Franciscans are having sex in robo-taxis and nobody is talking about it." So this is a thing that I have been waiting to hear more about since these AVs launched, because you knew as soon as they launched that people were going to start having sex inside the robo-taxis. Mm-hmm.

And the San Francisco Standard interviewed four people who claim to have had sex inside Cruise AVs, including one source who they called Alex, which is not his real name, who they say claims to have performed at least six separate sex acts in robo-taxis, ranging from impromptu makeout sessions to full-on, no-boundaries sex. Okay. A total of three times in a Cruise car. First of all, are we considering a makeout session a sex act? That seems like a bit of a stretch to me.

So I have questions about this. Also, wait. An impromptu makeout as opposed to one that's been scheduled and put on a Google calendar? Oh, you don't put all your makeout sessions on your Google calendar? Hey, I like to keep it loose, baby. Anyways. So I have questions about this. All right. So we know that these cars have cameras inside them. Yes. Right?

Is some poor soul at Cruise headquarters just having to like sift through hours of footage of horny people just climbing into these AVs and getting it on in the backseat? You know, when we had the CEO of Cruise here, Kyle Vogt, we sort of asked him, like, isn't there a high potential for...

bad behavior inside these cars. Shenanigans. Shenanigans, hijinks, antics, if you will, inside these cars, because, you know, there's no Uber driver to sort of modulate people's behavior. And he said, essentially, just that: we have cameras in these cars, and if anything gets out of control, we can look and see what's happening in the cars. And it is amazing to me that San Franciscans are already saying, we don't care. Go ahead. Look all you want. There's a free show in the Cruise tonight. Yeah. I mean, there are lots of exhibitionists for whom the cameras are a feature, not a bug. Yeah.

I want to say, here's my other logistical question about this. So when we rode in a Cruise car, as you will remember, it would not move unless both of us had our seatbelts on. It was very firm on this point. So presumably, these people who are having sex inside the robo-taxis are doing it with their seatbelts on. And I just want to get your opinion on the logistics of that. How does that work?

We have a saying in this community, safety first, okay? If you want to have an impromptu makeout session in a car, you need to take care of your partner. And that means that you do need to keep your seatbelt on. And look, those seatbelts have a lot of give, okay? I don't know about you, but I've been in a seatbelt. I've reached all the way over to the other side of the car to grab something from the driver's side. So yes, it's absolutely possible and it's the way to do it. So that is the big news from the last week when it comes to autonomous vehicles. Now,

Let's talk about some of the things our listeners are feeling in response to last week's segment. Yeah. So, you know, I want to make some room to get into some of the criticisms we got. And look, you know, I'll be the first to say I'm not an expert on transit issues or autonomous vehicle issues. And part of

what we do on this show is we just lead with our curiosity. We bring in people, we ask them questions, and yeah, we express our opinions, but we're, like, open to other ideas. Right. And so right now we want to be open to our listeners' ideas. Right. This is something that people care a lot about. And I want to respond to some of the notes we got about last week's segment. And I think we should kind of sort it into a few flavors of criticism.

Yeah. The first one we got a lot of responses on was basically listeners who were anti-self-driving car, who were on the side of the organizers we interviewed, but who didn't feel like they were the best ambassadors for that position or that we fully explored their point of view. And I,

think, you know, there was an element that I saw in a lot of the messages, both in email and social media. And it was basically that we didn't engage with the real thrust of the activist argument, which is that we need fewer cars on the road, period. And of course, there's a big tie-in between that idea and climate change, which I also think listeners felt we were not taking seriously enough. And if we were serious about climate change, we would want to be saying, like, yes, absolutely, get all these cars off the road. So, Kevin, what

what is your take on this idea that we need fewer cars on the road period? Well, I think I share that view. I think car congestion and pollution are huge problems. And as anyone who has ever tried to drive around San Francisco can tell you, it would be a lot more pleasant here if there were, let's say 50% fewer cars on the roads. Now,

I think there are some issues with that. Namely, there are just some people for whom cars are sort of a necessary fact of life. We got a lot of notes from parents who said, I support transit, but there is no way that I could lug my kids and all of their stuff around on an e-bike or on the bus. Or what if you don't live on a transit route?

I got a note from somebody who said that cars are important if you have a disability, and they make life much easier for folks who may struggle to use transit. Totally. So I think there are a lot of people who just need cars or are attached to cars as their daily mode of transportation,

and for whom switching over to mass transit is just not going to be very practical. Yeah. Now, that said, could I spend more time imagining a car-free future? Yeah, like, I should. I grew up in America in the 80s and 90s. I'm extremely car-pilled, and, like, not by choice. It's just kind of the oxygen that I've been breathing. But I can tell you the reason I live in San Francisco and the reason I love it so much is because I can walk almost anywhere. And I walk basically wherever I can. I love taking transit.

I got rid of my car. So in part, I really am trying to build this future, but we have a long way to go. Right. And the self-driving car companies would say that actually self-driving cars help to get cars off the road because we know from a study that was done about 10 years ago that the typical car utilization is something like 5%, which means that if you own a car,

it sits there in your driveway or in a parking spot roughly 95% of the time, right? That is a horrible utilization rate. But with a driverless car system, that car can go pick up other people and take them where they need to go. You can get much closer to 100% utilization, which means that you theoretically would need fewer cars to

do the same number of trips in a city. Which would also mean that you need fewer parking garages and parking lots, and maybe we could use that to build more housing, right? So these are at least some ideas that are out there that I think are worth mentioning. Right. And the organizers we talked to last week, their point in response to that was, well, that's the same thing people said about Uber and Lyft, right? That if you have these ride-sharing services, people won't need to own cars. And anecdotally, I do know some people who have gotten rid of their cars.

I am one of those people. So it does happen, but their point that Uber and Lyft have not actually decreased the amount of traffic congestion in cities is a good one. It's also true, and I agree. It's a good point.
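To put some rough numbers on the utilization argument from a moment ago, here is a minimal back-of-the-envelope sketch. The 5% utilization figure comes from the study mentioned above, but the daily trip volume and the 50% robo-taxi utilization are purely illustrative assumptions, not anything Cruise or Waymo has published.

```python
# Back-of-the-envelope sketch of how fleet size scales with utilization.
# The trip volume and the 50% utilization figure are illustrative assumptions.

def cars_needed(daily_trip_hours: float, utilization: float, hours_per_day: float = 24) -> float:
    """Cars required to cover a given number of vehicle-hours of trips per day,
    if each car is actually in use for `utilization` fraction of the day."""
    return daily_trip_hours / (hours_per_day * utilization)

daily_trip_hours = 100_000  # hypothetical vehicle-hours of passenger trips per day in a city

privately_owned = cars_needed(daily_trip_hours, utilization=0.05)   # ~5% utilization
shared_robotaxis = cars_needed(daily_trip_hours, utilization=0.50)  # optimistic 50% utilization

print(f"Privately owned cars needed: {privately_owned:,.0f}")  # ~83,333
print(f"Shared robo-taxis needed:    {shared_robotaxis:,.0f}")  # ~8,333
```

The only point of the sketch is the ratio: under these assumed numbers, a tenfold jump in utilization means roughly a tenth as many cars for the same trips. Whether real fleets ever get close to that is exactly what the Uber and Lyft experience calls into question.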

So what else did people tell us, Kevin? The other thing that people asked about was how good the data is on the safety of autonomous vehicles compared to human-driven cars. Yeah, and this is a nerdy point, but I do think it's worth getting into.

autonomous vehicles are substantially safer than human-driven cars. And that is true. We do have research that shows that that is the case. But there's a big asterisk there that I maybe should have spent more time talking about last week, which is that this data is not very good and it's not very complete. So one of the issues with collecting data about autonomous vehicle safety is that there just aren't that many self-driving cars.

In San Francisco, there have been several hundred of them between Cruise and Waymo, compared to hundreds of thousands of human-driven cars. And so if you're comparing the data between how often these various types of cars get into collisions or cause pedestrian injuries or something like that,

You're going to just be comparing from two very different bases. And so we just don't have the kind of large-scale data on autonomous vehicles that you would need to be able to make really good, really reliable comparisons. Yeah. So that's just something that we're going to have to keep an eye on.

Totally. But there is a report that compares human drivers to AVs. Cruise put out a report this year that said that in their first 1 million miles of self-driving car data, they found that their autonomous vehicles had 54% fewer collisions than human rideshare drivers, 92% fewer collisions where the autonomous

vehicle was the primary contributor to the collision and 73% fewer collisions with a meaningful risk of injury. It also said that of the collisions that it did encounter in its first million miles of autonomous driving, 94% of them were caused by the other party. All right. And do we trust this data? Is this just being reported by the companies themselves or is this sort of using something that we can check against, like, I don't know, the Department of Motor Vehicles records or something?

So it's a good question, because this is obviously self-reported data from these companies, which have an interest in making their vehicles look very safe. But in Cruise's case, they partnered with the University of Michigan Transportation Research Institute and the Virginia Tech Transportation Institute, which are independent academic institutions that are studying this data, too. So I give that a little more credibility.

So I think we should say upfront that, like, these studies are conducted with the blessing of, and the data from, these companies.

And it's also very hard to compare, say, data about self-driving cars in San Francisco to national data because it's so different, right? It's driving in San Francisco in a crowded city with lots of other cars versus, you know... Like driving on the highway. Driving on the highway or driving in a small town in rural America somewhere. You could write a great song about that. Yeah. So it's just...

It's not quite apples to apples, and we just don't have enough good data about autonomous vehicles. But I would say the early indications in San Francisco are that these cars are

getting into collisions pretty infrequently. And when they do, that is often the result of the human in the other car, not the autonomous vehicle itself. What I'm taking away from this is that the early data looks good, but let's just be careful with what sorts of safety claims we're making around AVs as we continue to collect this kind of data. Yes, absolutely. All right. Let me read another piece of feedback that we got. This is from a listener named Dirk in Rhode Island, and he writes to us, quote, I don't

think you realize that Casey and Kevin came off as Luddites, not the guests. Car-free cities really are the future, as seen by the younger generation. The next generation doesn't want the SF of today. They want Zurich, Paris, Madrid to shape the future of the Bay. The A.V. itself is an afterthought, and you may be inside of an industry bubble. It's not how will A.V.s change our lives. It's how can I live in a way that I would rarely slash never need to use an A.V.?

So what do you think, Kevin? Are we just sort of not dreaming big enough when we talk about AVs as playing a major role in the city of the future? I think there's some truth to that. I mean, I have been to cities outside the U.S. where there is much more transit. La-di-da, Kevin. Tell us more about your international travels. Can I tell you, the Swiss trains and the tram system there are unbelievable.

So good. Very efficient. I will also say I've never seen a more expensive country in my entire life than Switzerland. These people are charging 40 euros for a cocktail? Come on. So I'm torn here because on one hand, like, I do believe that getting rid of many cars on our streets would make them safer, would make city life more pleasant, and would ultimately be a better future. At the same time, people are very wedded to their cars, right? We see this all the time. People...

even in cities with good mass transit, want cars for whatever reason. Maybe they don't want to wait around for the bus. Maybe they need to take their kids. Maybe they work a job where they have to transport large things and they can't really do that on mass transit. People are just really, really connected to cars and car culture. It is sort of a fundamentally American trope, but I think it is probably true. And so I'm of two minds here.

I think there's an argument to be made that, yes, we should have many fewer cars. And in order to sort of green light autonomous vehicles, we should make sure first that they actually are going to reduce the number of cars on the road. But I don't think people are going to easily give up cars. So in the meantime, shouldn't we try to make them safer and greener by making them electric and autonomous?

To me, it feels a little bit like the debate over eating meat, where you have people who say, you know, we need to eat a lot less meat in America. And the way to accomplish that is by convincing millions of Americans to switch to a plant-based diet. And then you have the technology-driven approach, which is, well, let's assume that people are not going to give up meat easily. And let's try to make meat that has less of an impact on animals and on the environment. Let's try to grow it in a lab so that people can still have their hamburgers, but it just doesn't

involve factory farms. You know, what I think all of these sort of similar issues come down to is that if you are an activist who's rallying around this issue or you're just a citizen who believes in one of these ideas, I think a skill that you want to develop is the ability to show people

a path for how to get from here to there, right? And sometimes just talking about this radical vision that seems so different from what we live in today can be very effective, right? It sort of bumps that Overton window a little bit closer to your side, and that's a good thing. But the risk of it is that you do sometimes make people, and I think this happened to me on our last episode, kind of throw up their hands and say, this feels kind of unrealistic to me. Like, if you're

going to tell me that we're going to live in this sort of Jetsons future, I need you to sketch out a few things first that are going to get us from today to there. So at the same time, what I'll say is I am just going to take the note to be more open-minded about this. Yeah, I'll take that note too, but I will say that I think there's an issue here of making the perfect the enemy of the good, right? I think if we could get to a place where, you know, say, 10% of the vehicles in San Francisco

on a daily basis were self-driving, they were safe, they were electric, and they were not getting stuck in intersections in North Beach, I think that actually would be a substantial safety improvement over the status quo. And so I think the people who are the hardliners on this issue should maybe allow for the possibility that

transitioning everyone off cars is not going to be a realistic goal, at least in the short term, and should look for some sort of incremental wins along the way. Because if nothing else, these AVs can be a really fun place to have sex. So that's...

That is, I would say, a fitting follow-up to last week's episode. Thank you to everyone who wrote in, even the guy who told us to go suck a tailpipe or lick an EV battery if you love cars so much. And now, is that considered a sex act inside one of these AVs? Because...

No, but seriously, look, I love hearing from our listeners even when we make them mad. And one of the things I like about our show is that it actually can be a dialogue. If we say something and you hate it, tell us and maybe we'll talk about it more and maybe you'll shift our views. Yeah, please don't put cones on our heads. Don't put cones on our heads.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at Indeed.com slash hire. Hey, before we go, Hard Fork is hiring. We're looking for a video producer to help us bring this show to YouTube. So if you or someone you know does that kind of thing, let us know. We're Hard Fork at NYTimes.com. Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Sophia Landman.

Original music by Dan Powell, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, David McCraw, Nell Gallogly, Kate Lopresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com to let us know what Kevin said this week that you think was really stupid and needs a whole segment of the show to talk about. Please, no more.

Imagine earning a degree that prepares you with real skills for the real world. Capella University's programs teach skills relevant to your career so you can apply what you learn right away. Learn how Capella can make a difference in your life at capella.edu.