
Is AI Already Taking Jobs? + A Filmmaker Tries Sora + The XZ Backdoor Caper

2024/4/5

Hard Fork

People
Casey Newton
Kevin Roose
A well-known technology journalist and author who covers the intersection of technology, business, and society.
Paul Trillo
Topics
Kevin Roose: AI's impact on employment is still unclear. There are cases of companies using AI to cut jobs, and cases of companies using AI to boost productivity, and economic data does not yet fully reflect AI's overall effect on the job market. Some companies have begun using generative AI to streamline workflows, for example by cutting staff or replacing some human work with AI, but mass layoffs have not yet materialized. Some companies, for talent-development reasons, will not fully replace human workers with AI for now. Some research suggests that generative AI may help rebuild the middle class. The economist David Autor argues that AI can empower lower-income workers with the expertise and decision-making capacity that previously required highly paid professionals. AI tools can accelerate how fast people learn and build professional skills, but people may also use AI to cut corners rather than improve themselves. When using AI tools, you need to preserve your own writing style and voice to avoid being replaced by AI.

Casey Newton: Some companies have begun using generative AI to streamline workflows, for example by cutting staff or replacing some human work with AI, but mass layoffs have not yet materialized. Duolingo used AI to reduce some of its contract workers, showing that AI has already begun to affect contractor employment. UPS layoffs were also partly attributed to AI, though the company says it will not fully replace workers with AI. Klarna claims its AI customer service can do the work of 700 human agents, but that claim deserves scrutiny, and AI customer service may not perform as well as humans in practice. AI's impact on employment may be gradual rather than a sudden wave of mass layoffs. When companies replace workers with AI, they weigh the PR impact and try to avoid a negative image. Some research suggests that generative AI may help rebuild the middle class. David Autor argues that AI can empower lower-income workers with the expertise and decision-making capacity that previously required highly paid professionals. The emergence of nurse practitioners is one example of this kind of empowerment, and AI could help non-experts gain expertise and decision-making capacity across many fields. AI tools can accelerate how fast people learn and build professional skills, but people may also use AI to cut corners rather than improve themselves.


Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.

What's going on with you? Well, you know, I'm having sort of a weird week. So I came in on Monday to the office of the New York Times in San Francisco. And someone said, there's a naked man outside the office who's ranting. And I said, well, how is Casey? And by the way, you should know that we tape the podcast on Wednesdays, not Mondays.

And then I learned that it was actually part of the St. Stupid's Day parade. Do you know about this? Wait, I've not heard of this. This is apparently an annual tradition in San Francisco where some big goofballs get together on April Fool's Day, April 1st, and they parade through the streets of San Francisco holding nonsensical signs, and some of them apparently are naked. And this is what was going on. I love this town so much. This town is so back. The things that happen in San Francisco, you wouldn't even believe, and they're wonderful.

I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, how AI is affecting the economy. Then, artist Paul Trillo joins to discuss how he used OpenAI's Sora tool to explore the future of filmmaking. And finally, a cyber sneak attack that could have brought down the web, but was caught in the nick of time. Not by Kevin. Not by me. Not by me.

Hey, before we get into our stories this week, we just wanted to remind people we're on YouTube. Some of what we talk about on the show this week is going to be better to see than to hear. We're going to talk with someone who uses AI to make films. If you'd like to watch those films, or frankly, if you'd like to watch the podcast instead of just listening to it, you can go to our YouTube channel at youtube.com slash hardfork or just search hardfork on YouTube. All right, now on to the show.

All right, Casey, this week I want to talk about AI and jobs because this is a topic that we get asked about all the time by listeners of the show that I hear about all the time from readers. Is this technology that you guys are always talking about actually going to take people's jobs? Is it going to help people at their jobs? And how long do we have to wait before our jobs, your jobs get affected by AI?

all of this stuff. That's right. And when listeners ask us, we always say, don't worry, if we find out that AI is going to take your job, we will email you individually. And until then, you can relax. We'll have ChatGPT email you. Yes. So I thought we should sort of break this out into a couple pieces. One of them is sort of

about the present? What do we know about how AI is already affecting jobs and companies and their plans to employ or unemploy people in the near future? And then I think we should talk about some of the various theories that are coming out about how and whether generative AI will actually lead to major changes in the job market. All right, let's do it. So this story is by my colleagues Jordan Holman and Jeanna Smialek, and

It's called, Will AI Boost Productivity? Companies Sure Hope So. It's a survey of what's going on out there in the world at companies like Walmart and Wendy's and Abercrombie & Fitch, which is apparently using AI to write some product descriptions.

And sort of trying to take stock of like, well, what effect is all this cumulatively having on the economy and the job market? Well, and I'm curious about this because there was that story the other week about how Wendy's was thinking about doing surge pricing. And if that was ChatGPT's idea, that could be bad for OpenAI. Yeah. So digging into the data a little bit. So far, it just seems pretty early for any of this to start showing up in official economic statistics.

But we do see in some of the most recent data a bump in productivity. And this data has been a little volatile since COVID, but some economists are starting to wonder if this is real and if this productivity increase might actually stick around. Okay.

In addition to that sort of aggregate data that economists can see at the national level, there's also just been a bunch of examples of companies that are starting to use generative AI in some cases to pare jobs among their own workforces. So recently, Duolingo, the company that makes the language learning app. Not to be confused with Dua Lipa, great singer. Highly recommend checking her out if you haven't had a chance. Yeah, she's the best.

Duolingo recently said that it was cutting about 10% of its contractors, not laying off any full-time employees, but basically just, you know, paring down the number of people it needs to create content. A spokesperson for the company said, we just no longer need as many people to do the type of work some of these contractors were doing. Part of that could be attributed to AI.

UPS recently cut about 12,000 managerial jobs. The CEO mentioned how AI and machine learning could reduce the need for pricing experts among other jobs, but they've also said they're not replacing workers with AI.

Then there was the company Klarna, which is one of these... Do you know Klarna? I actually do know Klarna. Klarna is one of these buy now, pay later companies. They have said that their AI assistant did the work of 700 customer service agents. I'm always so curious, though, when I hear that. It's like, yeah, I'm sure from Klarna's perspective, it's great. But I would love to hear from the people who actually had to use the AI chatbot. Do you think it was as good as a person? I've used some of these AI chatbots, and I'll say it. I think people are better.

Klarna said in their experiment that they actually found that the AI chatbots were rated just as good as the human agents and that they solved their problems faster.

So these are the kinds of experiments and tests that we're starting to see play out at various companies. We haven't seen sort of mass layoffs yet as a result of generative AI, but these are the kinds of experiments you would expect companies to be running on this technology, trying to figure out where can we shave down maybe 30% of the accounts payable department or maybe a

few engineers who we maybe don't need anymore and replacing those people with AI. Well, and as I read through all of this, Kevin, I find myself wondering, you know, maybe there never will be a mass layoff moment at these companies. Maybe it will just be a steady erosion as they figure out bit by bit how to make do with fewer and fewer people. You know, this is sort of one of the fascinating things that I'm observing as I go out and talk to people who run businesses is

No one wants to be seen as sort of a heartless capitalist who is just like, you know, wantonly laying off workers and replacing them with robots. But they are doing a lot of things around the edges to try to maybe sort of lay off people for efficiency and then replace some of those people not with

other people, but with software. They're trying to have it both ways. They're trying to signal to Wall Street, hey, look at how clever we're being and how much more efficient we're getting and how much we're cutting costs. But they're trying to avoid the PR backlash that would come along when they say that we no longer think humans have value in the enterprise. Exactly.

One of the things that's been really surprising to me is the reluctance that some corporate leaders have been having to kind of embrace this new AI technology and use it to replace workers, even when the technology is fully capable of replacing the workers. So I had a conversation a few months ago with a guy who I met at an AI event. He runs a big commercial real estate firm. They develop real estate all over the country.

And he was telling me that, you know, for years he's had these junior analysts who will go out, they'll visit various cities, you know, and they'll come back and they'll produce reports about the local commercial real estate market in those cities. So, you know, go to Jacksonville and then they'll come back with a 10-page report about all of the various commercial real estate trends in Jacksonville. And he said, basically, when ChatGPT came out, he started giving those assignments to chatbots.

and seeing whether they could do them. And he found that the reports that ChatGPT would give him about these local real estate markets were actually better than the ones his junior analysts were giving him. And so I said, well, okay, so then what happens to the junior analysts? Like, do you just lay them off and replace them all with AI? And he said something that sort of surprised me, which is no, because that's how they learn the job. So he was not just viewing these junior analysts as sort of

helper monkeys who go out and produce these reports. He viewed this process of going to a city, getting on the ground, talking to local businesses, examining the real estate market up close as a part of the training process and how he builds sort of future leaders for his business. He basically was telling me like, yes, I could replace those people with AI, but then I'm actually cheating myself in the long run.

And I think that's one of the kind of intangible things that is sort of hard to get at when you just look at kind of overall economic data is like there are many reasons that people have jobs at their companies. And there are many types of sort of incentives that are operating at these companies. And so even if it is short term profitable to replace a bunch of people with AI, there might be other reasons that you don't want to do that. So I think that's part of what's going on.

That makes sense to me, but I would also just note that we are still sort of in this very early phase with generative AI, where if you believe the people working at the big companies making the large language models, they're telling us within a generation or two, these models are going to be exponentially better. And then I wonder if some of that feeling of, yeah, I need to keep these people around and train them so they can take on the next job up the ladder. I wonder if that feeling starts to diminish. It's possible. It's also just these...

These things still make mistakes. They're still not totally predictable. They're still pretty weird, frankly. And so you might not want to throw them into the core of your business right away, at least not without a lot of supervision. So that's kind of where we are today

in the job market with AI. We have lots of companies running lots of experiments, spending lots of money, hiring lots of consultants, trying to figure out how can this stuff make us more productive? We don't see a ton of it in the economic data just yet, but there are signs that people are starting to figure out ways to use this stuff to automate jobs. Absolutely. But you know, Kevin, at the same time, we're starting to see studies that suggest that

perhaps the middle class will actually thrive in a world where generative AI is ascendant. And I think that finding surprised us, and we should talk about it. Yeah, so there was an interesting paper that got written up this week by my colleague Steve Lohr at The Times that was based on some work by an economist at MIT named David Autor. David is someone whose work I've been following for a long time. He's one of my favorite economists who looks at AI and the labor market. And

Last month, he came out with a paper that had what I would consider like a pretty contrarian thesis, which is that he actually thinks that AI could, if used well, assist with restoring kind of the middle class of the labor market that has been hollowed out by things like automation and globalization. Well, wouldn't that be nice? Yeah. So basically his argument, it's not that he's like observing this is already happening. This is just sort of something he thinks will happen, which is that

Basically, you have this economy now where you have kind of like a missing middle. You sort of have like wage workers and people who are lower earning workers. And then you have kind of this expert class of people who make decisions about, you know, medical decisions, legal decisions, corporate management decisions. And

that one of the effects that generative AI could have is basically empowering a lot of people at the bottom end of that labor market to develop the kinds of expertise and make the kinds of decisions that previously required highly paid professionals. So one example he cites in this paper is nurse practitioners. That's a relatively new occupational category. There used to be nurses and doctors.

And then several decades ago, nurse practitioners started to emerge. Exactly. Exactly. They basically developed this kind of middle tier of medical professionals who were not full doctors. They didn't go to medical school, but they were qualified to do things like write prescriptions and make certain recommendations about your health care. And so what David Autor argues is that basically

AI could allow non-experts in lots of different fields to kind of develop the expertise and the decision-making capacity to basically take on the nurse practitioner equivalent in whatever their industry is. So maybe you have paralegals who are

armed now with all this generative AI who can actually start to make the kinds of decisions that might have required a full lawyer before. That's exciting. I'm imagining using ChatGPT to become like a para-firefighter, where I can just sort of read about how to do it and then come down to the scene and just be like, maybe point a fire extinguisher over there. Exactly. Like you call the fire department because your house is on fire and they're like, well,

We could get a firefighter there, but it's going to take, you know, an hour. We do have Casey. He's got a ChatGPT subscription and a hose, and he's ready to go. He can be there in five minutes. Yeah.

I've always said if you have a ChatGPT subscription and a hose, you can get very far in this life. I think a better job would be like a para-CEO, you know, where it's like you sort of have the fat salary and the prestige, but you're able to do it with only half the training. And, you know, you're mostly just asking questions of a chatbot, which as far as I can tell is mostly what CEOs are doing anyway. True. Yeah. I mean,

this is one exciting possibility, I think. And I love David Autor's optimism about restoring the middle class through generative AI. I think there are lots of reasons it might not work in practice. There's all these sort of licensing regimes in various occupations. So it's like there are some barriers to the sort of optimistic future that David Autor envisions. But it's just one sort of

interesting theory about where all of this could be headed. Sure. But I mean, I do buy something fundamental about that, which is that if you believe that these generative AI tools will become kind of counselors, coaches, guides, and there is a field that you're interested in, and that technology can just kind of live alongside you, understand what you're working on, continually make suggestions, it should actually

accelerate people's rate of learning and the development of their expertise. And I can see that having an effect on the middle class. Yeah, I think that's sort of an optimistic vision. I do think that there are a lot of people who are not actually using AI to become better workers. They're using AI to cut corners and do less work. And, you know, we see this in schools, obviously, with students just using this stuff to cheat. But

There's also a lot of examples of this happening out in corporate America, too. People, you know, maybe not using this stuff in the way that would be most effective for them over the long term. Just saying, like, I've got to produce this report for my boss or there's I've got to put together this PowerPoint presentation. I don't feel like doing it. Let's just let the AI handle it.

But if the incentives pushed people toward using these things to develop expertise, perhaps more people would develop expertise. Right now, people are doing what you're saying because there's no economic penalty for them, right? But if there was an economic advantage, maybe they would use it. Yeah, so Casey, I'm curious, like, what do you think companies should be doing with generative AI right now? Let's take it out of our industry and just say, like, you know, you run, let's say a...

big, you know, restaurant chain. Yeah. The Cheesecake Factory, let's say. Exactly. First of all, I'd say make the menus 10 times longer. Let's get a full novella in there. Like, what should companies be using this stuff for? Well, I mean, I think it

depends on what kind of company you are. Honestly, if you're running a restaurant chain, I don't necessarily see that there's a ton in there for you. Maybe you want to experiment with some copywriting. Maybe you want to experiment with using the image generation to consider some new advertising campaigns. But all of that stuff feels like, you know, minor and experimental. On the other hand, maybe if you work for like a copywriting firm,

then maybe you want to be using it a lot. Then maybe you want to be testing out all sorts of different models and seeing which one is working better for you. So I think it's kind of highly dependent on the kind of business that you're running. But for the most part, I would say you want to manage your expectations here. It's like it's not going to be doing a ton for you. I don't think. Do you? I do. I think that this stuff is already pretty good. And in certain sort of... At what?

I mean, so much of our economy just runs on paperwork and forms and reports and presentations. That stuff is sort of catnip. That's low-hanging fruit for generative AI. Like, did you see that story the other day that was like there's a company that's creating an AI coworker?

And if you're working at a software company, there's going to be this character called Devin. Devin. Yeah. Very confusing name. Yes. I don't like it. I believe it's Cognition that is the name of this company that is working on it. And, you know, the idea is, you know, you're not going to have to hire as many engineers because now you have Devin and Devin can like sort of help you write code. That's in the very early stages. When that gets good, then, okay, yes. Now a lot of people I think are going to be using something like that and that's going to have a meaningful effect on productivity. Okay.

Now, Casey, I want to end this discussion by talking about our own experiences of generative AI at work. We have to end this discussion? We do, unfortunately. I was just warming up. So you've talked before with me about how you have started experimenting with using generative AI in

your newsletter. You used to use generative AI to create the images that run on top of some of your newsletters. I've noticed you're doing that less recently. I want to hear about why. You've also talked about using it to sort of organize and collect various links that you put in your newsletter. So how has your own use of generative AI at work

changed over maybe the past year? So what I would say is I truly have tried a bunch of things, and for the most part, it has been marginal. Start with the images. You're right. For a year or so, I was regularly using AI-generated images at the top of my newsletter.

The truth is, I just got a lot of feedback from readers that they hated it. They felt like I was stealing money from artists. They felt like I was using models that had been improperly trained on copyrighted material, and they hated seeing it. I had some people saying that they refused to subscribe because they saw that I was using these images. From my perspective, it had been a way to enhance my own creativity because I can't draw. I can't make anything look cool, but I can type in a box.

And that felt really cool to me. But I decided to take the note from readers. And for the most part, I've sort of taken a pause on using that kind of generative AI. That's really interesting because it's not like you were, the alternative was that you were going to go pay a human artist to make this stuff. You were probably just going to pull like an image from Getty Images or something. Exactly. That's what I was going to do. And, you know, I work on like very short deadlines. The idea that I could find an illustrator and sort of make it happen.

It just wasn't very likely, but at the same time, people were not excited to see the generative AI images. That's interesting. Now, more interestingly, I think, I have been able to, when I finish my columns with some reasonable amount of time before my deadline, actually take them to some of the large language models and just say, critique this. And the idea is not necessarily that it's going to make my column 100 times better, but I think

all of us writers, if you could get feedback from five or 10 people before you publish anything, you might do it. And because these things can analyze your work instantly, there's sort of no penalty for doing it. I wouldn't say I've changed my writing a lot in response to what I've heard, but yes, it does catch grammatical errors. It does catch typos. And increasingly, it's been able to identify the

tone of how I'm writing about something. And it's sort of asking me, did you mean it to sound this way? That is the eeriest part for me, is that I feel like over the past year, and I would say Gemini, Google's Gemini in particular, has been the one that has really been doing this. It feels like it can get at the subtext of what I'm writing better than other things in the past. So that has been interesting. And I do think I will keep doing it because, you know, when you're a writer, that kind of feedback is a gift.

Okay. Can you use generative AI at the New York Times? So, yeah, the New York Times has some rules about using generative AI and how we're allowed and not allowed to use it. You know, I'm not using it for journalism. I do not write my columns with generative AI. And I frankly wouldn't do that even if I were allowed to because I just think that would be boring. I enjoy writing. I'm not eager to turn that part of my job over to generative AI.

I have basically found that it is the best research assistant I've ever had. So, you know, now if I'm looking up something for a column or preparing for a podcast interview, I do consult with generative AI almost every day for ideas and brainstorming and just things like research. Like, you know, make me a timeline of all the major cyber attacks in the last 10 years or something like that.

And of course, I will fact check that research before I use it in a piece, just like I would with any research assistant. But that's the kind of thing that generative AI, for me, has been really good at. And I've found that generative AI has actually changed my work in a different way that I wasn't

sort of perhaps expecting, which is that it has made me much more attentive to the detail of my own writing and trying to make sure that what I write does not sound like ChatGPT wrote it. Because I think the moment as a writer that you allow yourself to drift in that direction, you are giving up your advantage. You are basically saying, I am replaceable. I am totally indistinguishable from this sort of generic

text extruder. And I think this is the challenge actually for the entire economy as more and more of us have reason to use generative AI in our jobs is how do you use it to augment what you do without making your boss think that that augmenting technology could actually do the whole job?

I think this is a really important topic, AI and jobs. I think we should continue to keep tabs on it. And in addition to looking at sort of the economic data and what economists and researchers are noticing about kind of overall productivity, I would also just love to hear about people's actual experiences of

using generative AI successfully or unsuccessfully at their own jobs. If you are listening to this and you have a really interesting story about your own use of generative AI at work or your company's experiments with this stuff that have either gone super well or super badly, I would love to hear from you. So send us an email, hardfork at NYTimes.com. And I just want to say, I particularly want to hear from you if you tried to use generative AI at work and it went particularly badly.

That's just my own sort of sensibility. Yeah, you love a blooper reel. I mean, who doesn't? Yeah, it's true. After the break, we'll take a look at a very different part of AI. How it's being used to make movies.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.

Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

Well, Casey, that was a very sober and rational discussion about AI in the labor market, and now I want to get a little weird. Let's please get a little weird. So we've been talking for weeks about this new OpenAI video generation tool called Sora. This is something that was demoed. Sam Altman was behind it.

He was fielding requests on X: tell me whatever prompt you want to type into Sora and I'll see what comes out of it. This is basically doing for video what tools like DALL-E and Midjourney did for still images. It works much the same way. It's one of these diffusion-based models. You type in some text, and

it gives you back a snippet of video representing whatever you typed. Yeah, and you hear that and you think, well, you know, we know that making films is extraordinarily expensive. It's very collaborative. It involves all kinds of specialists. And the idea that we might soon be in a world where people can just type what kind of movie they want to see in a box and get something resembling that feels like a big leap forward. So whenever any new AI tool comes out, my first question is always, well, can I use it? And for this tool, Sora, the answer was yes.

Absolutely not. That's right. The people who weren't allowed to use this product, they're called the Sora losers. So we couldn't actually use this tool ourselves. OpenAI is not making this public yet for various reasons, but they did put out a blog post sort of showcasing the work of

a bunch of filmmakers who were given access to the earliest versions of Sora. And so we are going to do the next best thing today, which is we are going to talk to someone who has actually been able to use Sora and play around with it. So today we are talking with Paul Trillo. He is a multidisciplinary artist and filmmaker who is based in LA. I've seen some of his work before with other AI tools. He's been playing around with this stuff for a while now. And we're going to talk to him today about

what he learned, what his experience was like, and what he thinks the implications for Hollywood and some of the filmmakers out there who are nervous about this stuff are. That's right, and one of the things we're going to ask him about is this short film that he made with Sora called The Golden Record, which he made after being inspired by a project that Carl Sagan undertook in the 1970s to create a kind of audio time capsule of humanity and broadcast it out into space in the hopes that aliens would find it and listen to it

and decide not to destroy our entire civilization. Which, so far, let's say it, it's been successful. - It worked, the golden record worked. - Hats off to Carl. - And the golden record, we should say, it's a little hard to describe. It's a minute long, it's kind of avant-garde. You'll be able to see it if you're watching this on YouTube, but if you're just listening on the podcast, you can go in the show notes, we'll link to it there as well. - Alternatively, just take mushrooms and think the golden record, and it'll be similar inside your mind. - Let's bring in Paul.

Paul Trillo, welcome to Hard Fork. Oh, wow. Thank you so much for having me. This is literally the only podcast I can tolerate these days. So you're like the shock jocks of tech and it's just... We actually were recently voted the most tolerable podcast, which was a big honor for us. Yeah. Very few people have told us they've punched their speakers after hearing the podcast. So Paul,

I'm wondering if you could tell us about the emotional experience of using Sora. You know, the first time you typed a prompt into this tool and got back a video, did you feel anything? I mean, I was shocked. I was floored. I was confused.

I was like a little bit unsettled because I was like, damn, this is like doing things that I didn't know it was capable of. Do you remember what the first thing you tried was that you had that reaction with? The first one that really took me out of it was the video that appears in the first 15 seconds of the OpenAI blog post, this kind of reel I did where I'm zooming through time, and

I'm saying, all right, give me this like dynamic, fast moving time lapse from like volcanic ash going underwater. And then we emerge and to like ancient civilizations. And we were zooming through like the 1700s, 1800s, and then until like modern day time, throwing all this stuff at it.

And it gave me something that looked like it was shot on Super 8 film. It was moving the camera in a way that was never possible with like old film technology. And it was making edits within the clip. So it almost had its own sense of pacing and editing.

That really made me think, okay, once you kind of throw a lot like the kitchen sink at this thing and you get this really experimental effect, you can start to experiment in ways that we've never experimented with before. And so that really got me excited was specifically that kind of hallucinatory aspect. Yeah.

Very cool. Can you just walk us through the basic steps of the process of making a film using a tool like Sora? What prompts did you use for this film? How long did it take you to put it all together? Just walk us through the process a little bit. So there's a website that you go to, and there's a text field. Prompt, you're used to prompting with other generative AI tools.

And then it gets sort of translated, interpreted through ChatGPT. So it's like, okay, you want this, and then it gives you something like that, and then you can edit the ChatGPT response. But the process of using Sora, I feel, is akin to trying to tell a story to a toddler with superpowers. What do you mean? It feels a little bit like...

this naive entity with black magic superpowers. I want to just root this conversation in the actual video that you produced with Sora, or one of the videos that you've produced with Sora. I think we should just watch it together and we'll kind of describe what we're seeing and then we're going to ask you some questions about it. Okay. It's very chaotic and kinetic and dynamic and may cause motion sickness, but that was kind of the point. Do we have to sign a waiver before watching this video? Yes. Yes, please.

So the video that we're looking at right now is called The Golden Record. And for people who aren't watching this on video, it's basically showing a record, like a vinyl record made of gold, that is sort of hurtling through space.

Yeah, I'm getting a little dizzy. Looks like there's just sort of like zooming, like we're zooming through space, encountering all these like golden orbs. So yeah, the kind of test here was to see how dynamic can I make these camera moves? How cinematic can I create an aesthetic that feels maybe different than what I had been seeing?

Cool. So that's a clip about a minute long. This is not a full feature film, but you did make this almost entirely with Sora. What was the idea there? Yeah, so I had been fascinated by this project kind of spearheaded by Carl Sagan and NASA, like in 1976 and 77,

where they essentially made a time capsule of humanity up until that point. They collected sounds from bubbling mud to human speech, and then they collected a bunch of songs from around the world, including Johnny B. Goode is on the record. And then they encoded images

into a golden record and launched it into space in hopes that maybe aliens would find it someday. Well, it was literally a message sent to aliens.

You know, we've talked about sending episodes of Hard Fork out into space as a warning to other alien civilizations. Yeah, do not greenlight podcasts on your planet. So can I ask some questions about the creative process here? So how many prompts did you use to make this one-minute movie? Yeah, I probably...

Five, but there's like variations of that. Right. So when I first got my hands on Sora, I was like, how do I break this thing? How do I unstick it from this like very AI looking video aesthetic, these kind of slow moving camera moves, these things that feel like just 3D animation or whatever.

stock footage. And so I was like, I need to like move the camera. And even if it causes motion sickness, that was part of the test, to see like, how crazy can this get? How chaotic can it be?

Just for the sake of comparison, how long would it have taken you to make something like the Golden Record using conventional film tools? And then how long did it take you using Sora? I would say, with how dynamic the camera is, how complex the renders are with, you know, the materials being used, how many shots there are, this would take a few months to make. I did the Golden Record maybe in...

two or three days. Huge time crunch. Did OpenAI put any restrictions on Sora when you were using it? Did they tell you, you can't make this video or you got to stay away from this prompt? Did they give you any guidance or did they just give you access to this tool and say, "Go nuts"?

They specifically wanted to be as hands-off as possible, but it was obviously no nudity, no extreme gore, that kind of stuff. So there go all of Kevin's ideas for making a movie with Sora. Yeah, I know. I see your eyes shifting, Kevin.

And you're like, you're deflating. So, Paul, as you as you reflect on the experience of making the short films that you have made with Sora, would you say that on the whole, the process felt easier than you expected, more difficult than you expected? Like where were your expectations for what this thing was going to be like and where did the result fall?

I actually had somewhat tempered expectations. I was just like, this is a cool tech demo that I saw from a massive company with tons of compute power, but is this applicable to filmmaking? And after breaking it and loosening up the camera, I was like, oh, okay, this can give us some...

like experimental, you know, wild, bold, weird things that may be difficult to achieve with other tools. So when I kind of cracked a series of words (it's basically like alchemy with words), then I was like, okay, this can allow for shot types and ideas that maybe get killed in the process of filmmaking. Wait, what are some of the secret words you found? Yeah.

Let's see, 35mm Fujifilm stock, 24mm anamorphic lens, analog, warm vintage tone, chromatic aberration, halation. Things that are like, I guess, words to describe literal film and to see what's in the training data, basically.

That's interesting. So it's basically like you're sort of giving it the instructions that you might otherwise give to like a cinematographer or someone who's... Yeah, it's like, hey, let's shoot on film. Right. But I wouldn't say, hey, DP, give me halation chromatic aberration. Like, you know, they're just going to be like, what? I had a bad case of halation chromatic aberration once, but I went to the doctor and it cleared right up. Oh, that's good. Paul, I just have a very basic nuts and bolts question, which is like you type in a prompt into Sora...

You fill it with all these magic words. You hit enter. How long does it actually take to get the video back? Is it instant? No, but it's faster than you would think it is. How long are we talking here? I heard from someone else that it's like 10 or 15 minutes usually between when you put in the prompt and when it gives back the video. Is that consistent with your experience?

Roughly, yeah. It just depends on the setting. So are you at 720p, 1080, like the shot duration? But to do a really simple shot of just a ball on the ground that's 15 seconds long,

will take just as much rendering time as doing like a crazy golden record hurtling through space and exploding and all this stuff. So that's actually really fascinating is what it does to render time. So having used this for a while now, are you thinking about this like, oh yeah, this is definitely a tool that I want in my arsenal going forward as I continue to make films, I can just sort of see a lot of applications for this or is it sort of more in the, I could take it or leave it zone?

I would definitely...

keep using this. But this is supplemental. This is not replacing anything by any means. This is like a much better alternative to stock footage B-roll. And yeah, again, it allows you to discover paths you maybe wouldn't have gone down. But yes, I still think if you want control and you want nuance and you want pacing, you're going to have to use the regular tools. And

I still find it to be more gratifying to do things the traditional way. But damn, it gives you some really crazy stuff that is outside of the box. And I think the outside of the box stuff is the most exciting. Right. When the demos of Sora went online and people actually started to see some of the footage that was emerging from this system, there were a lot of people...

especially in Hollywood, who had sort of a panic about it. Tyler Perry, the famous director, said in an interview with The Hollywood Reporter that he was basically bowled over by some of this footage and that he was actually planning to put on hold a planned expansion of his studio because he was just like,

I don't know what I need right now. If I can just sit in my office and create amazing footage using this AI tool, why do I need to go through the hassle of building out an expensive studio? So do you think those people who saw this and freaked out are overreacting? Is it the case that the closer you get to this technology, the less impressive it is? What do you make of some of the responses that have come out about this tool?

I feel like the more you use these tools, the less afraid you are of them, because you do understand their limitations and you understand the place for them and you understand what separates this from using other traditional tools, VFX or in-camera or actors. I think what Tyler Perry is saying is somewhat harmful and sending the wrong message to people that are at the top at the studio level that

are the gatekeepers, the ones that have the money to say, hey guys, let's not spend our money. And it's an incredibly capitalistic way of thinking. So it's not that you think he's wrong necessarily about the potential of the technology to displace labor in filmmaking. It's that

you know, basically this is sort of someone saying the quiet part out loud, like saying we might not want to spend all this money on humans. I think it's both too. I mean, he hadn't even tested Sora at the time. I don't know if he has it now, but

I think he had the wrong interpretation of Sora being this kind of replacing everything. It'll create certain efficiencies for sure, but all the people on Twitter that love to tweet the line "RIP Hollywood," I really encourage them to go and actually watch a movie.

Like, seriously, don't just watch a movie trailer. Go watch a real movie and see how much nuance and detail and how many micro decisions are made at every split second of a film, from an actor's choice to the aesthetic and everything. Movies are incredibly complicated.

Let me ask you about it a different way. Earlier in the episode, we were talking about the fact that for a while I put AI-generated images into my newsletter and my readers ultimately just kind of revolted against it. I got a lot of feedback just being like, we hate this. You're using images that were trained on copyrighted material. You're taking away money from human illustrators.

I imagine that you might have gotten some similar feedback, or you can at least imagine getting that feedback. How do you think about those questions?

Sure. Well, can I ask what was your illustration budget per year for your newsletter? Well, so that's the thing. I wasn't, you know, I mean, I had access to some image libraries like Getty Images, you know, in all those cases, a human being was paid for their labor, right? So that's what I had been using, but I was not commissioning standalone illustrations for my pieces. Right.

Exactly. I think that's what people are missing is that we are creating content now that simply wouldn't have existed before. Sure, there can be greedy studio heads at the top that will try to find ways to like cut the bottom line and increase their margins. But

For the most part, the people that are using these things are making things that just wouldn't have existed. You see musicians notoriously have zero budget. Sometimes they post an AI-generated image on Instagram, and then people are like, oh, no, not you. You use AI, too? Like, oh, my God, cancel, unsubscribe. And it's just like...

They wouldn't have posted anything that day if they didn't have the AI image. So it's kind of like opening up a new channel. And I mean, yes, there's like sensitivity around like black box of what's being trained on. But the reality is we can't like close Pandora's box. Like technology is relentless and we have to just kind of adapt to using these things. The way I feel like the narrative needs to be kind of steered is that

99% of scripts in Hollywood get rejected. And then even of the 1% that get bought, only about half of those go into production. And so I think this will open up the opportunity for people with these ambitious and bold ideas to resurrect projects that wouldn't have existed. I'm even going back to some music video concepts that I pitched in the past that just weren't going to work for the budget.

- Yeah. I have seen some backlash pointed at you and other filmmakers who have been experimenting with these tools.

Basically accusing OpenAI of artist washing, of basically using artists to sort of test out these tools to show all the cool and creative uses for them while actually sort of negotiating behind the scenes to replace a bunch of labor or to use these tools that many artists feel have been trained on their work or work of their colleagues without permission whatsoever.

So I wonder if you could just speak to that, this idea of open AI sort of using artists and filmmakers to try to convince a skeptical public that all this stuff is just going to be good and it's going to enhance creativity and that it's not going to replace anyone's job while actually having a very different strategy behind the scenes. Sure.

I mean, I think that is a very fascinating point. And it's something I kind of grapple with all the time, because, again, I still love to do things the traditional way and I still love to employ people. But then the other side of me is like playing with all this new tech.

And I'm like, am I just some sort of pawn in this great master plan of AGI? But it's like, what is the opposite of this? That you don't want artists involved in the research process? I feel like if you're developing visual technology, including artists in the process is critical. Because otherwise, you're just kind of in this bubble and you don't really understand what the purpose of your research is.

One question that comes to mind for me, Paul, is what are you working on next with this thing? Can you give us a preview of what else you think you can do with Sora? Yeah. I mean, so I will say everything has to be kind of run through OpenAI in order to make it to the public. They are being very kind of selective with what they show. They don't want to kind of inundate people. They're being careful with how much is released. But my brain has been spiraling. I've been...

Working on a short film. I'm also working on a music video. I don't know if that's like breaking news or not. No, that's breaking news. That's breaking news. Yeah. So that will, I can't say who or when or what, but... But let's just say Beyonce does have a new record out and a lot of people are listening to it. Potentially, yes.

And then this Golden Record project potentially is bigger, but I'm still exploring other routes and I don't see Sora as, "Oh, I'm only going to focus on this tool to get everything out of my head." It's just a supplemental thing. But it's been very liberating, I'll say that.

All right, Paul, we got to run. Thank you so much. Really great to talk to you. Thank you, Paul. Appreciate it. Send us your Sora login. Thanks. Thank you. When we come back, grab a bagel and some salmon because we've got a caper.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Casey, you know we love a caper on this show. Oh, we love a caper on this show. Today, I have what I believe is the biggest tech caper of at least...

The past year. Okay. Well, then let's hear about it. And I want to preface this caper by saying that this is going to involve a lot of terms from Linux and open source software development and various databases. And I need you not to fall asleep because the payoff is going to be worth it. Will there be a quiz at the end? There will be. Okay. Yes. So this is a very interesting and strange story that came out of the world of cybersecurity over the past week.

It involves a 38-year-old software engineer named Andres Freund. Freund of the pod. Yeah, Freund of the pod.

He lives in San Francisco. He works at Microsoft. And he stumbled into what may be the biggest attempted cyber attack in history. It's crazy that sort of just one person could stumble into something as big as this. Yeah. So I've been totally obsessed with this story. Basically, I think we should just explain off the bat that the internet is

a very sort of rickety contraption. I think most people don't understand this unless you talk to people who are engineers or work in cybersecurity or sort of develop the kind of building blocks on which the entire internet rests. It's weirdly precarious that it works. And honestly, somewhat miraculous, right? Because in part, so much of the internet, the technology we rely on, depends on these tiny little open source projects that might be maintained by, for example, one person. Yes, there's this famous XKCD comic called Dependency,

where you have kind of like a vast machine that is sort of resting on one little thin like rod. And the elaborate machine is labeled all modern digital infrastructure. And the little peg that it's resting on is labeled a project some random person in Nebraska has been thanklessly maintaining since 2003.

And this story is the literal instantiation of that cartoon. This is a story about the tiny peg. Yes. This is the tiny peg that is holding up the giant machine called the internet. And how one guy, Andres Freund, discovered basically by accident a plot.

to kind of mess with the entire internet as we know it. All right, so how did he stumble across this plot, Kevin? So he works at Microsoft. He develops this piece of open source database software called Postgres. The details aren't important, but this is a big database. Lots of companies use it. It's open source. Andres is one of the people who maintains this database. As part of his work, he does a bunch of tests to make sure that various pieces of this software are running correctly.

A few weeks ago, he's doing some tests and he starts noticing some weird error messages. And at the time, you know, he's flying back from Germany. He was visiting his parents. He's German. And he's sort of jet lagged. He thinks, okay, maybe this is not important. I'm just going to kind of like ignore these error messages. Then he gets home to San Francisco. He starts running some more tests. And he's like,

and he starts noticing some other weird errors. Some anomalies. Some anomalies, yes. He notices, for example, that this process called SSH, it's running slower than it should by a little bit. It's using more processing power than it usually would. It's causing some memory errors that usually aren't there. Now, Kevin, when you say it's slowed down, is it slowing down by like 30 seconds? No, no, no. So these delays, they are tiny. It's like measured in milliseconds.
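Freund's actual tooling was profiling software, but the general shape of catching a regression this small can be sketched as a simple micro-benchmark: time a command repeatedly, compare medians against a known-good baseline, and flag anything outside a tolerance. The function names, tolerance, and command here are illustrative, not what he ran:

```python
import statistics
import subprocess
import time

def time_command(cmd, runs=20):
    """Run a command repeatedly, returning per-run wall-clock times in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

def looks_regressed(baseline_ms, current_ms, tolerance_ms=100.0):
    """Flag a regression when the median run time grows by more than the tolerance.

    Medians resist the occasional slow outlier, which matters when the
    signal is only a few hundred milliseconds."""
    return statistics.median(current_ms) - statistics.median(baseline_ms) > tolerance_ms
```

In practice you would collect `baseline_ms` from a known-good build and `current_ms` from the suspect one; a consistent gap of a few hundred milliseconds, like the one described here, is exactly what this kind of comparison surfaces.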

But Andres is a very detail-oriented guy. He's been working on this particular piece of software for a long time, and he kind of knows what it's all supposed to look like. And so he starts noticing a little lag there, a little more CPU usage here. Something is going wrong. His Spidey sense starts to tingle. Exactly. So he basically starts digging in and investigating, and he traces the issue to this set of data compression tools called

XZ Utils. I wondered if it might be an XZ Utils error. Yes. So, you know, basically the details of what this thing is are not important. Oh, come on, try. It would just entertain me. It's a set of data compression tools. That is all I know. Okay, sort of like the premise of the old Silicon Valley TV show. Exactly. But XZ Utils is used by Linux, the open source operating system. And another piece of information that you need to know about this story is that Linux is probably the most important piece of software in the world.

Linux is everywhere. Yes. So Linux is used by the vast majority of the world's data centers, servers, like every majorly important computer in the world runs on Linux. If there's a computer talking to another computer somewhere, Linux is involved. Yes. So this little tiny software package, XZ Utils, it is a very...

small piece of a very important piece of software. So Andres starts looking into these weird delays and these weird anomalies. And he eventually starts looking at the source code for XZ Utils. And he discovers something that blows his mind, which is he finds a backdoor. Now, a backdoor, I know you're going to make a joke about this. I'm not going to make a joke about backdoors. Well, you're passing up jokes about backdoors today. Something is wrong. Are you okay? Okay.

In my mind, it's just funnier to keep saying that I'm not going to make a joke about it. This is a fastball over the plate, my friend.

So basically a backdoor is a piece of malicious code that is inserted into a piece of software that allows an attacker to basically remotely access or control it or sort of slip in some code that they wrote, basically do something malicious. It's kind of a key to unlock a piece of software with the intent to mess with it in some way. Got it. So basically,

Andres is not a cybersecurity engineer. He's just a guy who maintains a database. But he finds this evidence that

XZ Utils, this tiny piece of Linux, has been compromised. That someone has intentionally gone in and placed a backdoor there, so that if you are that person, you can then go in and you can basically tamper with any computer that is running SSH on Linux, which is to say the vast majority of the important computers on Earth. Which, like, I'm just imagining being him, and, like, you've noticed this, like, series of small anomalies, and you have that feeling that something is amiss here. But I bet

that even in his wildest imagination, he did not imagine that he found a very sophisticated backdoor. No, so I talked to Andres about his discovery. He sort of walked me through the whole thing. And he says that at first he was sort of like skeptical of his own findings. He said it felt surreal. He said there were moments where I was like, I must have just had a bad night of sleep and had some fever dreams.

Basically, this is not the kind of thing that you find in a widely scrutinized piece of software like Linux. And so Andres, he looks at this, he says, "Man, I don't know, this just sounds too big to be true. How could something like this get approved and make its way into the release version of Linux?"

But he keeps digging. He keeps finding new evidence. And then last Friday, he basically writes up what he's found and sends it to this group of open source software developers. And he basically says: all these errors that I've been seeing, all these anomalies in these very obscure software packages.

It's all because this thing has been backdoored. Someone is here messing with this release and they are intending to use this to basically break into a bunch of computers and do whatever they want. So he rings the alarm. So he rings the alarm.

And immediately, the entire cybersecurity world melts down. I talked to one researcher, Alex Stamos, who I know you know. He's a former CSO at Facebook. He's now involved in something called SentinelOne, which is a cybersecurity research firm. He told me this could have been the most widespread and effective backdoor ever planted in any software product. Wow.

And basically, you know, what people I talked to said is, look, if you have this backdoor, if you have this master key that lets you get into any Linux computer that is running SSH, this very ubiquitous software package, you essentially have a way to get into hundreds of millions of computers around the world. Once you're in there, you can steal private information. You can intercept encrypted traffic. You can

plant malware, you can cause major disruptions to like big pieces of infrastructure. And critically, you can do all of this without being caught. Because part of what Andres discovers as he's investigating this backdoor is that whoever planted it there has taken steps to ensure that it is very hard to detect.
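To be clear, this backdoor slipped in through the legitimate maintainer process, so no single check would have caught it. But the kind of routine integrity verification defenders do lean on, confirming that a file on disk matches a published checksum, can be sketched like this (the function names are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 in chunks and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_hex):
    """Return True only if the file on disk matches the published checksum exactly."""
    return sha256_of(path) == expected_hex.strip().lower()
```

Any single-byte change to the file produces a completely different digest, which is why tampering with code that people actually verify this way is so hard to hide; the sophistication here was in corrupting the trusted source itself.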

And, you know, basically this would have worked if not for Andres and his very eagle eyed, detail oriented, like obsessive approach to trying to figure out what the heck was going on with these error messages. This is why nerds are so important to the economy. We need to celebrate them. It's true. We celebrate the nerds out there. It's true. The people that are like this process is running one second too slow. Cancel my afternoon.

We celebrate you. - Totally. So this discovery, I think it's safe to say it sent a huge shockwave through the world of cybersecurity, because this thing was caught before it could do any real damage. It had not made it into the sort of widely used versions of Linux that all these servers run on, but it would have, and it would have been potentially disastrous.

Now that Andres has kind of become this like nerd hero, all kinds of people are praising him. Satya Nadella, the chief executive of Microsoft, his boss's boss's boss, praised him for his curiosity and craftsmanship. There was a popular post that went around calling Andres the silverback gorilla of nerds. And people are basically comparing him to the little peg in the comic that all of

modern capitalism rests on. Okay, let's get to who did this, Kevin, because that's what everybody wants to know, right? Who's responsible for this backdoor? Who's this backdoor bandit? So here's what we know so far.

According to some researchers I talked to, this is so elaborate, this plot was so sophisticated that it couldn't have just been like a random group of hackers. This had to have been like a nation state, like a Russia or a China or a North Korea, someone with access to vast resources and very skilled teams of hackers. Well, I'm interested in that, Kevin, because as far as I can tell, the main thing that separates this attack from many of the other attacks that you see all the time is just how much

time they invested in pulling it off. Talk to us about all the time involved. Yeah. One of the cool things about open-source software is that you can actually go back and see all of the changes and who was requesting them and what they actually meant in terms of what ended up in the code. Researchers have been going back and trying to forensically look at all the evidence, trying to see how this happened. They found a really interesting story buried in some of the details of this software.

So back in 2021, there was a user who creates a GitHub account and starts contributing to various open source projects. This user uses the name Jia Tan. For various reasons, researchers actually don't think that's a real name. It's probably a pseudonym. But a smarter pseudonym than the one I would have picked, which would have been Backdoor Wizard. Yes. So Jia Tan, whoever it is, they start suggesting sort of changes to XZ Utils starting back in 2022. Yeah.

And this is kind of the way that open source development kind of works is like people propose a bunch of changes and then these special developers called maintainers who are sort of in charge of a project, they look at the proposed changes, they sort of test them, make sure they work, see what effects they have on performance. And then if they're good, they approve them and that kind of gets like merged into the main code.

The basic idea is that everyone who participates is essentially a good Samaritan, right? There's somebody who comes along and says, I use this software. I noticed this thing could be better. Why don't I write a little code to fix it? I'll submit it to you. And if you like it, you can share it with everyone. Yes. And many of these projects only have one or a handful of maintainers attached to them because these are not like...

these are not fast moving software objects, right? These are not things that are being constantly refined and redeveloped. This is infrastructure. It's like you built a plumbing duct for the internet, and it's just going to kind of sit there mostly. And people are going to build on it and use it for stuff that they're building, but you actually don't need much more than one person to keep tabs on this project. Right. It's mostly done, but software is never totally done.

Exactly. So this person, Jia Tan, or this group of people using this name, Jia Tan, start kind of proposing changes. And then they start gradually social engineering the entire team that's involved in maintaining this project, which, again, mostly comes down to one person, one maintainer who's been doing this for many years.

So Jia Tan starts contributing these sort of minor proposed changes to XZ Utils back in 2022. And then something interesting happens, which is that Jia Tan, whoever it is, whatever national hacking team it might have been, they start trying to basically take over control of XZ Utils. And they do this by essentially seizing on the fact that the person who maintains this software

project is getting kind of tired of doing it. They kind of don't want to do this anymore, it sort of sounds like. And so Jia Tan, whoever it is, kind of sticks their hand up and says, well, what if I was the maintainer? What if I could solve this problem for you by taking over this very thankless task of maintaining this tiny little software library? Let me take that problem off your hands. So over the course of a couple of years,

Jia Tan builds trust with the other people who are involved in contributing to this software tool and eventually gets named a maintainer of this project. And so he's able to kind of do the final approval for this proposed code change that would insert this hidden backdoor into this software project. Effectively becoming a double agent, like something you would read out of a novel by le Carré. It's wild. It's like, honestly, the tradecraft, the kind of spycraft involved in this is

It is a very sophisticated operation. It involves not just a technical piece of hacking, but also kind of a social piece where you're kind of winning over this small team of very harried, very underappreciated developers. You're volunteering to help them. You're kind of establishing your credibility in this very tiny community. And eventually you're using that credibility to install a backdoor that will let whoever it is have access to hundreds of millions of computers and do whatever they want with

them. It is a wild story. It is. And now I know that we don't know truly anything about the real identity of Jia Tan, but I'll tell you, I like thinking of Jia Tan as a deadly woman assassin. You remember Carmen Sandiego back in the day? The old computer games, this sort of international criminal, very elusive. Jia Tan is the new Carmen Sandiego. That's true. You know?

And I think we should devote many episodes of this podcast to trying to track down Jia Tan. At the end of every episode, we're going to say, where in the world is Jia Tan? And we're going to keep saying it until we find out. My favorite part of that show was the "Do it, Rockapella" bit. Do you remember that? Yes, it was one of the only a cappella theme songs we ever had for a show, and it was so successful that they should bring that back. That's true. So...

I think there are a lot of things to say about this story, but one of the sort of interesting side discussions that I've seen come out of this is this: there's this whole group of people in Silicon Valley who believe that AI should all be open source. And the reason that you would want something like, you know,

an AI language model to be open source is because then it'll actually be safer because then you'll have not just one company kind of trying to keep the bad guys out. You'll have this kind of distributed army of volunteers who are constantly sort of looking through things, poking around in the source code. You can tap into the global nerd hive. Exactly. And that's sort of how you get things like Linux, which are the result of thousands of contributors working on their little pieces of this thing. Eventually it all comes out and it's pretty secure for the most part.

And so that is one thing that those people are now saying: this episode with the XZ backdoor proves that all software needs to be open source. What do you make of that? I mean, look, all software does not need to be open source. It's perfectly fine to have normal private companies making their own software. But I think to the degree that a piece of software is foundational to how the Internet works, yeah, there is a really great case for making it open source. Yeah.

Yeah, so I am thankful to Andres, not just for saving us all from doom, but also for forcing me to learn about Linux development and open source repositories and maintainers. And I guess I'm just struggling with one more question, Kevin, which is we've entered this brave new world where there are a lot of Jia Tans out there. What are you doing to protect your backdoor? And that's all we have this week on Hard Fork. Let's go to the credits, please.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Last thing before we go, we continue to be really interested in how young people are using technology. And we've been hearing stories about Snapchat causing drama in middle schools and high schools. And we want to hear about it. Has Snapchat roiled your school or your kids' school in some way? What was the Snapchat incident where you live? Let us know. Email us at hardfork at nytimes.com. The messier, the better.

Hard Fork is produced by Rachel Cohn and Whitney Jones. Welcome, Whitney. We're edited by Jen Poyant. We're fact-checked by Caitlin Love. Today's show is engineered by Alyssa Moxley. Original music by Diane Wong, Pat McCusker, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. And if you haven't already, check out our YouTube channel. It's at youtube.com slash hard fork. Special thanks to Paula Schumann, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda.

As always, you can email us at hardfork at nytimes.com, especially if you know who Jia Tan is.