This podcast is supported by Goldman Sachs. Exchanges, the Goldman Sachs podcast featuring exchanges on the forces driving the markets and the economy. Exchanges between the leading minds at Goldman Sachs. New episodes every week. Listen now.
Good morning, Casey. Good morning, Kevin. How are you? Good. Happy 100th episode to you. Happy 100th episode. It's our 100th episode. Did you think we would make it to 100 episodes? I knew we would make it to 100 episodes. Yeah? Yeah. Wow. Yeah. It was a good idea. The sound of our own voices. Who could resist? ♪
I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is the 100th episode of Hard Fork. This week, we answer September's biggest question: should you buy the new iPhone? Then, author Yuval Noah Harari joins us to discuss his new book and his biggest fears about AI. And finally, some crimes from the future that are already happening today, and why Kevin is responsible for them. I plead innocent.
Well, Casey, the big tech news of the week was the Apple event that was held in Cupertino to announce the new iPhones. That's right, Kevin. One of the big annual traditions in Silicon Valley is tech reporters trekking down to Cupertino and seeing what the latest iPhone has in store for us. Now, my biggest question about the new iPhones is why weren't we invited to see the announcement? I mean, I have a real theory about this. Yeah? This is my understanding. Apple cares what we say about artificial intelligence. They do not care what we say about their gadgets. And so today I thought, why don't we talk a lot about their gadgets and then see if next time around they want to have us down to see them in person? I don't know.
I thought it was because I stole too many drinks the last time I went down there. Yeah, you did hit that snack bar pretty hard. Anyway, this was different than WWDC, the event that we went to a few months ago. That was mostly focused on software and aimed at developers. A lot was said about their AI stuff that they were releasing under the banner of Apple Intelligence. But this event was really about hardware. Yeah.
Yeah, and the reason that it was interesting to me, Kevin, is that in order to do really powerful AI stuff, you do need to upgrade your hardware sometimes, right? You need more RAM. You need different kinds of processors. And then once you have those in place, you can do some cool stuff. So to me, the big question heading into Monday was: what will this new hardware do that might enable some of these next-generation features? And what did we learn? What will this new hardware do? Well...
To be honest, I think I had a pretty muted response to what we saw in these phones. Everything that they told us they would be doing at WWDC in June is still coming to these phones. But it looks like it's not going to come right away. Some of these changes aren't going to show up until October. Some might not show up till later. We still don't know, for example, when the ChatGPT integration that Apple announced will come to the phone. So over time, if you buy an iPhone 16, any of the models, absolutely, this stuff is going to come to it at some point. But when does it come, and how do we wind up using it in practice? These are all still open questions. Yes, and so the question on a lot of people's minds going into this event on Monday was: is there enough here to justify spending a whole bunch of money on a new iPhone? So I want to talk about some of the things that Apple announced, some of the most interesting features that stood out from the event.
But I wanted to actually start with the non-iPhone stuff. Is that okay with you? Yeah, absolutely, because I think in many ways, the stuff that was not iPhone was maybe the most interesting of the whole event. I agree. So the first thing I want to talk about is that on Monday, Apple announced that soon you will be able to use the AirPods Pro 2 to get some basic hearing aid features. What did you make of this? So this is fascinating to me, in part because the hearing aid market has been really hard for companies to crack. And do you know why that is, Kevin? Is it because of regulation? It is because of regulation. Until 2022, companies could not sell hearing aids over the counter, but then there was a rule change so that people with mild to moderate hearing loss can use over-the-counter hearing aids.
And it was around that time that Apple, which has been trying to build its AirPods into a kind of, you know, tech platform of their own, said, maybe we can make these an over-the-counter hearing aid. Yeah. Basically, you can take a standard hearing test, like the kind you would take in a doctor's office if you went in and said, I'm having some problems with my hearing. They'll play some tones through your AirPods, and you'll sort of take this test. And at the end, you'll get results. And if you have mild to moderate hearing loss, Apple will prompt you to set up this hearing aid feature, and your AirPods can then actually function as a clinical-grade hearing device, according to Apple. Yeah. And this is really cool, because I think so many people suffer from hearing loss of the mild to moderate variety who would never necessarily seek out a hearing aid. They would think, I'm too young, this doesn't matter that much. But if they already have a pair of newish AirPods in their pocket and they're able to just get a slight augmentation or enhancement during the day, that might meaningfully improve the quality of a lot of people's lives. Yeah, I really like this feature. I also just think it's a clever use of AirPods, which, you know, most people are not going to use for
hearing augmentation. They're going to use them to listen to music or podcasts or whatever. But for the number of people who do struggle with mild or moderate hearing loss, I think this is a really good, clever change for them. I saw some people talking on the internet about how this will basically improve their lives, not because AirPods are going to be as good as a high-end medical hearing aid, which might cost thousands of dollars, but because they're just going to be so accessible and easy. You can charge them, you know, you can put them in. They cost a couple hundred dollars instead of a couple thousand dollars. And so for people who may not want to go all the way to a kind of doctor-prescribed hearing aid, the AirPods could sort of fill in the gap.
Yeah, and I mean, I have to say, I am curious to try these myself. Like, I have been to a number of loud concerts over the years. Like, I'm positive that I have some minor amount of hearing loss. And, you know, if a pair of relatively inexpensive AirPods could fix that, does that eventually become the sort of thing where, you know what, I'm actually just going to pop these in at dinner because I'm at a loud restaurant. I want to be able to hear a little better. Right.
Right now, that feels like a bit of a far-flung scenario to me, but I actually think that the social norms around that might change really quickly. You know, Kevin, as part of the demo for this, they showed a vignette during the keynote of a woman going up and ordering some coffee.
And she kept both of her AirPods in while she did that. And I was taking note of this on Threads, saying, that seems sort of rude to me. You know, I'm a take-one-earbud-out person when I'm ordering my coffee, because my thought is always, if I'm the barista, how do I know that this person can actually hear me? Seems sort of rude to them. But man, did people come into my replies and say, hey, these things are hearing aids now. They're an accessibility tool. I needed to lay off this one. Yeah, you're canceled. I was basically canceled. How dare you? Yeah, it was bad. It was real bad.
So what do you think? How do you see social norms shifting in a world where more and more people are just using earbuds of all sorts as hearing aids? Is that going to be weird when like the median person you're talking to just has big, you know, white things sticking out of their ears? I mean, I think that's basically the norm already. I mean, you go around San Francisco and like half the people have AirPods in already. Sure, but they take them out when they talk to you at this point.
I don't know. Sometimes they do, sometimes they don't. Kevin, if you're talking to somebody and they have their AirPods in, take the hint. Okay, leave them alone. I do think we need a little indicator light, like, on the AirPods that, like, tells you if the person is listening to music and if they can actually hear you or not. Wait,
I don't know if you're joking or not. I actually love that idea. Like, I would love to see some sort of light on it that says, like, this person is in some sort of mode where they have, like, enhanced hearing and they can hear everything that you're saying. Because, look, you know me. I talk a lot of shit, right? If I can see a colored light on an air... Then maybe I'll keep my mouth shut. Yeah, I mean, one of the most interesting developments for me was that
Apple is starting to develop this kind of selective noise cancellation technology, where you can filter out certain noise from your AirPods. And maybe this points us to a future in which you could really say, like, you know, Casey's really annoying me. Can I just toggle Casey off? And so everything else I will hear normally, but the specific frequencies of your voice will be silenced. Person-specific noise cancellation, I think, could truly bring peace to this world. I think that would be wonderful.
So good luck, Apple. What else did they announce, Kevin? Well, the other feature that I would characterize as sort of a health, uh, update was, I thought you were going to say a boomer update, but go on. Yeah. So this is a health update that, uh,
is called Breathing Disturbances. This is a feature coming to Apple Watches that can detect signs of sleep apnea. This is something they are apparently able to detect by monitoring changes in your breathing as you're sleeping. They can tell you, hey, you should get checked out to see if you have sleep apnea. Sleep apnea is obviously something that affects a lot of different people and is a big problem for their health.
And maybe Apple's watches can help you get checked out for that. I don't know. What did you think of that, Casey? Well, you know, I think it speaks to the strategy that Apple has here, which is that the Apple Watch is just turning into more and more of a bundle of services, right? We've talked before on this show about how, when they first put it out, they weren't exactly sure what it was.
Over time, they realized it was a health device. So they started making it really good for things like tracking your runs and your swims and your weightlifting sessions. But as time has gone on, they've figured out more and more sensors that they can add to the watch. So now it can detect your heart rate, for example. And to move from there into sleep apnea, it's like, yeah, sure, why not? If it can tell you that you have an additional problem, it just becomes one more reason to buy an Apple Watch. Yeah, now my question about this feature is: so you're supposed to wear your
Apple Watch while you sleep to have your sleep apnea detected. But I charge my Apple Watch when I sleep. If you're using this to monitor your sleep...
When are you charging your Apple Watch? So I have the same question, and they did announce this year that the newest Apple Watches are going to come with faster charging. And I think the dream is basically that someday you should be able to charge your Apple Watch while you are in the shower in the morning. And just during that 15, 20 or 30 minutes, you can get enough charge back to last you for the rest of the day. That is not true for me today, though. Yeah.
For now, you're going to need a day Apple Watch and a night Apple Watch. That'll be sort of part of your bedtime ritual. You change into your pajamas and you change over to your sleep apnea monitoring Apple Watch. What else did they announce? Okay, so now let's talk about the iPhone. The new iPhones will be available later this month, on September 20th. One of the features that they introduced on Monday was something called Camera Control. This is a new button on the iPhones, a tactile switch. And when you press it, it basically gives you some more advanced settings on your camera, right? You can change the zoom or the exposure or the depth of field. Basically, this is aimed at users who take a lot of photos and videos with their iPhones and want an easier way to access some of these higher-end features.
Casey, what do you think about this feature? I think it's going to give me access to a whole new suite of features I will never use even one time. Do you think I really want to be fiddling around with the zoom and exposure? The reason that I use an iPhone to take pictures and not a DSLR is because that is a foreign language to me that I do not want to understand. And while there are certainly some things in my life that I do not want to offload onto an AI assistant, choosing the zoom, exposure, and depth of field? I am
absolutely happy to give that to the computer. Okay, well, you may not be the target demographic for that, but you may be more interested in something else they showed, which is some more information about their Visual Intelligence feature. This is Apple's attempt to basically bring AI into the camera and into the photos features inside the iPhone. So now, if you
click and hold this Camera Control button, it will sort of pull up some AI-assisted information about whatever you are looking at through your phone. The example that appeared in this video was a guy who is outside a restaurant, and he kind of pulls out his phone, he hits the Camera Control button, and up pop the hours of the restaurant that he's looking at. Then there's a dog who passes him on the street, and he says, "What kind of dog is that?" And he points his phone at it, hits the Camera Control button, and up pops this little thing that says, "That's actually an Australian cattle dog." And this got some mockery online, because people said, you know, when you see a cute dog walking by, part of the joy of life is to say to the owner, "Oh, that's such a cute dog. What kind of dog is that?" No, see, I actually appreciate this feature, because I do have two dogs.
And they are, they're mutts, they're mixed breed. And so, you know, you go to the dog park and, you know, every time you go to the dog park, let's say three to four people will stop you and say, what's their breed? What mix are they? And, you know, it just gets old after a little while.
So Apple has relieved me of this conversation. It's so funny. I'm just always fascinated by these technologies, the point of which is to, like, prevent you from ever having to talk to another person. And of course I love some of those features. I never want to make another phone call to a business in my entire life. But, you know, other things, I wonder if we're going down the right path here. Yep. Yeah. So,
Apple also announced a bunch of audio-related features that are coming to the new iPhones, including something called Audio Mix. This is basically kind of an advanced set of features for people who want to be able to control the audio in videos that they make.
Also, interestingly, they said that you can record a kind of multi-track voice memo. Did you see this part? I did see this, yeah. This is very interesting. They basically talked about how now a lot of singers and songwriters will record sort of, you know, they'll have an idea for a song and they'll just kind of record it into their iPhone voice memos. And now, apparently with this new software, you can actually like
add multiple tracks to a voice memo. So you could do the vocals, and then you could kind of, like, you know, give a little flavor of what the guitar part might sound like, and you can kind of create these little symphonies in your voice memos. Yeah, super fun little feature. What did you make of these? Well, look, I think, you know, Apple is in a bit of a tough spot, in the sense that smartphones by now are a very mature category. It's a very mature technology. We are now... When was the first iPhone released? 2007. Right, okay, so we're... Jesus Christ. 27 years? No, that's not true. 17. Has it been 17 years? Kevin, how do you do math? What's 24 minus 7? 17. Wow. Wow. And to think I still make fun of the AI for not being able to count the R's in strawberry. The point is, Kevin, we're 17 years into the iPhone. And...
At one point, it was actually a very big deal when Apple would say, well, the battery lasts longer or the camera is better. But now we just sort of assume those are going to get 1% to 3% better every single year. And look, you can just look at the numbers. Most people do not upgrade their iPhones every year anymore. Most people, I don't think, even upgrade them every two years, right? So what they call the upgrade cycle is getting longer and longer. And when I saw this year's model, I thought, nothing about that is going to change, right? This is not going to be the AI supercycle. I don't think the features are, frankly, interesting enough to compel that. But also, the features won't even arrive when the smartphone does, right? So instead, you have a bunch of stuff
that's kind of cool. And when I eventually do upgrade my iPhone, I'm sure I will appreciate it. But in the meantime, I'm going to be fine with the phone I have. Yeah, that's one really interesting piece about this. You know, usually Apple announces these new iPhones in September, they go on sale, and if you buy the new iPhone, you right away get all of the features that they just showed off. With this, they are slow-rolling the AI features. So Apple Intelligence is going to be in beta mode in October.
In some places, like in the EU, it won't be available even then due to some of the regulatory and privacy concerns that Apple has there. So it's not even clear when these AI features will arrive on these phones that Apple is saying you should buy because of the AI features. Yeah, they also have not yet been approved in China, which is a huge market for Apple.
So, Casey, the perennial question is: do I need to upgrade my iPhone? And I got that question this week from a friend of mine who said, you know, I've got an iPhone 15. Do I need to upgrade to the 16? What did you say? So I said, if you want a phone that can run all of the Apple Intelligence stuff that is going to come, it's probably a good idea to have a phone that can do that.
This friend of mine is also a new parent. And so my advice for new parents is always to get the phone with the best camera and the most storage that you can afford because you do want to be taking a lot of photos of your very cute kid. So I said basically, yeah, you should, you know, if you're going to upgrade your phone sometime in the next few months, you should probably get the newest one because it is going to be able to run all the AI stuff. That makes sense.
I'm a bit more laissez-faire about these things. That might not be the right French phrase. I don't know. Write in if you have a better suggestion. You're a bit more je ne sais quoi about the whole thing. Yes, if you will. A little Moulin Rouge, as they say. But here's the deal.
You know, it used to be really hard to figure out what TV to buy. Remember this? Yeah. Back in the day. And then we got to this certain magical point. And what I'm about to say is not strictly true, it's like 80% true: truly, buy whatever TV you want. It's all good enough.
Right.
But when it comes to phones, well, I'm an Apple person. I'm in the Apple ecosystem. I have my Apple Watch and my Mac and my iPad. And so I know I'm just going to upgrade my iPhone. The answer to "when should I upgrade my iPhone" is truly just: when you want to. Do you want to upgrade it this year because you want a slightly better camera? Go for it. Do you want to...
take two years off and just enjoy the perfectly good iPhone 14 that you're still using, that is fine too. Because by the time we got to iPhone 14 or so, they were basically just all good. Yeah, I don't think I'm going to upgrade either. I upgraded my phone last year. I'm on an iPhone 15 Pro now, so I can already run all of the AI stuff. I don't need the new chips for that. So I'm going to wait it out. All right. Well, 15 bros for life then, I guess. Yeah. Yeah.
When we come back, the segment you've all been waiting for. Yes, we're going to talk with author Yuval Noah Harari about his new book and what worries him about AI.
This podcast is supported by Outshift, Cisco's incubation engine that turns innovative ideas into tomorrow's tech. Outshift thrives on solving tomorrow's hardest problems, pushing the boundaries of what's possible, and bringing their customers along for the ride. Not just building a vision, but an achievable path for their customers to succeed. If you're looking to gain a competitive edge in how your organization will handle incoming generative and agentic AI needs, quantum security risks, and more, let Outshift help inspire you. Visit outshift.com to learn more. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for SOC 2, ISO 27001, and more. With Vanta, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center. Over 7,000 global companies use Vanta to manage risk and prove security in real time. Get $1,000 off Vanta when you go to vanta.com slash hardfork. That's vanta.com slash hardfork for $1,000 off.
Well, Kevin, you know who won't be buying the latest iPhone? Yuval Noah Harari. That's true. Yeah. Yuval is a very well-respected historian and author, most famous for his book Sapiens: A Brief History of Humankind. But one thing that we've learned about him is that, for somebody who is beloved by the tech industry and often sought out as a mentor and a guide for the CEOs of the biggest companies, he is not somebody who really uses a lot of technology in his day-to-day life if he can help it. Yes, he's a little bit ascetic in his personal life, but his thoughts about technology and the history of technology really do carry a lot of weight in the tech industry. I remember there was this moment a few years ago
where, you know, all the tech CEOs basically started sounding like Yuval Noah Harari. You know, they would make these big sweeping statements about how, since the dawn of history, the hunter-gatherer ancestors of Homo sapiens would look for ways to implement technology into their lives. And you could just sort of say, oh, that person just read Sapiens. And he has not only become influential among the rank and file of tech, he's also been embraced by many of the leaders: Mark Zuckerberg, Bill Gates.
All these sort of tech leaders are fans of his. Some of them have blurbed his books or hosted conversations with him over the years. So he's just a person who people in this world take really seriously. Yeah, and he has a new book out called Nexus, and it concerns a lot of things that we talk about on the show. Yeah, so I read the book. It's called Nexus: A Brief History of Information Networks from the Stone Age to AI. It's a very big book. It's a very thorough book. It's very sweeping. And I would say it's kind of two books in one, right? It's got this sort of capsule history of information networks, going all the way back to the dawn of the printing press and how that helped, and didn't help, fuel the Enlightenment. He talks about 20th-century information sharing in places like Soviet Russia. And then the second half of the book is basically about AI and some of his fears for how AI could not only cause these kinds of doomsday scenarios, but also some of the more insidious ways that it could weave its way into our everyday lives. Yeah, and you know, this kind of doomsaying, I think, has become sort of unfashionable over the past six months or so, as it at least feels like the pace of AI development is slowing down a bit. Some of the doomsday scenarios that were predicted earlier have failed to materialize. And so this book really brings us back to the sort of folks who have never stopped being worried and think that the risks are just as great as they ever have been. Yeah. And at the same time, he's also a person who has written about some of the positive things that could result from AI. So I was just sort of interested by his embrace of the more pessimistic view of AI
and maybe how he balances that against some of the things that he thinks could be good about this technology. Let's bring him in. Yuval Noah Harari, welcome to Hard Fork. Thank you. It's good to be here. So like everyone else in Silicon Valley, in the tech world, I read Sapiens. I've read a bunch of your other work since then and really enjoyed it.
And I read your new book, Nexus, which just came out. And I think what might surprise some of your fans in the tech industry about this book is that compared to some of your previous work, I would describe it as fairly grim.
In some of your previous books, you wrote about both those sort of positive and negative things that might come from future technologies like AI. But in this book, you write that, quote, if we mishandle it, AI might extinguish not only the human dominion on Earth, but the light of consciousness itself, turning the universe into a realm of utter darkness. So my first question is, Yuval, how did you become such a doomer? Yeah.
I'm not saying that this is a prophecy. I'm just saying this is a worst-case scenario. And you have so many people, especially in places like Silicon Valley, that they focus on the positive potential of the technology. And it certainly has enormous positive potential. But still, it also has a dark side. It also has a negative potential. And it becomes the job of historians and philosophers and sociologists,
to focus on the dark side as well. We need to be careful because we've seen before with powerful technologies that even if in the end humanity learns how to use them well, the kind of learning process can be extremely, extremely costly.
I want to get to some of your ideas about AI and sort of dive a little bit deeper into some of them later on. But I also just want to start by asking about your career arc, because as you write in the book, you sort of became an accidental AI expert.
Very accidental.
I'm so curious about that. Like, can you share more about those conversations? And why do you think these powerful people wanted to talk to you about AI? I think because they realized that this is a very powerful technology that will change history and that you need a historical perspective to understand what is happening and what are the potential consequences. Looking backwards, it's a shame that they didn't take history more seriously.
Because even though AI is completely different from every previous technology in history, and we can discuss why, there are still many relevant lessons that you could have learned from previous revolutions in information technology, like the invention of the printing press.
One thing I heard over and over again in places like Silicon Valley is this naive, rosy view of the history of information, which says that you invented the printing press, and as a result, we got the scientific revolution. And the more information moves in the world, the better things become. And this is nonsense.
Almost 200 years passed from the moment Gutenberg introduced print to Europe until the flowering of the scientific revolution in the 17th century, with figures like Newton. In between, you had the worst wave of wars of religion in European history, the worst wave of witch hunts.
Because what the printing press flooded Europe with was not scientific tracts. It flooded Europe with extremist religious literature, with witch-hunting manuals. These were the big bestsellers. Yeah, I love that part of your book because you do talk about some of the bestsellers in, like, 15th century Europe being these kind of, like, insane conspiracy theory books about witchcraft and how to spot witches. Just these, like...
outrageous conspiracy theories, including one book that became a mega bestseller that claimed that witches literally stole men's penises and collected them in boxes. Yeah, so this was The Hammer of the Witches.
There was a Christian inquisitor who tried to go on this kind of do-it-yourself witch hunt in the Austrian Alps in the late 15th century. And the local church authorities, they said this man is crazy and they stopped his inquisition, his witch hunt. So he took revenge through the printing press. He wrote The Hammer of the Witches.
which is a do-it-yourself manual that exposes this worldwide conspiracy of witches, led by Satan, trying to destroy humanity. And it was full of these really... well, just to give you a flavor of the book: yes, there was a chapter about how witches steal penises from men. And as evidence, he brings this story of a man who wakes up in the morning and finds his penis is gone.
I hate when that happens. It sometimes happens, yes. So he suspects the local village witch. So he goes to the witch and kind of coerces her: bring me back my penis. So the witch says, okay, climb this tree and you'll find a bird's nest at the top of the tree. And the man climbs the tree, finds the bird's nest, and inside the bird's nest there are several penises that the witch stole from different men in the community.
And she says, okay, now you can take yours. And instead he takes the biggest one, of course. So the witch says, no, no, no, you can't take this one. This one belongs to the parish priest. Now, this was the number one bestseller. I can see why. This is very good. Very good. Yeah. Yeah. Unfortunately, it was marketed as nonfiction, which is a real mistake by the publisher.
So you have these stories, these sort of insane examples from our history, of how more information does not always make us more virtuous or more informed. And I've just been fascinated to watch your interactions with leading people in Silicon Valley, the ones that I've seen publicly at least, especially this conversation you did with Mark Zuckerberg back in 2019. This was part of his personal challenge for the year, to tape himself having a bunch of conversations with public intellectuals, people he admired. And you were one of the people that he picked to talk to.
And it's just a fascinating conversation, and very bizarre, because Mark Zuckerberg basically spent the whole conversation saying, you know, well, I believe that connecting people online and helping them communicate is good. And you basically spent the whole interview telling him, well, this is sort of not how it's gone historically, and usually helping people communicate has some bad consequences too. And it seemed like he just couldn't really process it, or didn't want to believe that what you were saying was true. I'm curious: is that how a lot of your conversations with tech CEOs go, where you basically tell them, look, new technology can be good, but it can also be bad, and they kind of don't understand what you're saying?
Some of these conversations are like that. And again, I try to emphasize that I'm not against technology. I think it can do immensely good things for humanity. I often give the example of my own life: I met my husband online in 2002, on one of the first kind of social media sites for the gay community. I grew up in Israel in the 1980s and 1990s. It was a very homophobic society. Also, I grew up not in Tel Aviv, but in a small suburb of Haifa. I didn't know anybody who was gay. And how do you meet guys? Very difficult. And then the internet came along.
And the internet and social media did wonders for these kinds of dispersed minorities, like gay people. So I'm not saying, okay, let's stop all this technology and go back to the Middle Ages. It's just we need a more balanced view. I mean, looking backwards, for instance, at Facebook, with the way that the Facebook algorithms contributed to what happened in Myanmar, the ethnic cleansing campaign there against the Rohingya, by deliberately spreading racist conspiracy theories and anti-Rohingya propaganda in pursuit of increasing user engagement. And what we know from the kind of internal documents that were released from Facebook is that by 2018, 2019, people in Facebook knew what happened. They understood the danger. So it's not like they were completely oblivious to it.
Yeah, I mean, both Casey and I covered that story of what happened with Facebook in Myanmar. And it is a big example that you use in your book of sort of how algorithms and AI can facilitate violence as they did there.
But that was also a spot where I found myself really wanting to challenge what you had written because you wrote about the Myanmar story as something that proved that algorithms and AI could have their own sort of agency by lifting up sensational stories, by making it possible to fuel ethnic violence because the news feeds are just showing everyone the most sensational, outrageous thing. And I read that and I thought, well...
there were a lot of humans involved in those decisions too. People who worked at Facebook who knew that they had a problem in Myanmar and didn't do anything to try to stop it. So do you think that you're kind of letting the humans off the hook by placing so much emphasis on how the algorithms help fuel ethnic violence? You know, in that section in the book, when I distribute kind of the blame, I say the algorithms carry maybe 1% of the blame.
I still allocate 99% of the blame to human beings, mostly human beings in Myanmar. Is it also unfair, then, to place most of the fault on Facebook? No, I mean, most of the fault lay with the army chiefs and the junta in Myanmar and the extremists there. But there was a contributing effect
from Facebook and from its algorithms. I focus on this example by saying that, you know, this is the first time that non-human intelligence contributed maybe 1% of the responsibility for an ethnic cleansing campaign. It's a warning sign for the future that if you give a lot of power to a non-human intelligence,
and you give it a certain goal, you cannot predict how the non-human intelligence will try to achieve that goal. This is where the agency aspect of AI comes into its own. You know, with a printing press, every decision had to be made by a human being.
When thousands of copies of the Hammer of the Witches were spreading across Europe, it was a human being who decided, let's print another copy of the Hammer of the Witches. But with the conspiracy theories in Myanmar, many of the day-to-day decisions were not made by any human being. And that's a huge, huge difference. Yeah.
So I hear what you're saying about the power that these algorithms have to promote certain views, to maybe lean further into hate or fear. I also feel like sometimes we talk about algorithms in a way that almost ascribes mystical powers to them. You see this in a lot of the current discussion around regulating social media in the United States as well, where algorithm-based social media is treated as this uniquely dangerous thing. And I just wondered, as you did your research, I understand there's the Myanmar case, but were there other things that you saw that you feel are evidence for the idea that algorithms really are this sort of reality-reshaping force? I look at the war in Gaza, for instance, at the moment, and there is a huge debate, which I don't know the answer to. I'm following the debate, but I'm still not sure about the correct answer, because it's an ongoing process. Who is choosing the targets?
If a house in Gaza is bombed, everybody agrees, the Israeli military, the critics, everybody agrees that AI is involved in choosing the target. There is a debate to what extent there are still humans in the loop. You have one camp that says basically it's the AI now deciding which houses to bomb, which individuals to target.
Humans have very little ability to go over the information that the AI analyzes and make sure that it makes the right decision.
There is another camp that says, no, no, no, no, no. Yes, it's technically feasible right now to have the AI calling the shots, literally calling the shots. It says, shoot this, and then the humans shoot. But we are not doing it. We still have humans in the loop. We still go very carefully about every target the AI chooses and make sure that it's the right target. I'm not in a position at the present moment to tell you who is right.
But both sides in the debate agree that it's now technically feasible to conduct a war with an AI calling the shots. And, you know, if you think about the AI apocalypse, so if you live in a house that is being bombed on the orders of AI, this is the AI apocalypse, not in the Hollywoodian style.
This is not the Hollywood scenario of robots trying to rebel against humans. This is a more bureaucratic scenario of, you know, the world is being filled with AI bureaucrats.
In the armies, in the banks, in the universities, in the governments, more and more decisions: which house to bomb, who is a terrorist, whether to give you a loan, whether to give you a job, whether to give you a place in a university. These decisions are increasingly made by AI, and these decisions are becoming increasingly opaque to us.
Yuval, in March of last year, you were one of the prominent people who signed this open letter calling for a six-month pause on the development of powerful AI systems. Obviously, that didn't happen. Didn't happen. Nobody paused. And in fact, all the big tech companies have been accelerating like crazy. And I
guess I'm wondering, tactically, if you think calling for a pause was a mistake. I've talked to a few people who signed that letter, the same letter that you signed, who said, well, maybe we weren't wrong, but maybe we were just too early. Maybe people aren't ready to understand that AI can pose an existential threat, because, you know, these systems are still pretty unreliable. ChatGPT still can't tell you how many letter R's there are in the word strawberry with any kind of fidelity.
And so there are some people, including some people who are very worried about AI risk, who think, well, maybe we just sort of cried wolf too soon. Maybe we should wait until these systems actually are dangerous and capable before we start warning people about them. Do you worry at all about that with your own work? I don't think it's too early.
In many ways, maybe it's too late. If you look at the situation of democracies around the world, for instance, democratic conversation is breaking down. Not just in the US, but all over the world. You look at Israel, you look at Brazil, you look at the Philippines. We have the most sophisticated information technology in history, and we are losing the ability to talk with each other, to hold a reasoned debate with each other.
And democracy really is on the brink of collapse.
And nobody knows for sure why it is happening. There are, you know, these specific explanations for every country. Like, there are the unique social and political conditions of the United States. But when you see the same thing happening at the same time all over the world, that can't be the whole explanation. The number one question I would pose to people like Elon Musk or Mark Zuckerberg is: why is it that at the very same moment that you developed the most sophisticated information technology in history, people are losing the ability to talk with each other? Yeah.
So I'm curious about what it would mean to act on that worry, right? Like, had there been a six-month pause, or were the leaders of the big AI labs to come together today and say, you know what, we are going to heed this warning and kind of slow down, what would you want to see them do or work on that would make you feel like, okay, this is starting to feel safer to me? And is there anything in particular that you think they could do that would be positive for democracy? You know, there are specific regulations that I would like to see, and, more importantly, there are structural changes that I would like to see.
If we speak about regulations, the two most obvious and necessary ones are to make corporations responsible, liable, for the actions of their algorithms. Not for the actions of their users, but for the actions of the algorithms. If somebody posts a hate-filled conspiracy theory on Facebook or Twitter, this is not the responsibility of Facebook or Twitter, in my view, and I don't think they need to censor them.
But if the Facebook or Twitter algorithm deliberately promotes and spreads this conspiracy theory, this is on the company. This is not on the users. So they should be liable for the actions of their algorithms. And the other main thing, we need a ban on bots pretending to be humans. AIs are welcome to join the conversation, but only if they identify as an AI.
Again, democracy, we can imagine it as a group of people standing together, talking, having a conversation. What happens if suddenly a group of robots join the circle and start speaking very loudly and very persuasively and even very emotionally?
And you can't tell the difference who is a robot and who is a human being. If democracy is a conversation between humans, this is the end of democracy. So again, the robots are welcome to join only if they identify as robots, as AIs. So this is the other regulation. But of course, these specific regulations are very limited here.
Because we can't anticipate how the AI revolution will continue and accelerate in the coming years and decades. So what we need are structural changes. What we need is, first of all, a far higher investment in safety.
Like, if in the industry it becomes the standard that when you develop a new powerful AI, 20% of the budget and 20% of the human talent is working on making sure that this thing is safe. And when I say safe, I also mean socially and politically and psychologically safe for humans to use. I would be happy with that. Yuval, I'm very curious about your own interactions with technology and your relationship to technology in your own life. Do you use any AI tools? Do you use ChatGPT for research or writing, or just stuff in your personal life? I use it sometimes for translations, and it does an amazing job.
As a person who is a specialist in language and words, I'm amazed by how good the technology has become so quickly. Like a couple of years ago, the general idea was that AI will never master language. And now I look at the texts produced by AI, whether translated or just originated by AI, and it has such a good grasp of
the connotations of different words, for instance, or the semantic field of words. It knows how to weave an argument, a story. You know, some people say, oh, this is just, you know, glorified autocomplete. It's not.
You see, you read a long story or essay, and, you know, it's sometimes full of hallucinations and mistakes or whatever. But it's really a story. It's really an essay. There is a logic there. There is a narrative there, from beginning to end. This is really amazing. And I rely on it to some extent.
And I'm not against using these technological tools. It's just that because of our historical experience, we should be aware of the dangers of the dark side and make sure that we use technology for our purposes instead of being used by it. What about smartphones? A few years ago, you told an interviewer that you didn't own a smartphone. Do you now?
Yes. Unfortunately, I had to. At some point, there were just too many services, you know, healthcare, transport, that I couldn't access without one. So I now have this kind of, not quite emergency smartphone, but it's often asleep. So, like...
You tuck it into bed at night? Not at night. I mean, but like, I definitely don't place it near my bed. And it's not the first thing that I check in the morning. Now I came here to the New York Times offices to do this interview. So I left the smartphone in the hotel.
I don't want to be with it all the time, like relying on it, developing a symbiotic relationship with it. That's crazy to me. It would be terrifying to me to be in a city I don't live in, on my way somewhere to do an interview, and I mean, Hard Fork is obviously probably the most important interview that you'll do, Yuval, and to not have your smartphone. What if you get lost? Why should I get lost? I mean, New York City. He is a pretty smart guy. Yeah. So...
After reading your new book, I feel like I have a fairly good idea of your views on AI specifically, some of the positives, also some of the negatives.
A question we ask a lot of people who come on this show is about their p(doom), right? Their probability prediction that AI will cause some catastrophic event, or maybe even destroy us. What is your p(doom) right now? I think that kind of total extinction is a very small, very low probability. But it's still there, so we need to take it into account.
What does low mean? Is it like 1%, 10%? Something in that area. I mean, I still have trust in humanity that we won't go that far. My main fear, again, as a historian, is simply a repeat of the last disaster. Like, I think about the Industrial Revolution of the 19th century as the model. Of course, there are many, many differences between AI and industrial technology.
But basically, you look at the Industrial Revolution, and of course, you know, you had all these doomsday scenarios in the early 19th century, all the Luddites saying these machines of iron and steam will destroy humanity. And people today in Silicon Valley, they like to laugh about it and say, look, nothing happened. I mean, we now have the best lives ever thanks to these machines of iron and steel. And it will be the same, we promise you, with the machines of silicon.
But when you take a closer look at the history of the 19th and 20th century, you realize the Luddites were not entirely wrong. The machines, they led to some of the worst disasters in human history, not because they were evil, but because people didn't know how to handle them.
In the end, yes, we learned how to use steam engines and telegraphs and radios for good purposes. But, you know, on the way: totalitarianism, world wars, imperialism. If I were kind of the teacher giving humanity a mark for its achievements in dealing with industrial technology, I would say we got a C-. And now, in the 21st century, we have to learn how to build AI-based societies. If we get another C- in how to build AI-based societies, this is terrible news for billions of people across the world.
I'm reminded of a part in Sapiens where, if I'm remembering it correctly, you basically say that people were probably happier on average before the invention of agriculture. Like just living in small tribes, you know, everybody knows everybody. Everybody's chill. Yeah, they lived to age 25. I mean, there were some downsides too. But farmers also lived...
I mean, the agricultural revolution did not extend human lifespan at all. This happened only in the 19th century. A good cautionary tale. Well, thank you, Yuval. And actually, I have a parting gift for you today, which is that I asked ChatGPT to write a thank you for Yuval Noah Harari in the style of
Yuval Noah Harari. So here's what it said. Thank you, Yuval, for a conversation that transcends the ordinary, touching on the vast arc of human history from our humble beginnings as foragers to the potential end of human dominance at the hands of AI. Wonderful. Thank you so much for joining us. Really nice to have you here. And the book is great. Everyone should check it out. It's called Nexus. Thank you. Thank you. Thank you.
When we come back, it's crime time. I'll tell you about some new schemes and scams involving the technology of the future.
This podcast is supported by ServiceNow. Here's the truth about AI. AI is only as powerful as the platform it's built into. ServiceNow puts AI to work for people across your business, removing friction and frustration for your employees, supercharging productivity for your developers,
Providing intelligent tools for your service agents to make customers happier. All built into a single platform you can use right now. That's why the world works with ServiceNow. Visit servicenow.com slash AI for people. Time is luxury. That's why Polestar 3 is thoughtfully designed to make every minute you spend driving it the best time of your day. That means noise-canceling capabilities and 3D surround sound on Bowers & Wilkins speakers.
Seamlessly integrated technology to keep you connected and the horsepower and control to make this electric SUV feel like a sports car. Polestar 3 is a new generation of electric performance. Book a test drive at Polestar.com.
Well, Kevin, you know, on this show, we often like to highlight positive uses of technology and ways that people are using AI and other tools to try to make the world a better place. We sure do. And this segment is not about that.
Because something else that people do, Kevin, is use new technologies to commit crimes. Oh, you told me we were going to do a crime segment, and I was very excited, because I thought, finally, we're going to solve a murder on this podcast, like the Serial people. We are not going to solve a murder, but we are going to kill it. Speaking of the Serial people, they didn't hit 100 episodes. But anyways...
Today, Kevin, we're going down to the courthouse and we're looking for justice. Introducing Hard Fork Crime Division. Do we have a theme song? We do.
Wow. Already, I feel like I'm down at a bodega interviewing the proprietor about whether he saw someone last week. Speaking of which... Are you the grizzled veteran of the police force or the plucky newcomer? I'm absolutely the grizzled veteran. Are you more plucky newcomer? I'm more of a plucky newcomer vibe. All right. That makes sense. Well, listen, today we're cracking open our case files on three recent operations because, Kevin, whether we like it or not, new technology...
is rewriting the rules of crime and justice. All right, let's get into it. All right. Case one, the Russian ruse. And that's different than the Russian ruse, who's my cousin Boris, who's a disaster and we never talk about him. That's right. This is a different kind of ruse altogether.
So you may have seen this. Last week, the Justice Department accused two Russians of orchestrating a foreign influence campaign. Just as in 2016 and 2020, Kevin, the Russians are attempting to influence our election. But this time around, they're using real people, social media influencers who already had large audiences, in an effort to reach their goals. So I'm sure by now you have seen this story. I'm obsessed with this story. Yeah. Well, what is it about it that's obsessed you? So, you know, there are these people on YouTube who have huge audiences and, you know, command lots of trust from their fans. Many of them are conservative and sort of on the right wing. And this time around, when the Russians thought about how do we want to influence American politics, they didn't think about how do we start a troll farm that's going to fill Facebook with fake messages. They said, how could we pay these people who are already doing basically our work in the U.S., in a way that would be hard to trace, and we could basically have our own slate of influencers? And they actually did it. They did it. So let's talk about how this worked. So according
to the indictment, the Russians funneled about $10 million into a Tennessee-based online content creation company called Tenet Media. You ever get your news from Tenet Media? I've seen their videos before, for sure. It's run by Lauren Chen, who's this sort of long-time conservative influencer, and her husband...
And they had a bunch of deals with a bunch of YouTubers to distribute their content. Yes, and those conservative stars included Benny Johnson, who has 2.4 million subscribers on YouTube, Tim Pool, who has 1.3 million, and Dave Rubin, who has 2.4 million subscribers on YouTube.
And some of those stars made content with really prominent conservatives, including Vivek Ramaswamy, who's a former presidential candidate, and Lara Trump, who is the Republican National Committee co-chair. So this was part of an operation that was really reaching a lot of conservative people. And so, of course, everyone is asking: did the influencers know? And apparently they did not.
But of course, some people have pointed to some issues that might have made them aware. Yeah. So let's just run through a little bit of what happened here, according to the Department of Justice, because I think that if you look at the evidence that we have in this case so far, it is extremely unlikely that these conservative parties
YouTube influencers did not at least have an inkling that someone from Russia or another foreign country was paying their bills, was paying them to make content. Okay, make your case. So, okay. So all the influencers who have responded so far to this story have basically said, we had no idea. We are victims of this scheme. We didn't know that Russia was paying our bills. And then you start looking at the details. Yeah.
So one of the details is just about the amounts of money being paid to these influencers. Now, we don't know exactly which influencers took exactly how much money from this company, Tenet Media. But we do know from people who have pieced together the unnamed influencers in this indictment with some of the things that have been said on these YouTube channels that someone we believe is Tim Pool, the conservative YouTuber, was reportedly paid a hundred thousand dollars per video. And amazingly, these were not, like, bespoke videos for Tenet Media. These were just the videos that he was already making, and someone shows up out of nowhere and says, we're going to pay you a hundred thousand dollars to essentially license this. Yes. Another YouTuber, Dave Rubin, another right-wing media figure,
was reportedly paid $400,000 a month plus a $100,000 signing bonus. And when they asked who was paying them, because some of these influencers, it appears, did actually take note, their ears perked up when these people started showing up out of nowhere with bags of money for them for apparently not a lot of extra work, representatives of the company replied by making up a wealthy benefactor named Eduard Grigoriann, who appears not to have existed. And they sent over an info sheet about this made-up money man, and it was just a picture of a man in a private jet with some made-up biographical details. Now,
I don't know about you, Casey, but if someone showed up at the Hard Fork offices tomorrow and said, we want to pay you $400,000 a month to put your podcast on our channel, and by the way, if you ask any questions about who's funding this, we're going to direct you to a fake-looking shot of someone on a private jet...
I, for one, would have some questions about that. Yeah? And you would dig really deep to make sure the money wasn't coming from the government of Russia? Yes, especially if some of the ideas that they had for my content were that I should start saying nice things about Russia and mean things about Ukraine.
As evidently happened with some of these YouTubers. Yeah, well, so tell us about the pro-Kremlin messages that the Russians wanted to push here. So, again, the influencers who are involved in this case have been pretty consistent, saying they don't believe they were sort of taking marching orders from the Russians. Essentially, these are just our personal views. You know, we had total editorial control over these videos.
But there are still a few examples that the DOJ pointed to in this indictment about how Russians tried to influence the content that these right-wing influencers made for Tenet Media.
At one point, the Russians asked Tenet's founders to basically seed the idea among the influencers in their network that they should start criticizing Ukraine, like spreading an idea that Ukraine was behind a deadly attack at a concert hall in Moscow in March rather than ISIS. The Russians apparently also pushed Tenet to highlight a video from Tucker Carlson where he was walking around a supermarket in Russia, just sort of marveling at how much stuff there was on the shelves. And this is my favorite detail.
A producer for Tenet apparently said that the video just felt like overt shilling. Basically, this was too obviously Russian propaganda. But after being pressured by Tenet's founders, they agreed to post the clip anyway. Yeah. So that's what happened. Kevin, do we have any reason to believe that anything the Russians were doing here was effective?
I mean, if you're measuring their effectiveness relative to previous campaigns, I would say this was actually much more bang for their buck than what they did in 2016, and you could debate how much influence all of their troll farms and operations had on that election. But back then, their messages were not being parroted by some of the biggest figures in conservative media, which is what happened this time. Now, again, you could say those people are just victims. They had no idea the money was coming from Russia. It just happened that their views aligned on many subjects with those of the Kremlin. But if you're being realistic, you have to ask, well, how much other Russian money is propping up some of these conservative media figures? Where else might we be seeing these shell companies and shell operations funneling money to influential people along the partisan spectrum? And how many people may not even know where their money is coming from? Maybe it's coming from the Kremlin. What do you think? I mean, it's an important question, because in the United States, our country is incredibly polarized. And one thing we've learned about the Russians is that they love to sow division. They love to make Americans mad at each other. And so when you have a group of
conservative influencers who love trying to burn down the status quo and speak in really apocalyptic terms about the state of American society, if you're in the Kremlin, it's like, yeah, kick them a few hundred thousand dollars a month. Make them say that louder. Make them say it in more places. How far can we spread that video? So in a very real way, these YouTube influencers were doing the work of the Kremlin for them. And I think that's just a good
thing to keep in mind as you are encountering media over the next several weeks leading up to the election, always worth asking yourself, you know, who benefits from you believing that what you're hearing is true? Yeah. How much money would it take for you to do Kremlin propaganda? I'm not doing Kremlin propaganda. Everyone's got a price. No, absolutely.
Absolutely not. The Russian regime persecutes gay people, among many other human rights violations. So it's going to be a no for me, dog. Yeah. Well, I think they've actually been modifying their views on gay people. And I learned that on a recent trip to a Moscow supermarket that I want to tell you about. Oh, no, Kevin, no! No!
All right. Case closed. We're moving on to the next crime. What's the next crime? A disturbing story about a 3D-printed-gun promoter. This is a tale about a 26-year-old named John Elik, who goes by the online name Ivan the Troll. You have to wonder if he ever considered Ivan the Terrible. I mean, it was sitting right there.
But he helped design and spread instructions online for how to build a popular 3D-printed gun. This gun is called the FGC-9. You know what that stands for? No. It stands for, and this is not a joke...
Fuck gun control. And the 9 refers to the 9-millimeter rounds it fires. Very subtle. And it's being promoted with the explicit goal of arming as many people as possible. This is actually a very worrying story to me, because this is something that people who are worried about guns and the proliferation of guns in this country have been sounding the alarm about for a long time.
But it actually seems to be a problem. There was a very good story by Lizzie Dearden and Thomas Gibbons-Neff in The Times the other day about this gun, the FGC-9, which has been identified all over the world. It has appeared in the hands of paramilitaries in Northern Ireland, rebels in Myanmar, and neo-Nazis in Spain. It is basically the gun of choice for a lot of terrorists around the world. And unlike other guns, you can 3D print the FGC-9 at home, so it is very hard to control its spread. Yeah. And in the Times story, there is a video of what the FGC-9 looks like, and it looks like a Nerf gun. I mean, it really looks like a child's toy. But as you say, Kevin,
If you have access to a 3D printer and a lot of time, you can just make this at your house. So what is the crime?
Well, the craziest part of this story is that some of this might not be illegal. In the United States, there are many different state laws that regulate 3D-printed guns. But in Illinois, where Ivan the Troll lives, you are allowed to sell and possess homemade gun components if you are a firearm manufacturer. And, uh...
This guy is. So what do we make of the spread of 3D-printed guns here, Kevin? I mean, it is just so nuts to me that this is legal. If you can 3D print a gun in your house using a commercial 3D printer that you can buy on the internet and a plan that you can download for free, I actually do think that should be illegal. The Biden administration has actually proposed regulating homemade gun components as firearms, and I hope that this becomes illegal. There's something so sinister and dark
about a world in which anyone with an off-the-shelf 3D printer can make their own lethal weapons. Yeah, sometimes I feel such a cognitive disconnect: we've spent so much time in recent months talking about whether social media algorithms are harmful, and meanwhile, anyone can just print a gun at their house and, you know, walk up and shoot up a supermarket or something. So, yeah, a very disturbing story, but I'm glad that more attention is coming to just how widespread these guns now are. Yeah. It's also just such a good lesson in the
unintended consequences of new technology. I mean, I remember a decade or so ago, when 3D printers were first becoming popular and accessible, there were all these predictions that we'd all be 3D printing parts for our household appliances or toys for our kids. There were even startups trying to use 3D printers to make houses. And from the looks of things, guns turned out to be the use case that actually found product-market fit. I just think it's such a dark example of how a technology that is built to make things easier and better in the world can instead make things more deadly. All right, I have one more thing to say about this, which is that this is actually an area where I do think that content moderation could save lives. I think if you make a 3D printer...
and that printer has the ability to make a gun, I think you actually should put something in your system that says, hey, it looks like you're trying to make a gun here at home; we don't allow that. I don't know why the 3D printer manufacturers have not blocked this use of their devices, but I think that is a no-brainer. Yeah, and it suggests that, unfortunately, this might be a major reason that people are buying 3D printers, right? If manufacturers felt like they could easily block this without any blowback, maybe they already would have done it, because otherwise, what business would want to be known as the maker of the printer that made the gun that was responsible for some horrible tragedy? Yeah.
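Since we're on the subject, here is a minimal sketch of the kind of check Casey is describing, written in Python. Everything in it is hypothetical: real printers expose no standard hook like this, and the function names, the firmware handoff, and the idea of a shared blocklist of known gun-part model files are assumptions for illustration only.

```python
# Hypothetical sketch only: no real printer firmware exposes this hook,
# and the blocklist below is a placeholder, not a real database.
import hashlib

# Assumed: SHA-256 hashes of known firearm-component model files.
BLOCKED_MODEL_HASHES = {
    "0" * 64,  # placeholder entry, not a real hash
}

def is_blocked(model_bytes: bytes) -> bool:
    """Hash the uploaded model file and check it against the blocklist."""
    return hashlib.sha256(model_bytes).hexdigest() in BLOCKED_MODEL_HASHES

def start_print_job(model_bytes: bytes) -> None:
    """Refuse the job if the model matches a blocked file; otherwise print."""
    if is_blocked(model_bytes):
        raise PermissionError("It looks like you're trying to print a restricted part.")
    # Hand off to the (hypothetical) printer firmware here.
    print("Printing...")
```

The obvious weakness of a sketch like this is that exact-hash matching is defeated by changing a single byte of the model file, so a real system would need something closer to geometric similarity matching, which is one reason this is harder than it sounds.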
Well, we said the last case was closed, but this one, unfortunately, I have to say is still open. Yeah. All right, let's do one more, Kevin, and talk about our final case today, the Spotify swindler. Kevin,
According to authorities, a 52-year-old man in North Carolina manipulated music streaming sites and made $10 million in royalties from companies including Spotify, Apple Music, and Amazon Music. And he has the name Michael Smith, which I'm going to say, if I ever decided to commit a crime, I wish I had a name like Michael Smith because you'd never be able to find me on Google.
It's true. But this Michael Smith, according to the authorities, created a scheme that unfolded over seven years, and it really is a wild one. He partnered with the chief executive of an AI music company and a music promoter to create hundreds of thousands of AI-generated songs, which he then uploaded to the music streaming sites, because, of course, that's how these sites work: anybody can go in, create an account, and upload their songs. Then he made a bunch of fake streaming accounts using email addresses that he bought online, up to 10,000 accounts, the authorities say. And then he used software to stream those songs, Kevin, on a loop, playing them hundreds of thousands of times a day. And he used VPNs to make it look like the songs were streaming from different locations. And according to the indictment, over this period of years, the songs were streamed billions of times.
Kevin, what do you make of this idea for a crime? So I love this story because it's not only like a good caper involving AI and tech platforms and all the stuff we talk about, but it is also just like a kind of genius arbitrage and exploitation of the music streaming algorithms. And maybe we should just explain like how this kind of works, because I think it's not...
obvious to people why a scheme like this would work. Yeah. Matt Levine wrote a great column about this at Bloomberg. But basically, you pay Spotify, let's say, around $12 a month. Spotify pools the revenue from all the people who pay them $12 a month, and then it pays out about three-quarters of that pool to the people who have uploaded songs, divided up roughly in proportion to each rights holder's share of total streams. So that's all the major record labels, and then people like this Michael Smith who just showed up and did it themselves. And so Michael Smith's calculation was, well, I'm going to pay for a Spotify account, but if I stream my own music enough, I will more than recoup that and make a profit from all of my streams. So that was the scheme.
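To make that arithmetic concrete, here is a minimal sketch of the pro-rata payout math in Python. All of the numbers are illustrative assumptions, not Spotify's actual subscriber counts, payout rate, or formula.

```python
# Illustrative sketch of a pro-rata streaming-royalty pool.
# All figures are made up for the example; they are not Spotify's.

def royalty_payout(subscribers: int, monthly_fee: float, payout_share: float,
                   my_streams: int, total_streams: int) -> float:
    """The pool is a fixed share of subscription revenue, divided among
    rights holders in proportion to their share of total streams."""
    pool = subscribers * monthly_fee * payout_share
    return pool * (my_streams / total_streams)

# Hypothetical platform: 1M subscribers at $12/month, ~75% paid out.
# Bots stream my songs 600,000 times out of 1 billion total streams.
payout = royalty_payout(
    subscribers=1_000_000,
    monthly_fee=12.00,
    payout_share=0.75,
    my_streams=600_000,
    total_streams=1_000_000_000,
)
print(f"Monthly royalties: ${payout:,.2f}")  # -> Monthly royalties: $5,400.00
```

The point of the sketch is that your payout scales with your share of total streams, so every additional bot stream moves real money your way, while your only marginal cost is the handful of subscriptions doing the streaming.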
Yeah, I mean, it's an amazingly ingenious way to work the system of Spotify and these other platforms that pay out these fractional royalties. And it actually seems to have been quite effective: he allegedly made more than $10 million from streaming these AI-generated songs online.
But isn't this against the terms of service of the platforms? Like, how did they not catch this? To say the least, right? He was not the first person to realize that this arbitrage opportunity existed, but he does seem to have exploited it on an unprecedented scale. Now, Kevin, let me ask you, did you ever run across a Michael Smith song on Spotify? Well, there's Michael W. Smith, who I believe is a Christian music artist. That's correct. Who I think I have listened to before. Okay.
Well, this is not that Michael Smith. This one came up with some pretty amazing band names that appear to just be essentially random pairings of words. So some of his bands included Calm Baseball, Calm Knuckles, Calvin Man, Calvinistic Dust, and
Kamalis Dyson, and then a bunch of other things that are barely pronounceable. And he did the same thing with the names of the songs. The indictment highlights a lot of songs starting with the letter Z, like Zymoplastic, Zymopure, Zymotechnical, Zymotechni.
A bunch of nonsense, in other words, and stuff that frankly would have been really hard to find if you were searching for it. And I wonder if that was part of the idea here: make these things obscure on purpose. Right, because these songs were never meant to be listened to by actual people. They were just songs that Michael Smith's alleged bot accounts could go out and stream over and over again, racking up the streaming royalties. Yeah.
Now, you know, as we've just described, this Michael Smith went through a very elaborate scheme in order to make his $10 million. I have to wonder, if he had just put that same amount of effort into writing a good song, is it possible that he could have made another $10 million and kept himself out of prison? We'll never know. We really won't. So this to me does feel like a crime of the future, right? Because you have this kind of
platform that has sort of minimal oversight and moderation on it. You have this sort of royalty scheme that allows people to be paid based on the popularity of what they're making and putting on the platform. And I think, you know, this is the situation that we're in right now with a lot of social platforms. And I do think that there will be creators, fraudsters, spammers who just try to
sort of find those arbitrage opportunities, those times where you can, you know, pay $12 a month for an account and use that account to make $20 back for yourself. And that's just going to be an increasingly big part of how these platforms get gamed. Yeah.
And now, if you're wondering why this is a crime, the reason is because all of the royalties that Michael Smith was making, those should have been going to artists that had just made real music and were not, you know, setting up crazy schemes to stream their own songs hundreds of thousands of times a day. So that was the illegal part.
Yeah, it's interesting, because the crime here was not that Michael Smith allegedly violated the law by making all this fake music. It's not illegal to make AI-generated music and put it on Spotify. The crime, in the Department of Justice's eyes, was that by tainting the royalty pool that was shared with all these other artists, real artists making real music, he was essentially stealing from them, which I just think is an interesting legal wrinkle here. All right.
Well, case closed on you, Michael Smith. And case closed on this chapter of Hard Fork Crimes Division. I'm glad we solved some crimes today. Did we solve them? We certainly described them. Yes, and now we can actually classify this podcast as a true crime podcast and get the millions of downloads that will result. Love that.
This podcast is supported by ServiceNow. Here's the truth about AI: AI is only as powerful as the platform it's built into. ServiceNow puts AI to work for people across your business, removing friction and frustration for your employees, supercharging productivity for your developers, and providing intelligent tools for your service agents to make customers happier, all built into a single platform you can use right now. That's why the world works with ServiceNow. Visit servicenow.com slash AI for people.
Overwhelmed by your to-do list? Meet Claude, your new AI assistant from Anthropic. Whether you're crafting the perfect email, planning a family vacation, or tackling a home improvement project, Claude is your go-to collaborator. Need to whip up a quick meal plan? Claude's got recipes. Struggling with the spreadsheet formula? Claude's your Excel guru. From creative writing to data analysis, Claude brings expert-level insights to your daily challenges. It's like having a brilliant friend on speed dial ready to help 24-7. Join thousands already simplifying their lives with Claude.
Curious how AI can transform your day? Discover the Claude Advantage at anthropic.com slash Claude. This episode of Hard Fork was produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. We're fact-checked by Caitlin Love.
Today's show was engineered by Daniel Ramirez. Original music by Elisheba Ittoop, Diane Wong, Rowen Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. You can watch this full episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork at nytimes.com.
Have you been the victim of a tech crime? Have you committed a tech crime? Let us know.