
Why Casey Left Substack + Elon’s Drug Use + A.I. Antibiotic Discovery

2024/1/12

Hard Fork

People

Casey
Felix Wong
Kirsten Grind
Host
Topics
Casey: Because of Substack's handling of Nazi-related content, Casey decided to move his newsletter, Platformer, off Substack to a new platform. He believes Substack's recommendation algorithms help Nazi content spread and make money, which conflicts with his values. In his view, once a platform recommends content and uses algorithms to filter and rank it, it takes on greater responsibility for content moderation.

Host: Discusses Casey's decision to leave Substack and the broader debate over content moderation on internet platforms.

Kirsten Grind: Reported that Elon Musk has used drugs including LSD, cocaine, ecstasy, and psychedelic mushrooms, which has worried leaders at his companies. She explains the rigor of the reporting and its sourcing, and analyzes the potential consequences of Musk's drug use, including his erratic behavior and his boards' concerns.

Felix Wong: Describes how his team used AI to discover a new antibiotic that can fight MRSA. He explains the role of AI in drug discovery, how they used an AlphaGo-like algorithm to interpret the AI model's predictions and identify chemical substructures associated with antibacterial activity, and how they validated the compounds in mouse models.

Casey: Substack's permissive content moderation policy resembles the "Nazi bar problem": tolerating extremist content can let extremists take over a platform. He argues platforms should not algorithmically recommend and amplify Nazi content. While he believes Nazis should be allowed to have Gmail accounts, he thinks platforms have a responsibility to stop Nazi content from spreading and being monetized.

Host: Discusses what counts as "Nazi content," and Substack's response to the material Casey flagged.

Kirsten Grind: Explains why Musk's drug use matters: he runs several companies, including ones with large government contracts. Even if his companies are performing well, his drug use warrants scrutiny.

Felix Wong: Explains that AI is currently used mainly to narrow the search space in drug discovery; final testing of candidate drugs still relies on traditional methods.


Indeed believes that better work begins with better hiring. So working at the forefront of AI technology and machine learning, Indeed continues to innovate with its matching and hiring platform to help employers find the people with the skills they need faster. True to Indeed's mission to make hiring simpler, faster, and more human, these efforts allow hiring managers to spend less time searching and more time doing what they do best, making real human connections with great new potential hires. Learn more at indeed.com slash hire.

You know, it's like every man of a certain age, all he wants to do is just sit on the couch and watch the History Channel and read about how the Americans fought World War II. And like watching my own dad do this growing up, I thought, oh, that's like, that's so nice. You know, it's how nice that that story ended so positively. And like you fast forward to 2024, it's like, is World War II over? No, we're still fighting it out in the comments section. Yep.

It's a drug show. Ha ha ha!

So, Casey, on this show, we sometimes answer hard questions from our listeners about ethical dilemmas that they are dealing with in their lives. And this week, you have actually been dealing with your own very hard question. And it's about Substack and Nazis and the future of Platformer, your newsletter business.

And so I want to talk about that this week, and I want to just make clear that we're not just talking about this because this is a thing that has happened to you, but I think it really is sort of a microcosm for some of these larger debates that we cover on this show about free speech and content moderation and the role of internet platforms in policing the public square. Well, I want to talk about Elon Musk's drug use, Kevin, but I'm open to any questions you have about me. Okay. Okay.

And I think we should just also acknowledge that this is going to feel a little weird because even though we are both journalists who have covered content moderation by big tech platforms and weighed in on various controversies involving people like Alex Jones or whatever...

This is an instance in which you are actually directly involved in the controversy, not only because you are covering it and have become sort of part of the news story, but because you run a business on Substack and are sort of directly financially involved in this story. Yes, that's very true. And listeners should just sort of like keep that in mind as we are talking about this. This is a weird case where we're actually talking about my business. I think

If you sort of bracket out all of the business implications for me, though, there is still a really important story to be told about how the modern internet should work and what people should be allowed to say there. Yeah, yeah. And it goes to one of the questions that we have asked on this show before, which is like, when do you know that it is time to leave a platform? And how do you draw your own personal line for what is and isn't acceptable on the internet? Absolutely. So let's just talk about

the nuts and bolts of what has happened. So over the last few weeks, Substack has been fielding lots of criticism about its content moderation policies and specifically how it treats pro-Nazi content.

And we'll get into what we mean by pro-Nazi content in a minute, but this is sort of something that has flared up for them in the past and that flared up again recently and came to a head just the other day when Substack announced that it would take down some newsletters that promoted Nazi ideas and ideology, but wouldn't make changes to its broader content moderation policy, which it has described as decentralized and hands-off.

And Casey, you as a Substack partner, I guess, you publish on Substack and have since you started your newsletter, you have ended up in the middle of this story. So I think we should start with just like what is your news? What do you have to announce today? Yeah. So I've decided this week that Platformer is going to move off of Substack. So by next week, we will have a new website and will no longer be part of that network. Yeah. Yeah.

Can you just sort of tell the story of your relationship with Substack, maybe starting from when it started, when you started Platformer? Yeah. So, you know, Substack has been around since 2017. And it was actually around the time that Substack started that I started to write another newsletter for The Verge on another platform because Substack didn't exist yet.

But in 2020, I left to start Platformer, my own email newsletter. And Substack was the best tool to do that at the time. They made it very, very simple to do so. It was very fast. And so since October 2020, I've been there. If you're not familiar with Substack,

the basic idea is that while anyone can set up a free newsletter and send it out to as many people as you can get to subscribe to you, if you want to build a business, you just connect your Substack to a Stripe account, and then you can sell subscriptions. So in my case, people pay $10 a month or $100 a year, and in exchange for that, you get three newsletters every week.

So that's how the business works. And for some number of people, it has been amazing. It's built these incredible businesses. And I think beyond that, Substack has also created a really large cultural footprint, right? It's not just journalists like me who are on there. There are a lot of artists, like the novelist George Saunders is on there. Some of my favorite like

cooking writers are on the platform. Some of my favorite musicians like Patti Smith are on the platform. So, you know, at a time when the media industry is contracting and it often feels really scary and bad, Substack has been this real bright spot where if you go there, chances are you'll find something there that is like really cool, that's really well suited to your interests. Yeah, I'd say it's been one of the biggest changes in the media ecosystem over the past few years. It's just this sort of

transition where a lot of journalists and writers and creators of all sorts have decided to sort of hang out their own shingle, set up a Substack and start charging people directly for it rather than sort of joining some larger media company. Yeah, that's right. And my understanding is that Substack takes like a 10% cut of everything that you and other Substack creators who have paid newsletters charge customers on the platform. That's right. So

I think it's fair to say that in addition to musicians and cooking writers and journalists, Substack has also become home to sort of this like alternative media network. These people who are sort of dissatisfied or disgruntled with sort of mainstream media, people like Glenn Greenwald and Matt

Taibbi and Bari Weiss, who are sort of like these dissenters from sort of media orthodoxy and sort of more right-wing folks, have set up on Substack. And for them, it sort of seemed like this was the way to sort of avoid censorship, right? If you were on Substack rather than working at a big media institution, no one could tell you what to publish and not publish. And for them, that was part of the appeal.

Yeah, and this was something that the founders of Substack really touted when people would ask them about it. You know, they very much leaned into the idea that there was too much orthodoxy in the mainstream media and that Substack would be a place where people could come and say just about anything. And that Substack was always going to take a really laissez-faire approach to moderating that stuff. Yeah, and it always –

seemed like that was a premonition to me. Like when Substack's executives in the early days started coming out and saying like, we're not going to ban basically anything. Because my understanding is like they have a content moderation policy, but it's very, as you said, laissez-faire. It's very permissive when it comes to what they will and won't host on their platform.

And for me, that was always it reminded me of the so-called Nazi bar problem, which Mike Masnick, the tech blogger, has written a lot about. And it's basically this sort of perennial thorny issue that online platforms face when they're tasked with dealing with Nazis or people with other hateful speech. And the Nazi bar problem is sort of this maybe apocryphal story about

like a bar owner who, you know, sees a guy come in wearing like Nazi regalia and just says like, no, you got to get out. You're out. No questions asked. And someone else at the bar is like, why'd you do that? The guy was just trying to have a drink. Why would you kick him out? And he basically explains, look, it starts with one Nazi and then that Nazi gets allowed to have a drink at the bar. And then he brings his friends back, and

pretty soon you're a Nazi bar. And they are so entrenched and established that it becomes very hard to get rid of them. And the point is that you shouldn't be allowed to run... Like, this is America. You can have a Nazi bar. It's just...

Just you shouldn't be confused about what you are if you're letting in Nazis. Yeah, and, you know, I appreciate that analogy. I think it has its limits in this case because on the Internet, there just are Nazis in most places. They will show up on any platform. And I think just because there are three or four of them doesn't mean that you're running a Nazi bar. It just means that you have a place on the Internet, you know. At the same time, this did eventually snowball for reasons I'm sure we'll get into. Yeah, so let's talk about that. So when did...

the permissive content moderation policies of Substack become an issue for you? Well, so in November, a journalist named Jonathan M. Katz, who was also my college classmate, hi, Jonathan, he wrote a story for The Atlantic saying Substack has a Nazi problem. And he went through, he said he'd identified 16 cases where he felt like there were Nazis on the platform and suggested Substack ought to do something about it.

Substack was pretty quiet during that period. But then a group of 247 Substack writers sent this open letter asking Substack, are you planning to do anything about this? Does this content violate your policies? And then on December 21st, Hamish McKenzie, who's one of the co-founders of Substack, wrote a blog post in which he said that at Substack, they don't like Nazis. But given that they exist, Substack did not believe that censorship was the best approach

And it did not believe that demonetizing them, you know, preventing them from selling subscriptions was the best approach. And so Substack said if, you know, anyone could be found to be directly inciting violence, they would be removed, but nothing else would be removed.

I read that as a statement essentially declaring that Nazis were welcome to sell subscriptions on Substack. And that's when I thought, okay, I actually have a problem now. Now, I want to talk more about your decision. But first, can we clarify what we mean by Nazi content? Because that word, I think, gets thrown around a lot and has been used to mean lots of different things. So what was the content that Jonathan M. Katz

identified that you took issue with? Well, so this was the exact question that I had because the Atlantic article doesn't actually link to any of the Nazi blogs for good reasons. You don't want to give undue amplification to extremist material. But I thought, well, if I'm going to have to make a decision about my business, then the first thing I need to do as a reporter is to examine the problem myself, right?

So I reached out to a number of journalists and researchers and asked them to share with me what they viewed was the worst of the worst content on Substack. And I wound up with about 40 different publications that had been flagged to me. And together with my colleagues at Platformer, Zoe Schiffer and Lindsay Chu, we spent a few days just going through those.

And what I was looking for when I was planning to try to flag some things for Substack were just literal 1930s Nazis. I was looking for people who were praising Hitler, who were using Nazi iconography like the swastika, who were talking about like the virtues of German national socialism.

and I decided to myself that if I found any of that, I would send it to Substack, and that's all I was going to send them, because they've made it very clear that they're not going to do anything about right-wing extremism generally, but I do want to know what they have to say about the literal Nazis. Right, the sort of clear-cut, like, you are wearing a swastika or you are declaring your affinity for Adolf Hitler. Like, that's sort of the bar that you were looking for. Yeah, that's right. And what did you find? Yeah, so of all of those, we found...

six things that had not yet been removed by the time that we submitted them that we thought just sort of clearly met the definition of pro-Nazi content. We submitted this to Substack, and we're just waiting. And at this point, there was a little bit of drama, Kevin, which is I had never intended the, like, scope of what I had sent them to become public. I had also not intended it to be perceived as, like, my comprehensive review of...

bad material on Substack. It was really sort of sent to them as like an inquiry of, well, here are some literal Nazis. Will you remove them? That's what I was waiting for. And then I see that in this publication on Substack called The Public, Substack has leaked to them that I had only sent them six things and that those things did not have a lot of readers and were not making a lot of money. And

And so before Substack had even completed its review of my inquiry, they were going to a friendly publication, because this is a very pro-free speech publication that they went to, and they essentially said, this is a tempest in a teapot. Those were not their words, but the reason that you go and you tell a publication like that, look,

this whole thing is only about six websites. That's why you do that. Now, of course, from my view, it was not only about six websites. That was just the very worst stuff that I could find. So that's kind of how that happened. But at the end of the day, of the six I sent, Substack says five of these do violate our policies and we will remove them. Right, so...

They are sort of conceding that you were right about these five, but they still sort of clung to this idea of, like, we are a platform for free speech. So tell me what their side of the story is. How would they describe the events of the past few weeks? I think, well, you know...

My curiosity was, is it actually against your policy to praise Hitler and post swastikas? Okay. And my hope was that they would come in and say affirmatively, yes, that violates our policies and we will remove this stuff.

Which, by the way, is like a move that every other platform that I've ever covered has gladly done. Yes. It's not a hard thing to do to say literal Nazis are not allowed on our platform. You're absolutely right. Like, again, why was this a problem for me? I'm not aware of any other platform in the United States that says we are a welcome place for Nazis to monetize. Like, this was an extremely unusual position for anyone to take. So I was just hoping that Substack would come along and say what all the other platforms say.

And they kind of didn't. They said that, essentially, thank you for raising these things. We removed five of the six. And if other people flag similar content to us, we will review it on a case-by-case basis. That's what they said, which was a very long way of saying, we're not changing our policies. And I think it can be fairly interpreted to say there will be some

content that is widely viewed as praising Nazis that Substack, for whatever reason, is not going to remove. They did not offer me any kind of additional clarity on that point. And so that was kind of a disappointing thing. So I write this news story that's sort of like they've taken down some material. I didn't want to get into exactly how many blogs. This sort of came back to bite me because eventually the whole thing did get out. And so now there are sort of like two sides who are mad at me.

You have the sort of free speech brigade that's like, you're making a mountain out of a molehill. Oh, you found five Nazi blogs, what's the big deal? Right. And then on the sort of more liberal side that wants to see stronger action, they say, wait, after this whole thing, Substack is only taking down five Nazi blogs? They didn't even take down the sixth Nazi blog. They're not even saying they're going to change their policy.

And so the situation just became like even more polarized. And again, like this whole thing for me did not start as like I'm going to have the last word about the quality of content on Substack. It started in the spirit of inquiry of like, well, if I find some Nazi blogs, will this company take them down? Because to me, that's the first

step to decide whether to do anything else. So I just want to say that because it's really unfortunate to me that there has been so much focus on the specific number of Nazi blogs I reported when, again, we found dozens of blogs with some like really disturbing material that it's now clear will just be up there forever. Yeah, I was looking through some of these examples of, you know, what Jonathan Katz meant when he said Nazi content on Substack.

And it's stuff like Patrick Casey, who's the leader of a defunct neo-Nazi group who has also been banned from other social networks. He is making money on Substack and has been also using Substack's recommendation tools to recommend other publications that Katz described as white nationalist and extremist. Richard Spencer, the white nationalist who was infamously involved with organizing the Unite the Right rally in Charlottesville in 2017.

He's sort of the most prominent white supremacist in America. He has a Substack that he charges money for, presumably without violating their moderation rules. He's also been kicked off other social networks. But let me speak to him specifically. So I did not submit that one to Substack. And the reason is that if you go to his Substack, it is not a bunch of like Nazi iconography and praise Hitler. It's like much more insidious than that, right?

but Substack doesn't have a policy for off-platform behavior that says, well, if you did some really awful stuff in the real world, we're going to kick you off. So I didn't even bother submitting that to them. I could understand why people would be mad that he was there, but again, to me, that was going to be many levels beyond what we might expect Substack to do. Right. So now that we've walked through kind of what happened, I want to talk about your position. So in your view, what is this decision that you've made fundamentally about? I think...

I do not want to have some kind of like platform purity test that I subject any potential vendor to, right? I think an interesting thought experiment to do, Kevin, would be like, do you think like a Nazi, again, like a literal, like somebody who, I don't know, a surviving 1930s Nazi, can they have a Gmail account even if they occasionally email other Nazis and say like, you know, anti-Semitic things? My basic feeling, as painful as it is, is like, yes, a Nazi should probably be allowed to have a Gmail account, okay? I'm not going to call on Google to get rid of all their accounts, okay?

Should a Nazi be able to like have an email newsletter that they sell things for? Well, you know, if it's if they're using like open source software and they find some sort of web provider that will accept them, I think the answer is like basically yes. Right. If we're going to have an Internet that is open to all, then yes, Nazis should be able to send out an email newsletter. OK, that's like that's just something that's going to happen.

I think where it starts to get more complicated is when you have built recommendation algorithms and other tools that surface this content to other people who are not looking for it and help these folks build audiences. And it might be helpful to talk about how Substack has evolved over the past couple of years, right?

Because when it started, it was just kind of dumb email infrastructure. You sign up for your account, you start sending out your emails, and your Nazi emails never come anywhere near what I'm doing at Platformer, right? It's your own thing. You're just using their infrastructure. At that point, I probably don't make a fuss about it because, again, this is just kind of the cost of doing business on the internet.

But then Substack starts to do a few things. Like they'll start sending you a personalized digest based on the stuff that you're reading that says you might want to read these other publications, right? They build this social network called Notes where anyone could publish anything to it. It looks a lot like Twitter. And so if you're a Nazi and you want to get some attention, you can just start putting stuff right in that feed. And if I don't block it, I might see it.

And so now there's a chance that my posts and platformer are showing right up next to Nazi posts. Well, that doesn't feel good. But also because these things are getting all this algorithmic amplification, it means that their audiences can grow. It means they can make a lot more money than they might otherwise. And all of a sudden, the platform is in position of being a sort of unwitting assistant fundraiser and growth hacker to people who I believe are very dangerous.

So to me, that was the kind of threshold that this crossed where I thought I have to have an opinion about this. Right. So this is a more nuanced argument than I think some people like to portray it as, which is like there's this free speech brigade that wants like, you know, every social platform and website to be open to any kind of speech that's

no matter how offensive or potentially harmful. And then there are these kind of internet hall monitors who walk around websites like flagging stuff that they find objectionable and saying like, you have to take this down. Like what you're saying is there actually are some layers of the internet that maybe shouldn't be censored, but that when you start moving into more recommending content and

using algorithms to filter and rank content for people, showing content to people who maybe didn't go looking for it. That's where you start to take on more responsibility for moderating what's on your service. Is that what I'm hearing you say? That's right, Kevin. And this is a story we know so well. What was your last podcast about? Rabbit Hole.

It's about this exact same phenomenon on YouTube, right? It's not about Nazis and direct monetization, but it is about people who are discovering stuff via an algorithm on YouTube that is potentially drawing them into an ideology that could radicalize them and lead to some kind of harm, right? When I think about the past decade on the internet, I think about some of the harmful characters who have appeared, folks like Alex Jones, folks like the QAnon movement,

When these started, these were just individual posts on webpages, posts here on a social network. But they were able to harness the power of those recommendation algorithms to grow large audiences. And in the case of Alex Jones, really enact real harm against real people in ways that platforms just kind of took too long to catch up to.

So as I sat with this problem, I just kept thinking, I know what happens next. I know what happens when you build this plumbing into your little infrastructure company. You help things grow that might not otherwise grow.

And if you're someone like me and you don't want that to happen, you know, my only real alternative is to just sit back and wait for it to happen and then say, well, now that it's happened, I can leave. You know, and that just didn't feel like a very satisfying solution. Yeah, I mean, it's been very bizarre to watch this because as you mentioned, like I did spend a lot of time reporting on extremism on YouTube and other social platforms as you have.

And with that experience with Rabbit Hole, like when we went to YouTube and said, hey, look, there are all these, you know, Nazis and right-wing extremists and people with

really dangerous and harmful ideas who are like getting millions of views on your platform, their first instinct was not to defend the extremists, right? It was to say, okay, let's see what we can do about our recommendations. They actually changed a bunch of their recommendation algorithms so that those, what they called borderline content would not get as many views. They banned the literal Nazis and kicked a bunch of prominent white supremacists off their platform and

And they took it really seriously. They seemed to have kind of the response that you would hope that every platform would have. But in this case, Substack sort of stood their ground. So what is your interpretation of why they are sort of willing to defend this

this outrageous and dangerous speech? I think the truth, Kevin, is that for them, this is a disagreement about principles first and foremost. You know, I should say, there's a business dimension to this. Both of the other alternatives I considered for places to move Platformer will be much cheaper than Substack because they are a sort of fee-for-service platform instead of a rev-share platform. They don't take 10%. No, exactly.
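As an aside, the rev-share versus fee-for-service tradeoff Casey mentions is easy to sketch in numbers. The subscriber count and the flat annual fee below are hypothetical figures for illustration, not numbers from the episode; only the $100/year price and the 10% cut come from the conversation.

```python
# Back-of-the-envelope comparison of a 10% revenue-share platform
# (Substack's model, per the episode) versus a flat fee-for-service host.
# Subscriber count and flat fee are made-up numbers for illustration.

def rev_share_net(subscribers: int, annual_price: float, share: float = 0.10) -> float:
    """Creator's annual take-home when the platform keeps a percentage of revenue."""
    return subscribers * annual_price * (1 - share)

def flat_fee_net(subscribers: int, annual_price: float, annual_fee: float) -> float:
    """Creator's annual take-home when the platform charges a fixed fee instead."""
    return subscribers * annual_price - annual_fee

# With 1,000 subscribers at $100/year, the 10% share costs the creator
# $10,000/year, while a hypothetical $1,200/year flat-fee host costs $1,200.
print(rev_share_net(1000, 100))       # 90000.0
print(flat_fee_net(1000, 100, 1200))  # 98800
```

The gap widens linearly with revenue, which is why a rev-share platform gets expensive for large newsletters even before any disagreement over policy.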

So there's kind of that element to it, right? And then for Substack as a business, I tried to make the point to the founders, like, you know, if you're going to polarize your customer base in this way, you're going to wind up with a much smaller business over time, right? Like if you can't figure out a way to sort of invite all of us into feeling good on this platform, it's going to be a problem for you. And I was just struck

by the degree to which they actually didn't want to talk about the business. They want to talk about the principle of speech. They wanted to talk about the risk of quashing dissent. I got the sense that they feel like, you know, some of these real sort of fringe thinkers, they might be wrong almost every time, but every once in a while, there is some valuable idea in there

And to them, the idea of eliminating that kind of thinking was essentially like their number one priority was just like making sure that that didn't happen. So they really want to be a home for the absolute maximum amount of speech.

And that's how it was communicated to me. Now, my criticism of that would be, while I do believe that the founders are sincere, it is also true that that is the absolute cheapest way to run a platform. When you say almost anything goes, that means you don't have to hire content moderators. It means you don't have to hire policy people. It means you don't have to do these tedious ongoing reviews of what is on the platform and whether your policies should change. Companies like Meta spend...

conservatively hundreds of millions of dollars on this stuff, right? And Substack is relatively small, and I can imagine why it wouldn't want to do that. But at the end of the day, I will give the founders credit for the fact that when we had these discussions, they wanted to talk about them on the principles. Yeah, and did Substack actually give you a comment about their decision to take down these five Nazi blogs? Yes.

So I will read most of the statement that Substack sent to me. They said,

Okay.

So that's Substack's position, and we've now heard your position. I want to just raise a few potential objections to this. If people are thinking, well, Casey seems to be doing this rashly or blowing things out of proportion, I want to just give you the chance to respond. So I'll put on my free speech warrior hat and just run through some of these. So objection one would be the sort of Substack argument, right? That

Censoring content on the internet, it doesn't make extremists go away. We know this because platforms have been demonetizing and deplatforming extremist content for years now, and these people have not gone away. In fact, some people would argue that this kind of censorship actually makes

extremist views worse. It gives them a cause that they can claim they are being martyred for and sort of rally support that way, the way that people like Alex Jones have for years. So what's your response to that? Sure. So that may be true, but getting rid of extremism was not the job that I gave Substack. I'm not asking Substack make racism or extremism go away. I'm asking them for a place where I can run my business and not have my posts appear next to Nazis.

right? Because that's not good for my business. You know, dozens of people have canceled their paid subscriptions to Platformer, and many of them said, hey, I really like what you do, but I can't justify, like, giving you money when I know it's going to build Nazi monetization infrastructure. So it is absolutely the case that there is a demand side for extremism, and that has to be solved at a society level. But platforms can also do their part to not help those movements grow and make money. Right. I wonder what you would say to this other

objection that Substack sort of raised. They pointed out that the number of subscribers that the Substacks or pro-Nazi Substacks had was tiny. It was like, you know, these were not big, thriving publications. These were publications with a handful of subscribers. None of them had made any money. And so basically this is a tempest in a teapot.

Yeah, and here is a case where I think everyone is just going to have to decide for themselves how big they think the problem has to be before they take action. I think some people will decide it's going to need to be a lot worse than this before it rises to the level of my attention. For me, I just thought, I have seen this movie before. I know where this is going. And if this is eventually going to...

lead me to have to leave the platform. I would rather just do it now and move on with my life. Got it. Then there's the objection that I actually raised with you when you told me last week that you were considering moving off Substack. I think my response was, well, aren't there Nazis on every platform? Like, you're on Instagram, you're on Threads, you're on Facebook, you used to be on X,

You use YouTube. All these platforms, if you looked hard enough, would have some number of Nazis on them using them to spread their message and potentially even to make money. So basically, there are no pure platforms.

Absolutely true. And I think what I would say is that the platforms that I'm on and I'm spending time on, while it's true that there are bad things on there, for the most part, these platforms at least have policies against it. When stuff is flagged to them, they do remove it. And they don't have to be dragged kicking and screaming into doing that. They don't ask their own user base to be volunteer content moderators for them all the time.

So to me, that was kind of the bare minimum that I was looking for is like, well, is there at least an affirmative policy that Nazis are banned here? And then maybe we can figure something out. I would also say that it's different when you're running a business on the platform, right? Because I am not just having to act on my own principles here. I have employees who have opinions, and we were sort of aligned on this. We had to talk it through together. And I have customers who are very principled. And I should say, because I write a lot about

content moderation, a lot of my customers work in trust and safety and content moderation. And they have heard the arguments that Substack is making before, like potentially at their own platforms when their own platforms were younger and more naive. And this stuff just doesn't fly with them, okay? Like, they just do not accept the arguments that are being given to them. So, you know, I am in the unusual position of having a very savvy audience that is very sensitive to this subject. And that just made me have to take more seriously than like, can I have an Instagram account? Right.

I want to bring up this last objection that I've heard, which was made by, among other people, Ben Thompson, who writes the Stratechery newsletter. It seemed like he basically had two qualms with what you've done here. One of them was sort of this, you know, slippery slope argument about once you start arguing that platforms should take down Nazi content, if they do that, then you sort of start asking them to take down other sorts of objectionable content, maybe stuff that's, like,

opposed to vaccines or questioning the origins of the coronavirus pandemic, things that are just sort of controversial and not literal Nazis and that there's sort of a slippery slope effect there. But he also raised the objection that you were essentially aiming your guns at the wrong part of the stack, as it were. And in particular, he singled out this line in a story that you wrote where you said that you were going to be contacting Stripe

about this Substack content moderation issue. Now, Stripe is a payments processing company. And so when people sign up to subscribe to a Substack publication, Stripe is the company that actually takes their credit card information and charges that credit card. And they also have content moderation guidelines for the types of payments that they will process. And so basically, Ben Thompson said, by going to Stripe, you were essentially...

Escalating this beyond a level of reasonable disagreement. Yeah, I think it is a fair criticism. I do think that if Substack had said, like, yes, we are going to affirmatively say that Nazis are banned on this platform and we will proactively remove them, then absolutely the next week there would be calls to do sort of like the next level

up. Fortunately for me, it never got that far because they never made the affirmative argument that Nazis were banned and I could just sort of, you know, walk away and not have to wonder about that anymore. The thing about the slippery slope argument, Kevin, is that it presupposes that if we just drew one hard line, we could stop talking about the boundaries of speech forever. That's not how society works. We are constantly renegotiating the boundaries for speech of social norms, of mores. These change all the time. That's what society is. It is an ongoing

conversation about how to be. So the idea that you could just sort of write one rule and keep it forever is a libertarian fantasy. Now, on the Stripe side of it all, I will admit that was me being a little edgy. But like, here's the thing. I also approached Stripe in a spirit of journalistic inquiry. And the inquiry was this. Stripe has a policy that says that you're not allowed to use their services to like fundraise for violent causes. And

Nazism was one of the most famous violent causes of all time.

And so I thought it was worth sending them an email to say, hey, one of your customers is saying that Nazis are free to set up shop and monetize here. Is that consistent with your policies? I sent that email. I did not get a response. So if Stripe had said yes, that's fine. I would not have led a parade down Main Street like calling for the end of Stripe. But I did think it was worth sending an email just to ask if it was true. Yeah. And I imagine that some people will hear about your decision to leave Substack and say, well,

What more does Casey want? Like they took down the Nazi blogs that he flagged to them. What's the problem here? Sure. So what I was looking for was a couple of things, you know, one was just to say like, you know, Nazis are not allowed. We will proactively monitor for this content. Here's how we're going to define what we view as Nazis. Right.

And then I also wanted them to look at that recommendations infrastructure because, again, that's really the difference here. We will be on a new platform next week at Platformer, and there will probably be Nazis who are using that infrastructure to send emails. The difference is going to be it is not attached to a social network that was built by our provider, right? There will not be these digest emails recommending the Nazi blogs along with mine, right?

And so if Substack wants to get their hands around that, they would need to come in and they would say that certain publications are eligible for promotion and recommendation and others are not. YouTube did this. Meta has done this. Again, a lot of this is just very standard stuff that happens at every other platform that I write about. It is Substack that is the outlier here. So that's what I wanted to see and it just became very clear to me over the past couple weeks that nothing like that is coming. Yeah. Casey, I want to say something sincere to you, which I know is terrifying. Okay.

I'm really proud of you for making this stand. People can disagree about the finer points of like online content moderation and what should and shouldn't be allowed. But at the end of the day, like this is a judgment call that you made. And it just comes down to like, who do you want to do business with? What kinds of

businesses do you want to enrich with your labor? And given that we're in this sort of age of rising anti-Semitism and increased polarization of all kinds, like, do you really want to be giving 10% of your revenue to a company that will not say, like, we don't want to do business with literal, actual 1930s Nazis? And

I think about decisions a lot through the framework of when I'm old and I'm explaining to my grandkids decisions that I made earlier in my life. Will I be proud of having made the decision that I made or will I be ashamed of the decision that I made and wish that I could redo it? And whatever happens with Platformer on your new provider, I just think that this is going to be a decision that you feel good about. And so...

I'm proud to be your friend and your co-host. And I think we should also just declare once and for all that the Hard Fork podcast is anti-Nazi. This is a Nazi-free zone, and there is really no wiggle room here, okay? So you can stop sending us your pitches, Nazis, because you're not coming here. When we come back, Elon Musk is officially on drugs. We'll talk to the reporter who nailed that story down right after the break.


Indeed believes that better work begins with better hiring. So working at the forefront of AI technology and machine learning, Indeed continues to innovate with its matching and hiring platform to help employers find the people with the skills they need faster. True to Indeed's mission to make hiring simpler, faster, and more human, these efforts allow hiring managers to spend less time searching and more time doing what they do best, making real human connections with great new potential hires. Learn more at indeed.com slash hire.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

Casey, let's talk about drugs. Let's talk about drugs. Now, we have gotten at least one note from a listener who says, you guys seem to talk about drugs a lot on the show. You make a lot of jokes about magic mushrooms in particular. What's going on with that? I mean, I can only say that it is relevant to understanding Silicon Valley. I guess would be my answer to that. It's true. We are tech reporters. We cover an industry and a culture out here in San Francisco, in the Bay Area, and in

And, you know, drugs are a part of that culture. So we are going to talk about drugs in this segment. If you are a parent who doesn't want your kid to listen to something like that, you know, this one might not be for you. Yeah. So...

This week, the Wall Street Journal reported that Elon Musk, the world's wealthiest person, has, quote, used LSD, cocaine, ecstasy, and psychedelic mushrooms often at private parties around the world where attendees sign non-disclosure agreements or give up their phones to enter. This was a very juicy story that ran the other day, and it included some actual on-the-record

details about Elon Musk's drug use over the years. The Journal reported that in 2018, Elon Musk took multiple tabs of acid at a party he hosted in Los Angeles. The story also reported that Elon Musk had taken magic mushrooms at an event in Mexico and took ketamine recreationally in 2021 with his brother in Miami at a house party. And that

You know, this has not gone down well with members of his orbit who are increasingly worried about his erratic and potentially drug-fueled behavior. That's right. And what I would say is what made this story interesting to us, Kevin, is not that Elon Musk has been spotted doing drugs a handful of times over the years. It is rather that people

close to him seem both very concerned about his behavior in some cases, and were willing to talk about it on the record with the Wall Street Journal, which I just think suggests that this has become a serious issue and given Musk's power and influence in the world is really worth trying to understand. Yeah. So when we talk about drugs in this conversation, we are not really talking about

all the classes of illegal drugs. We're talking specifically about the ones that are popular with people in the tech industry, things like psychedelics, things like MDMA, things like ketamine. These are the drugs that Elon Musk has been spotted using, according to the journal, and that I would say are some of the most popular drugs out here in the Bay Area among people who work in the tech industry.

Yeah, I think that's right. So today to talk about Elon Musk's drug use and this kind of wider phenomenon of drug use in Silicon Valley, we've invited Kirsten Grind. Kirsten is an enterprise reporter at The Wall Street Journal. She reports on tech companies and their executives. And she was one of the people, along with Emily Glazer, her colleague, who broke this story about Elon Musk's drug use. She's written a lot about this topic over the years. And I'm really excited to talk to her about her reporting. I am too. Let's bring her in. ♪

Kirsten Grind, welcome to Hard Fork. Thanks so much for having me. Hey, Kirsten. So you've been reporting on drug use in Silicon Valley for a while now. And I want to ask you about some of the stories you've reported on.

But I want to zoom in on this recent story about Elon Musk and his drug use, which I would say has been something that a lot of reporters have been kind of gossiping about, you know, off the record at the bar for many months now. Yes.

I want to say, Kirsten, I was so jealous of this story because I have heard so many whispers about this stuff. And I have tried to get some of this stuff on the record and absolutely failed. So when I saw your story come out, it was appointment television for me. I dropped everything and inhaled it all in one gulp. Oh, thank you, guys. That's so kind. So very impressive story. What...

What was the start of this? When did you get interested in this particular story? Yeah, so at the Journal, I'm an enterprise reporter. So what that basically means for non-journalists is I jump around from topic to topic. And so, as you pointed out, for some reason, the last few years, I've kind of been on this whole billionaire tech drug scene

beat, I guess, which is really funny because I'm the most boring person ever actually. But okay, so now I'm an expert in like ketamine and cocaine and all of this.

And, you know, we had done some stories on Elon. You know, we reported last year on his Texas plans and some of this other stuff. But we had been hearing, same as you guys, about this for a long time. And so we just wanted to definitively know, is this man using drugs? I think it's a very important question for someone of his age,

his stature and his power and, you know, his companies with billions of dollars in government contracts. So we just kind of went down that road. And we'll get into all of the implications of it, but maybe just to start with, can you give us an overview of Elon Musk's drug use? So he is...

Yeah.

Cocaine, ecstasy, LSD. And a lot of it, one of the reasons why it's been kind of kept under wraps in a way is because a lot of this is happening at these private high-end parties where you often have to sign an NDA. A lot of the people I spoke to, the parties were in different countries. It's not just like going out here in the valley. And so it's been...

pretty, you know, we have examples going back years of this. And I would talk about my time at these parties, but I did unfortunately sign the NDA. So I will have to just kind of pass on that. But, you know...

Knowing what I know about, you know, how these kinds of stories come together and the standards at the Wall Street Journal as well as the New York Times, like, it's hard to get a story like this into print because you can't just go on one sort of anonymous source or two anonymous sources. So you actually have talked to multiple people who have firsthand accounts of witnessing Elon Musk doing these drugs. Is that correct? Oh, my gosh. It is. Yes. We had to have people who have witnessed his drug use. And, yeah.

I cannot even begin to go into the rigor of our process for getting a story like this into the newspaper. To give you an idea, like, it was much easier writing my last book. Than to publish this one story. Yes. I mean, for good reason, right? Like, we don't just, as you guys know, like, these newspapers don't just

willy-nilly publish something like this. I mean, I think Elon's followers would like to think that, but no, we spend a lot of time making sure everything's right. We have the sourcing. We have it all lined up for sure. Right. And so we actually asked months ago, Walter Isaacson, who's Elon Musk's biographer, about Elon's

alleged drug use. And what he responded to us was that, you know, he, uh, knew that Elon Musk had been, uh, taking ketamine for depression and ketamine, like some of these other drugs is commonly used for sort of mental health treatment. Um, and so he seemed to think that was, this was all above board, but what you reported was not that. So just walk us through some of the specifics around the drug use that you have reported on with Elon Musk. So it's a lot of partying, right? Um,

One thing that's interesting with Elon, but with a lot of these guys, they're using psychedelics at parties, but also for quote-unquote treatment, right? But they're treating themselves. So that's kind of the problem, right? So even at parties...

I think, and I'm not saying this specific to Elon necessarily, but I think in their heads they're saying, oh, if I take mushrooms, that's actually healthier than having like five shots or doing a line of cocaine. And so with Elon, he's used a bunch of different drugs, but this ketamine is one that a lot of people are using.

at the moment. Right. And I think that what you said really speaks to the cultural change around drug use in Silicon Valley. And of course, there has basically always been drugs in Silicon Valley. LSD is a huge part of the story of Steve Jobs and Apple.

And yet at the same time, you know, like Kevin and I are around the same age. We grew up in D.A.R.E. America, you know, DARE to resist drugs, just say no, you know, sort of all of that. And it sort of seemed like the only accepted drug to do was like alcohol. Right.

But you fast forward to today, you can order ketamine off Instagram in a sort of mental health context. You can walk down Castro Street and buy mushrooms from a, quote, church, right? So the vibe here is just very different than I think. And if you have not spent time in San Francisco recently, it might shock you just how common some of this stuff is.

Absolutely. You know, I've obviously spent a lot of time thinking about this. And it really goes back to that, I think, whole Silicon Valley mentality where it's sort of like, I can disrupt, you know, myself. I can take charge of my own health care. And so I think in their heads, they're thinking ketamine can be used legitimately for mental health treatments. And some of these other drugs can be used in a good way, too. But I'm going to do it myself.

Like, never mind, like, that doctor that's administering it, right? So that's where they're at, I think. Right. So I'll confess that when I first saw some of the headlines that you and other general reporters were putting out about sort of drug use in Silicon Valley and about Elon Musk, actually, my first thought was sort of like,

why do I care about this? Like, you know, these are adults. They're making decisions about their own, you know, substance use. Some of these drugs, as you mentioned, like do have sort of demonstrated effects for mental health and are, you know, maybe legalized for use in the coming years. And,

And we live in Silicon Valley where drugs have been around forever. So why is this such a problem for Elon Musk in particular? A hundred percent. And you can imagine we had like many conversations about this too, right? The reason it's very important for Elon in particular isn't just because he's the world's richest person or the world's most powerful person or because he runs Twitter or whatever. X.

sorry. It's because, in particular, he's running six companies, one of them the publicly traded Tesla, where he's supposed to be reporting to investors, but especially SpaceX, which has billions of dollars in government contracts. And those government contracts aren't like, yeah, if you do a little cocaine on the weekend, it's all good. Those are like, you cannot do illegal substances anywhere.

Like, we're not talking about lines at your desk. It's like you cannot go to Burning Man and do ecstasy or whatever you're doing there, right? They're extremely strict. And you know, as you guys I'm sure well know, when all he did was smoke a little marijuana five years ago on Joe Rogan, taxpayers footed the bill for a $5 million NASA review of his drug use.

And that was just like, I think, one puff. How did it cost the taxpayers $5 million to just watch one episode of the Joe Rogan show? Yeah. So they had to do a whole drug review of SpaceX employees. SpaceX employees were subjected to random drug tests for some period of time. There's not a lot we know about what went into that review, but...

Elon talked about this after in some podcasts and about how he had not apparently realized the effect this would have on SpaceX. So they had to do this whole review and taxpayers basically footed the bill. Wow. Congrats, taxpayers. So I think that's an important point about the difference between sort of Elon Musk doing this and any sort of other, you know, private citizen who does not have government contracts or a security clearance. Yeah.

But you also reported that his drug use has caused concern among the board members of his company. So tell us about that. That's right. So that's the second important point. This is not the journal like judging Elon Musk. This is us saying, listen, it has gotten to the point where even leaders at his two largest companies, including some directors, the directors who aren't the ones doing the drugs along with him,

are also concerned about this, right? And so that's really the whole point of the story. Like they've had years of concern. They don't know how to handle it. When they're really concerned, they kind of go over to Kimball Musk, his brother, and are sort of like, hey, like, is he getting enough sleep? You know, they don't even say drug use because that can end up in board meeting minutes, right? Yeah.

I thought this was so interesting, the way that even those who are placed in positions to have some measure of authority to serve as a check on him, they are terrified of just saying what is plain to everyone in his orbit, which is just that he is on drugs a lot. Absolutely. I mean—

Not to excuse them, but you can see this really challenging position they're in. Because first of all, we need to say Tesla and SpaceX are doing great. Tesla especially is performing super well. So on the one hand, it's like, what?

Who are we to complain about that? Like if even I think even Elon himself said something like this on Twitter after, like, if I'm using drugs, like I should keep doing it. I'm doing a great job. That's that's exactly the position they're in. Yeah. And I think there's sort of never been a problem with a drug user who's sort of in a good run and decides to just do more drugs. That's never ended badly for anyone who has ever done drugs. Yeah.

Right. I wanted to ask about one director in particular who you report stepped down. Linda Johnson Rice stepped down from Tesla, decided not to stand for reelection in 2019 in part because of the drug use. I wonder if you could share any more of that story. And also, I have to say, reading that, that does not seem like somebody who was worried that he was doing ketamine every once in a while at a party. No. I mean, again, that was a few. The ketamine...

is a lot more recent, I would say. It's been in recent months that people are much more worried about ketamine and that kind of tracks as well with like the ketamine popularity growing generally in Silicon Valley. Well, and we should say that also ketamine is legal. It's legal...

But it's like a gray area legal. And I also want to be clear that most people are doing this through dealers or, you know, randomly through Instagram. Yeah, an online pill mill type of thing. Yeah. But back to your question, I mean, there's not a ton more I can share about what's in the story. But I would say for a Tesla director to step down before their three-year term, right, two years, that's...

That's really saying something. And this is a woman who's very well respected, right, in the industry as being on many boards and in corporations, etc. Yeah, this doesn't sound like somebody who just heard that Elon had done mushrooms a couple times at a party and said, I'm out of here. Yes. And one of the things that is often said about Elon Musk's drug use by people who are sort of gossiping about it is that it's changing his behavior. Part of the reason...

and the explanation for why he's been so erratic in the past few years and has made all of these controversial decisions about X and just sort of the personality that he's adopted, that this can also be traced to his drug use. And I wonder what you think of that and if there are any specific examples of behavior that you've reported that has been specifically linked to drug use. So I have a lot of, in my head, you know, and also from just knowing the...

drug use situation now, instances where I've seen him where I think, you know, maybe that doesn't matter, though. Like in the story, one point we really try to bring up is this exact thing that you mentioned. He's acting erratically. He's acting strangely. Is that just Elon, the genius, the guy who said he is autistic? Or is he actually on something? And so,

This is one reason we brought up this example from 2017 where he's speaking at SpaceX. And hilariously, SpaceX has since released that video. And I would encourage anyone to go look at it because in our reporting, the executives were all worried after that he was on drugs. Now, we don't know if he was, and we say that in the story. We do not know, right?

But they're like, is that drugs or is that his erratic behavior? And this is something that everyone around him has struggled with for years. Yeah, this is often a question I ask after Casey says something stupid on the podcast. Like, is this just him?

Or is it the drugs? What exactly is in this tea? Now, Elon Musk and his camp have responded to this story. His lawyer, Alex Spiro, told you that parts of this story were false, although he didn't specify what exactly was false. He also said that

Elon Musk is, quote, regularly and randomly drug tested at SpaceX and has never failed a test. So I'm curious what you make of that statement and what you know about these drug tests. Like, what are they testing for? How often does he have to take them? And if it's true that he's never failed a drug test, how do you square that with what's in your story?

So I would first of all say, as you guys probably know as journalists, that's not necessarily a denial. That's what we call a non-denial denial. Okay. And I think Matt Levine even pointed that out like in a hilarious way. But yeah.

A note about these drug tests. I wish I could tell you more about them. They are apparently extremely secretive. So we do not know how often he's tested, when, even what drugs are being tested for. Generally, I've learned that psychedelics aren't usually in a test. I want to be clear. I don't know if they're testing Elon for psychedelics.

That's the point. We kind of don't know. Then I think Elon came out after and said, I was tested for three years. So I don't know if that means he's not been tested the last couple years, three years since the Joe Rogan incident in 2018. So there's just a lot we don't know about these drug tests.

I'm curious. So reporters have been trying to nail down this story about Elon Musk and his drug use for years. You were actually able to get people on the record talking about it who have firsthand encounters with his drug use. Why do you think people are willing to open up now?

I'm so glad you asked that question. I have, through this whole thing, often thought about people's motivations because a lot of the times people talk to reporters because they're exposing something bad or they're unhappy with how something's going. But in this case, you're asking people to describe the drug use of someone, a lot of times someone they admire, and they want to be in that crowd that's getting into that NDA party and all of this. So

I would say that, you know, without going too much into it, a lot of the motivation here, well, some of the motivation at least, is from people who have concern, right? It's not just people who, you know, saw him one time at a party. I mean, definitely I've talked to some of those, but...

there's also just a general concern out there. Not people who are necessarily trying to get him in trouble. No, not at all. People who are trying to maybe get him help. That's right. And not even just with Elon, but in this reporting in general, I found that

People who are willing to talk about someone else's drug use, especially someone in a position of power, are doing it because they're worried. Yeah, right. And so just to take kind of the devil's advocate position here— And that's that drugs are good? No. Oh. But, you know, I've heard and I've seen since your reporting came out some people just saying like—

Well, the proof is in the pudding, right? His companies are doing great. Like he has the best rockets. He has the best-selling car in the world. His behavior is unimpeachable, a model of integrity and kindness. But you know what I'm saying? Like, if these drugs were really hurting him, wouldn't it be showing up in the performance of his companies? And if it's not showing up in the performance of his companies, why?

Why is it any of our business what he's doing in his free time? Well, let's take SpaceX out of this for a second because it's just a full violation of his SpaceX contract. So let's just maybe look at Tesla. I think it's a great question because Tesla is performing really well, right? And so I think for directors or other executives to reach that level of kind of concern about his behavior, that's what to look at there.

You know, they're not they're not bringing it up just because they think he's had a bad day or something like that. Also, like Tesla is in part kind of a meme stock. Like, yes, the car company itself seems to be performing well in the world. Yes. And part of that is just because there is a huge fandom around Elon Musk who thinks he's a cool dude and likes to see him do stuff. So the fact that Elon Musk is on drugs all the time, I could see how that would make the stock price of Tesla go up because it means that Tesla stockholders are going to say, cool, bro.

Yeah. I also wonder what you think of the Matt Levine point that he made in his newsletter this week, which is that Elon Musk is in some ways too big to fail a drug test, right? That was my favorite line. A great line. But also, like, you know, and he basically says, look, if you're NASA or you're in the Defense Department and you find out that Elon Musk has done drugs, maybe he did drugs in front of you, what are you going to do? Like, are you going to put your payload into orbit with someone else's inferior rockets

And I thought that was a really interesting point. Like, even if it is true that he's doing all these drugs and they're getting, you know, in the way of his performance and directors of his companies are growing concerned about it, like,

What are we supposed to do about it? Well, that is the thing. I mean, SpaceX is so intertwined with the U.S. government. I mean, they are the space program, right? So I mean, I don't have any inside knowledge, but who knows what they're going to do or if they can do anything. And even on his boards, like as we've reported, they've just kind of tiptoed around it. So he could be too big to fail a drug test. And we just...

see this all the time, right? I mean, like this is the troublesome thing about having somebody who is this rich and powerful. And it seems like there just is no check on his power. Think about how many times in the past he has done something. He has broken some law. He's violated some SEC regulation. And it just seems like everyone throws up their hands and says, well, what are you going to do? Like we don't have any legal. He's a genius. Yeah. Yeah. He's a genius. And also we have no legal protections that would actually check him. Yeah. Yeah.

I think we should just zoom out a little bit, because the use of drugs, and particularly of psychedelics, is sort of this hidden force in Silicon Valley. Many people in positions of authority in the tech industry specifically are fans of these drugs, for legitimate mental health issues and productivity, but also for partying. And there's a sense in which drugs are sort of a hidden mover

in the tech industry today. And I wonder what your thoughts are on that, having spent so much time reporting on this. Yeah, I have spent a lot of time on it. First of all, I want to say I actually totally agree with the research behind it. I've interviewed a lot of doctors and, like, legitimate medical professionals who are working to make ketamine, you know, ecstasy, psilocybin, all of those legal and helpful

for post-traumatic stress disorder, depression, all of this. So that is definitely happening and is legit. I do think that a lot more people are using psychedelics, you know, a lot of

tech executives who we probably know, than we know, and that it's way more common. Just no one still wants to talk about it, because it's illegal, you know? But a lot of these people are funding some of these organizations where they're trying to push for legality and research medical cures, in part, I think, because...

it could help them if done in the right way. And right now they're doing it illegally. I will say after your story came out, I want to put this to you in the interest of fairness. A friend of mine who works in the tech industry texted me and said, why is the Wall Street Journal, you know, talking about this like it's the end of the world? Why are we getting this story that's sort of talking about how illegal all of these drugs are? And like, this is just like what people do illegally.

in society, and they're only making a big deal out of this because it's Elon Musk. What do you say to that? I have heard that from about 10,000 of Elon's fans as well over the last few days. So I've definitely heard that. I mean, I just have to keep going back to the fact that he is...

pretty much the most powerful person in this country, and all his businesses are integrated with our infrastructure. He has billions of dollars in government contracts. And again, like, even if he's holding it together now,

I'm not saying anything's going to happen, but it's something we need to know about the health of one of our most powerful people in this country. And I would just say as a gossipy person who loves mess, thank you so much for reporting this story. And I hope you do so much more. And don't worry if other people think it's important or not, because I'm living for it, Kirsten. Okay, thank you, Casey. Well, yesterday I was accused of eating live babies by one of Elon's followers. Go on. Oh.

It was, I almost want to read you guys this. It was a new low. It was like Kirsten Grind eats live babies for breakfast. It sounds like that person might have been doing some recreational drugs before they sent that message. And are you denying on the record?

That you eat live babies? I am denying that on the record, yes. Just have to check in the interest of being scrupulous. Setting the record straight. Definitely. I mean, as you guys know, like, covering Elon Musk comes with, you know, hearing from his many thousands of fans. Yeah, yeah. Millions, probably. Yes. Yes. Well, Kirsten Grind, thank you so much for coming on. Thank you guys so much for having me. Thank you.

When we come back, we're going to talk about drugs again. Surprise! But this time we're talking about the other kind of drugs, the prescription ones that AI is helping researchers discover to treat serious illnesses.

Indeed believes that better work begins with better hiring. So working at the forefront of AI technology and machine learning, Indeed continues to innovate with its matching and hiring platform to help employers find the people with the skills they need faster. True to Indeed's mission to make hiring simpler, faster, and more human, these efforts allow hiring managers to spend less time searching and more time doing what they do best, making real human connections with great new potential hires. Learn more at indeed.com slash hire.

So Casey, as we were sort of planning out some of our goals for the podcast this year, one of the topics that I really wanted to spend more time talking about is AI and

your Coke is like perched at a very precarious angle. That's amazing that that didn't spill. I know. It was like your Coke can was literally like leaned against your laptop at a 45 degree angle in a way that suggested that you were trying to play some kind of daredevil game whereby it was going to spill on yourself. That was like the old story footprints. Like Jesus was carrying me right then. Like I didn't know it, but he was carrying me and that's why it didn't spill. Thank you, Jesus. Okay.

So, Casey, one of the stories that I have been sort of devoting more time to trying to follow recently is what's happening with AI in the field of medicine. Yes. Because this is a story that I think everyone who is optimistic about AI touts as kind of the highest and best use of this technology. If you want AI to go faster, this is one of the best reasons that you could want it to go faster is we could discover more drugs more quickly. Yeah.

Yeah, so this kind of thing is what a lot of people in tech and biotech are very excited about. They say AI is going to be radically transformative. It's going to help us discover new treatments for cancer and Alzheimer's disease and heart disease and all these deadly and debilitating illnesses. And basically, AI is going to sort of turbocharge this entire field of medicine.

And so I wanted to start covering this in more detail in 2024, because there's just a ton of money and attention and hype and real promise in the intersection of AI and medicine. That's right, Kevin. And not only is there promise, but we are just now starting to see the fruits of these labors. And this has gone beyond the realm of, oh, wouldn't it be cool if AI could discover a drug? We are starting to see the signs that, oh my gosh, this stuff actually works. Yeah, this is something

that I really didn't appreciate until I started looking into this. There's this big healthcare conference, the J.P. Morgan Healthcare Conference, which is sort of a big deal in that world, is happening in San Francisco this week. And I've just been reading some of the stuff coming out of that conference. And it is remarkable how much of the discussion in healthcare and medicine today is about AI, and particularly this use of AI to discover new drugs. So

So I've just had my kind of antennae up for interesting and novel stories related to AI and drugs of the medical variety. And one of these stories popped up last month. Researchers at MIT and Harvard published a paper in the scientific journal Nature.

They claim to have discovered an entire class of drugs using AI, and confirmed that these drugs were successful at combating a type of bacteria called MRSA, methicillin-resistant Staphylococcus aureus. Yeah. And when I hear the word MRSA, it's always in the context of why you never want to be hospitalized, because apparently this is a drug-resistant infection that spreads around hospitals and can be very difficult for our existing medicines to treat. And so it's the exact sort of thing that we could use some help from AI to solve. And,

As it turns out, AI is already helping researchers trying to figure out what kinds of chemicals could be helpful in combating MRSA. And this is an area where we already have some evidence that AI is accelerating discovery. So to talk about this,

discovery, we've invited one of the lead authors of this Nature study, Felix Wong, to join us. Felix is a postdoc in the lab of James J. Collins at MIT, where he worked on this research alongside a big team of scientists. He's also the co-founder of a drug discovery startup called Integrated Biosciences. And we're going to talk to him today about how AI helped make this discovery possible.

Felix Wong, welcome to Hard Fork. Thank you for having me. Hi, Felix. So we are interviewing you today because something very exciting happened just before the holiday break, which is that a research team that you are on announced that you had used AI to discover a new class of antibiotics that could be effective against MRSA.

And I also read in the coverage of this research that there hasn't really been a new class of antibiotics discovered in 60 years. So why is that? Why is it hard to discover new antibiotics using conventional methods? Yeah, so there is a bit of hype to that statement. So there are...

have been new antibiotics, as well as a few new classes of antibiotics, discovered in the past 60 years, but certainly not a lot. And in fact, most of the clinically used antibiotics that we use today were discovered in the 1960s. And we kind of discovered those antibiotics just by looking at soil bacteria. Turns out that the bacteria growing in soil wage warfare on each other, and you can just kind of take their weapons and use them as antibiotics.

Once this pipeline really dried up, there's just been a dearth of new drug candidates coming out, again, because we've already exhausted kind of this natural source of antibiotics. We were really good at drugs in the 60s, but after that, it really seems like America lost its way.

So help me understand here, because I hear a lot about, you know, the use of AI to discover new drugs. And I want to talk about your specific discovery process, but I also just want to, like, understand at a very broad level, what does it mean to say that AI can help us discover new drugs? Right, because it's not just going into ChatGPT and saying, hey, got an idea for a new drug? Right, yeah.

Yeah, so of course one can do that. Go to some LLM and ask for an idea for a new drug. The question is, is it accurate? And is it actually worth following up on whatever the LLM says? In the case of drug discovery, things are a bit more niche than LLMs. So it's not like we're training a general purpose model in order to just

write us poetry or write us emails or whatever. It's really about kind of training very specialized models in order to make very specific predictions as to whether or not a new chemical might have antibacterial activity. And so tell us about the nature of that predictive step. How is it

predicting? Yeah. So as drug discoverers, what we do is find needles in large haystacks. And at least in our work, which is quite typical of these machine learning drug discovery approaches, the first step is we need to get training data. And the best way to do this is empirically. So in our case, for instance, we screened 39,000 compounds. So one by one in a test tube, we looked at things including: does the compound affect MRSA? Is the compound

toxic to human cells, which you don't want, because in that case, bleach might also be an effective antibiotic, right? You had 39,000 different test tubes, each with a little thing in it? That's basically correct. So the only kind of qualification there is that everything is stored, for kind of compactness, in plates. You could probably fit it in just a stack of plates here in a corner of this room. Wow, okay. So when we do the hard fork novel pathogen creation process, that will be a very compact...

storage facility. Thanksgiving episode this year is when we create a novel bioweapon. Okay. So I think I can follow the story now here, because you conduct these 39,000-plus tests. And I'm going to guess that, uh, some of these compounds that you test seem more promising than others. And so you're able to feed this into your system, and then it can just start to make predictions by saying, well, this one was more promising than that one. And so here are a bunch of compounds that look like this one that was more promising. And so let's, uh,

look into this a little bit more. That's true, with two caveats. So step two is kind of the model training, and that's where we dump in all of the data to kind of these graph neural networks, which are a type of deep learning model. So the main thing about deep learning models, and one of the key innovations of our study, is really that up until now, they've been known as black boxes. We don't know how the heck

they're coming to their predictions. Right. It also means that if it's inside of a plane that falls out of the sky, it will survive. Is that right? Just ignore it. Please just continue with the science. The concept is similar in the sense that, like, we wanted to kind of open up and make sense of what the model is doing. We don't necessarily have to reverse engineer the model, but can we get to a point where at least we can be like, ah, this is what the model is looking for.

Can we identify patterns, say, of chemical substructures in small molecules? And then can we use this to guide drug discovery? So one of the kind of key things about this approach is that we developed this additional kind of module, if you will, on top of the AI model. And what that module does is it employs a type of search called Monte Carlo tree search.

That's a word salad, but the main idea for that is that we use the same algorithm as AlphaGo. AlphaGo, the DeepMind algorithm that was able to beat the best human Go players. Go, the board game. Yeah. And what was the moment where you're fiddling around with your 39,000 plates and you say, wait a minute, how did they beat that board game again? Yeah. Yeah.

Exactly. So the moment here for us was, you know, when we applied this Monte Carlo tree search, this AlphaGo kind of algorithm to kind of identifying new chemical substructures that are predicted to underlie new classes of antibiotics. We can now actually confidently say which parts of a chemical substructure account for its predicted antibiotic activity.
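The search Felix describes — using AlphaGo's Monte Carlo tree search to find which part of a structure accounts for the model's prediction — can be sketched in miniature. Everything below is a toy: a "molecule" is just a set of atom indices, and `predicted_activity` is a made-up stand-in for the trained graph neural network (the hidden rule is that activity requires atoms {2, 5, 7}). The real system searches chemical graphs with a learned model; this only illustrates the shape of the algorithm.

```python
import math
import random

# Toy stand-in for the trained model: activity requires a hidden
# "pharmacophore" of atoms. A real model would score a chemical graph.
PHARMACOPHORE = frozenset({2, 5, 7})

def predicted_activity(atoms: frozenset) -> float:
    return 1.0 if PHARMACOPHORE <= atoms else 0.0

def reward(atoms, full_size):
    # Prefer small substructures that keep the predicted activity:
    # this is the trade-off the substructure search optimizes.
    return predicted_activity(atoms) * (1 - len(atoms) / (full_size + 1))

class Node:
    def __init__(self, atoms, parent=None):
        self.atoms, self.parent = atoms, parent
        self.children, self.visits, self.value = {}, 0, 0.0

    def ucb_child(self, c=1.4):
        # Standard UCB1 selection, as in game-tree search.
        return max(self.children.values(),
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(full_atoms, iters=2000, seed=0):
    rng = random.Random(seed)
    root = Node(frozenset(full_atoms))
    for _ in range(iters):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(node.atoms):
            node = node.ucb_child()
        # Expansion: each "move" deletes one atom from the substructure.
        untried = [a for a in node.atoms if a not in node.children]
        if untried:
            a = rng.choice(untried)
            child = Node(node.atoms - {a}, node)
            node.children[a] = child
            node = child
        # Rollout: keep deleting atoms at random, track the best reward.
        atoms = set(node.atoms)
        best = reward(frozenset(atoms), len(root.atoms))
        while atoms:
            atoms.discard(rng.choice(sorted(atoms)))
            best = max(best, reward(frozenset(atoms), len(root.atoms)))
        # Backpropagation.
        while node:
            node.visits += 1
            node.value += best
            node = node.parent
    # Read out the smallest still-active substructure found in the tree.
    best_atoms, stack = frozenset(full_atoms), [root]
    while stack:
        n = stack.pop()
        if predicted_activity(n.atoms) and len(n.atoms) < len(best_atoms):
            best_atoms = n.atoms
        stack.extend(n.children.values())
    return best_atoms

if __name__ == "__main__":
    explanation = mcts(range(12))
    print(sorted(explanation))  # ideally this shrinks toward the pharmacophore
```

The payoff is the same one Felix describes: the search returns not a single compound but a small substructure that explains the prediction, and any compound containing that substructure becomes a candidate member of the class.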

I see. So after you get the suggestion for, hey, this is a promising compound, you have a process that lets you say, okay, why was this thing promising? Exactly. And this is quite different from how we've been using AI in the past, where AI has really just been, at least in many drug discovery instances: train a model, apply it to predict some new stuff, and then validate the new stuff. Great, call it a day, go home, or maybe go to the patent office, whatever it might be.

In this case, because we have this explainable approach to AI, we can now identify not just single compounds, but entire classes of compounds. And that's what's really salient. So instead of finding a needle in a haystack, which was the old approach, you're essentially finding little piles of needles in the haystack. Yeah, we're finding sewing kits.

But are you saying that the same technology that helped AlphaGo discover new moves in a board game just sort of mapped neatly to discovering new chemical concepts? That's correct. There's something magical about this, in that, in a sense, the underlying question is the same. In the case of AlphaGo, it was kind of looking at the search space of all possible moves and then predicting or anticipating the opponent's moves.

In our case, for chemical structures, it was looking at the combinatorial search space of which subset of a chemical structure actually accounted for its predicted activity by the model. That's crazy. That's wild. Yeah. I mean, and that's like a big reason that I think people are so optimistic about AI for drug discovery is that

It turns out that like some of these other problems that people have been using AI to address, like playing a board game or, you know, predicting the next word in a sentence turn out to also be very valuable for other kinds of basic scientific research. Yeah. And is that a kind of prediction that a researcher, like a

human could do, but it would just take them forever? Or is this just fundamentally, like, a new kind of ability? This is fundamentally different. Because we do not have any first-principles kind of approach to understanding whether or not a new compound might work, what a human might do would just be to brute-force screen them all. Maybe, you know, I invest a few hundred million into this project, buy all of these millions of compounds, and then just brute-force them all. But the main idea of kind of this machine learning approach is that

it can enable us to now start to generalize beyond our training data set and look for maybe often subtle patterns in the arrangements of atoms and bonds in a chemical structure, in a way that humans just can't do. You could show me a lot of pictures of the chemical structures of beta-lactams and quinolones and other known antibiotics, and I couldn't really point you to

this new class of antibiotics that we discovered and described. So as I mentioned, the output of the main prediction step here, in step three, isn't single hits now. It's entire chemical substructures that define hundreds, if not thousands, of different chemical compounds. Right. You're discovering, like, a new class of potential drugs, not just, like, one or two specific ones. Exactly. So I assume... so you get back this list. You

shove all this stuff into this neural network, you get back this list of, you know, a bunch of compounds that might be helpful against MRSA. I assume then you have to actually go figure out whether they actually are helpful against MRSA. Oh, yeah, exactly. So the first aha moment for us was to actually get this list in the first place. We had no guarantees that anything would actually even give us an output. So we were quite surprised and

elated, really, when we actually got something from the algorithm identifying new structural classes of putative antibiotics. In this case, putative because, as you mentioned, Kevin, we still have yet to validate them. So in the end, what we actually did was we bought around 280 compounds that had high predicted antibiotic activity, uh, several of which were also predicted to underlie a new class of antibiotics. Right. Now, is there a company that'll just, like, make any compound for you and sell it to you? Yeah, can you just go on Amazon and, like, buy some compounds? Yeah, in fact, uh,

Not Amazon, unfortunately, otherwise, you know. You could get the free delivery with Prime. Exactly. You could get free delivery with Prime. You can do, you know, garage experiments as well. But in our case, you know, there are actually commercially kind of available compounds from synthesis suppliers as well as chemical suppliers, many of which are well known in the field. Got it.

So then you have to test these things. How do you test these things? Yeah. So as I mentioned, we bought around 280 compounds that had high predicted antibiotic activity and low predicted toxicity to human cells. And also, they were quite structurally distinct from known antibiotics. And so that's one of the main takeaways of our work: we found two compounds that share the same predicted substructure that defines a new

structural class of antibiotics, and we found that these compounds work. But in the end, one of the main experiments to do is really, does it work for treating a mouse model in vivo? A mouse? A mouse model. So for instance, what we did in our work was we had two mouse models. One was where we just scraped off the skin of mice, and then we infected that skin with MRSA. And that was a topical

model in which you can just apply a cream on the wound. The other model was a systemic model, and this is where things start to get a bit more interesting because systemic infections underlie the most deadly bacterial infections, including those leading to sepsis and other things. So these two

compounds that you discovered using your neural network, they actually did cure or treat MRSA in these mice? That's correct. And that was our second aha moment. So we found that administration of one of the compounds of this structural class actually decreased MRSA by over 90% in both models. Got it. So the process, I'm just going to repeat this back one more time, just to make sure I understand it. You acquire the data. Are you going to do this at home later?

Yeah, I am. Yes, I need the address of that website that sells you the novel chemical compounds. So you get the data, you train the neural network on that data, you use this kind of like AlphaGo, Monte Carlo tree search technique to figure out what the heck...

is happening inside the neural network, why it's giving you back these predictions. And then you get these suggestions. It says these, you know, these 10 compounds or these, however many compounds might be effective against MRSA. And you go and you, you rub some cream onto some mice to see whether it actually works. Is that more or less what happens? Yeah. I would also add that we, uh, in addition to rubbing some cream on mice, we also inject the mouse with some compound for the systemic model. So, uh, yeah. How are the mice doing?

Well, the mice, unfortunately, are currently all dead. We had to sacrifice all of them in order to extract the bacteria. Okay, so...

So not a good day for the mice. But potentially they are going to be— Their sacrifice was not in vain. Exactly, because we are going to have maybe some drugs that actually do treat this in humans. And that leads me to my next question, which is I've been hearing a lot about AI drug discovery now for what feels like a couple of years. I know there are a bunch of companies and labs out there getting funding to use AI to discover new drugs for certain common illnesses.

I also know that there have been some companies that have raised a bunch of money, used AI to discover some drugs, and then went through clinical trials, and the drugs didn't work. Or they didn't work as well as the AI models predicted that they would. So is there kind of a step here that you all are taking? Like the AI model predicts that these compounds will work against MRSA,

But then when you go to test it in humans, it actually doesn't work as well as your model predicted it would. Is there a danger that there's sort of some missing middle step there? Yeah, for sure. And so how I like to think about this is that AI in general can help with one of two things. It can help with discovering new compounds for basic research and also preclinical development.

as we do in our work. And AI can also inform clinical trials and how you administer them. That I'm kind of less of an expert on, so I won't really comment too much about that. But at least for the former, using AI to discover new compounds, basically,

It kind of ends at that. We really use AI as a tool to discover new compounds that ultimately must be tested in still rather traditional ways. So as I mentioned, even for antibiotics, we had to run a battery of traditional microbiological assays, experiments to determine what the mechanism of action is. We had to... I mean, AI did not help us with dissecting the mouse or anything. So all of that is quite traditional. But for sure, I think things are still in early days,

and AI itself might best be, currently at least, utilized for searching large search spaces, as we kind of mentioned before. Right. It sounds like the main thing that AI brings to the process of drug discovery is just being able to kind of, like, shrink the haystacks: take millions and millions of potential chemical compounds and sort of give you a list of, like, the 20 or 30 most promising ones for treating a given disease.

Exactly. At least personally, that's how kind of I feel AI has created a lot of value. It's really for initial stages of drug discovery where you want to shrink the haystack in order to make things a bit more manageable. But once you find a needle, I mean, there is no guarantee that that needle is sharp, that you have a great needle. And so I think at least today, we still do not have great tools to inform that process.
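The "shrink the haystack" step can be illustrated with a toy virtual screen: score every candidate in a library by Tanimoto similarity of its fingerprint to known actives, and keep only the top few for lab testing. The fingerprints and compound names below are invented for illustration; a real pipeline would use chemical fingerprints (e.g. Morgan/ECFP bits via RDKit) or, as in the study, a learned model's activity score.

```python
# Toy virtual screen: rank a candidate library by Tanimoto similarity
# to known active compounds, keeping only the most promising few.
# Bit sets stand in for substructure fingerprints.

def tanimoto(a: set, b: set) -> float:
    """Jaccard similarity between two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def shrink_haystack(library, actives, keep=3):
    """Return the `keep` candidates most similar to any known active."""
    def best_similarity(fp):
        return max(tanimoto(fp, act) for act in actives)
    ranked = sorted(library.items(),
                    key=lambda kv: best_similarity(kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:keep]]

# Hypothetical data: bits stand in for substructure features.
actives = [{1, 4, 9, 12}, {1, 4, 7, 12}]        # known hits
library = {
    "cand_A": {1, 4, 9, 13},   # shares most bits with an active
    "cand_B": {2, 3, 5, 6},    # unrelated scaffold
    "cand_C": {1, 4, 12, 20},
    "cand_D": {8, 10, 11},
    "cand_E": {1, 7, 12, 4},   # identical bits to an active
}

shortlist = shrink_haystack(library, actives, keep=2)
print(shortlist)  # → ['cand_E', 'cand_A']
```

The point of the sketch is the division of labor Felix describes: the cheap computational pass orders the haystack, but nothing here guarantees the top-ranked "needles" are sharp — that still takes the wet lab.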

- Actually, before you got here, Casey did actually volunteer to be a human guinea pig for any AI-discovered drugs. - Yeah, did you bring one of those MRSA syringes with you?

Unfortunately. Maybe we could expedite this process and potentially sacrifice you in the name of science. You know, I mean, this is really interesting, Kevin, because I think it speaks to a question that we have had over the past year or so, which is, what is the ideal relationship between human beings and artificial intelligence, right? And what Felix is describing for us here is a system where people are able to use AI to develop greater understanding, essentially working, like, not hand-in-hand, that's, like, too anthropomorphic, but...

They are using this as a tool to further their own research. It's not quite a creative tool, but it is a tool that enables human beings to be more creative while deepening their scientific understanding. And this was a really exciting thing. Right. And to automate, like, a manual labor process that would take probably centuries to do by hand, as I hear you describe it. It's basically creating, like, a lab with

tens of thousands of scientists' worth of labor that you can use to sort of go through this huge list of compounds and screen them all very quickly. - Yeah, that's one way to think about it. And kind of this idea of scale is quite important, because at least in our paper, we looked at 12 million compounds in a candidate set. But in principle, drug-like chemical space, which is all possible small-molecule compounds, contains something like 10 to the 60 compounds. That's basically infinity for

most practical purposes. So, like, you need a couple of postdocs to get through all that. I have a last question, which is, like, how close are we to an AI that could actually automate the testing part of this? Like, you know, it seems sort of brutish and antiquated to have to, like, get a bunch of mice and, like, inject them with stuff, and then, like, maybe move up to monkeys or some other animal, and, like,

then do it in humans, and, like, have this whole long process. Like, is there no way that you could kind of use AI to, like, accurately simulate, like, how a mouse would react to a given compound? Or do we still need this sort of hands-on, you know, in vivo testing? Did I use in vivo correctly? That was beautiful. Wow. That is the most like a scientist you've ever sounded. I'm so happy with myself for remembering that fact from biology class, or...

He didn't remember it for a while. He just said it 10 minutes ago. That's true. That's true. I'm sorry. So is it actually possible that we could use AI in that phase of the testing, too? Yeah, that's a great question. So of course, there's a huge AI-for-science movement, of which this work is a part. I think parts of science are still way too complex for us to accurately model. And at least personally, I believe that

includes how we simulate a whole mouse in terms of all the organs, its physiology, etc. So I think we are still a ways off from that. But perhaps one of the things that we could also consider is using AI for robotics. And so I think that is quite an interesting field, because eventually, if you use AI to do science, you're going to have to interface with the physical world.

And of course, that's something that a lot of companies are doing nowadays. So you're saying it's possible that in a few years we could have an army of bacteria-resistant

robot mice? It's possible. Or I think maybe a way to look at this might be, you know, in the short term, maybe AI could, like, automate, like, mouse farms and, like, very high-throughput experiments with handling mice, especially if the robotics are right. But that would kind of look quite dystopian and not quite like, you know, the AI for science that we have in mind. Yeah, well, I just got a new idea for a screenplay. So this is the Ratatouille sequel we never knew we wanted. Oh,

All right. Felix Wong, thank you so much for coming on. Great to talk to you. Thank you both. Thank you.


Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Mary Mathis. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Diane Wong, Pat McCusker, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. If you haven't already followed us on YouTube, check us out: youtube.com slash hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork at nytimes.com. I feel like we're wrapping so early. Let's do the show again just for safety. Let's go get some sandwiches. I feel like now that we've done it once, we could, like, really nail it on the second go-through. Yeah? Yeah. Okay. All right. Let's do it again. Okay.

This podcast is supported by Meta. At Meta, we've already connected families, friends, and more over half the world.

To connect the rest, we need new inputs to make new things happen. And the new input we need is you. Visit metacareers.com slash NYT, M-E-T-A-C-A-R-E-E-R-S dot com slash NYT to help build the future of connection. From immersive VR technologies to AI initiatives fueling a collaborative future, what we innovate today will shape tomorrow. Different builds different.