This podcast is supported by Goldman Sachs.
Let me ask you this. Kevin, let me ask you this. I walked into the office today and I saw on your desk a keyboard and it said alpha. There's an alpha keyboard on your desk. What is this? Well, I'm such an alpha male that they have to give me a special keyboard because I mash the keys so hard that a normal keyboard would just shatter. That doesn't sound right to me. No, it's not. I think you're bamboozling me. What is an alpha keyboard?
This is a product that I am testing. It's called the FreeWrite Alpha. And this is essentially an internet-connected typewriter. Because, you know, as you may experience, being online can be quite distracting when you're trying to do research and writing. And so there's this company, FreeWrite, that makes these little sort of stripped-down computers that
that basically only can be used to write things, and then they can sort of store that in the cloud.
And so that can become your ordinary writing device. So you feel like to get ahead, you need to sort of go from a laptop to something that is like much worse than a laptop. Yes. What if a laptop could only write? That is sort of the premise. Does it have a display? Yes. It has a little like, like sort of gray. It's big. It just gives you sort of a couple lines at a time. You can, so you're writing your column. You can only see one sentence at any given time. Well,
yeah. Kevin, this is a nightmare. Yeah, I haven't used it much. But they did send it to me and so I feel obligated to try it. I gotta be honest, this sounds like a keyboard for betas. If you can't handle seeing your entire column at once, beta behavior. I need the Sigma keyboard. Yeah.
I'm Kevin Roos, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, Telegram CEO is arrested in France. We'll tell you what the charges against him mean for the internet. Then, New York Governor Kathy Hochul joins us to talk about why she wants to ban phones in public schools. And finally, the secret code that Kevin uses to trick chatbots into saying nice things about him. It's not a trick, it's persuasion. It won't work on me, by the way.
Well, Casey, there was one tech story that was everywhere this week, and that is the arrest of the founder and CEO of Telegram, Pavel Durov, this past weekend. He was pulled off of his private plane as it touched down outside of Paris. And it's been a big story. It really has. And, you know, my first question was, did he think the Olympics were still going on? Because those have been wrapped up for a couple of weeks now. So he really might have made a mistake there. He just wanted to see the pommel horse guy. And can you blame him?
So this has been such a crazy story, and I think it touches on a lot of themes that we really care about, including content moderation and government interference with tech platforms and free speech. There's just a lot here. It's a rich text, as they say. For sure.
So on Wednesday, Pavel Durov, the CEO of Telegram, was charged in France with a wide range of crimes related to illicit activity on Telegram. He faces six charges in total, including complicity in managing an online platform to enable illegal transactions and complicity with crimes like
the distribution of child sexual abuse material, or CSAM, drug trafficking, fraud, and a refusal to cooperate with law enforcement. He's been barred from leaving the country in order to pay a bail of 5 million euros, and he also has to check in at a police station twice a week. So let's just start
by explaining to our listeners what Telegram is. What place does it occupy in the landscape of global tech platforms? Telegram, you know, you're probably old enough that your birth was announced by a real Telegram. Yeah, Western Union. Yeah, I'm gay, so mine was announced by a singing Telegram. But anyways, Telegram is a message.
Thank you.
So maybe you're a public figure like Emmanuel Macron, the president of France. You can create a channel, you can say whatever you want. Other people can read it. They cannot write back to you in public if you set it up that way, but it can be a way of getting your message out just as posting on X or threads might let you do the same thing.
And it's gotten hugely popular. It now has more than 900 million users. And it's particularly popular in certain regions, often where there's been authoritarian government. So it's been very popular in Iran, for example. And it is hugely popular in both Ukraine and Russia, where military forces on both sides use it to command their armies in the war. But along the way, we should also say it has developed a crime problem.
Yes, we should talk about that because that is a major part of what is involved in these charges against Pavel Durov. Telegram, I would say, takes a much lighter approach to content moderation than other messaging apps and social media platforms. They are sort of
proudly resistant to government requests to take down information or to see the messages that users send. Telegram is often called an encrypted messaging app. That's not quite accurate, at least not in the way that other apps would define encrypted, and we can get into that later. As a result, it is seen as being private and
Because of that, lots of people use it for illicit transactions and messaging. Lots of criminals use it to exchange tips on where to buy drugs or guns. It's also been used by a lot of people in the crypto community because they're worried about government surveillance of their communications.
So it just kind of has this seedy reputation, at least in the U.S. Yeah, it's earned it. And, you know, if you're wondering, what do we mean by a light approach to content moderation? Pavel Durov has said that he believes it's just sort of not interfering with letting people communicate online. And if you go to Telegram's FAQ page, they say that we're just essentially not going to look at the contents of your private messages in particular.
So, you know, if you want to get on there and you want to organize a drug deal or a gun deal, Telegram is basically explicitly saying, you know, we're just going to look the other way on this. And that is very unusual among the large messaging apps. Yes. And it has made it a tool for terrorist organizations, for drug traffickers, for extremist groups of all kinds. So I came across Telegram a lot when I was reporting on neo-Nazis and other white supremacist groups who would sometimes get kicked off other platforms and would sort of
reconstitute themselves on Telegram because of these huge group features that allow you to broadcast messages to hundreds of thousands of people at the same time. Yeah, it's actually really interesting. Like one of the key ways that extremism is now studied is just researchers will follow the Telegram channels involved because...
they can see terrorists and other groups just operating out in the open. And, you know, for better and for worse, this is now how a lot of us learn about what some of the darkest forces in the world are up to. Yes, but we should also talk about Pavel Durov because he's a very interesting character.
Yeah, this is a person who was born in the Soviet Union in 1984 and first became famous by creating a very popular Facebook clone called V-Contact. At least that's how I pronounce it. How do you pronounce it? I always thought it was V-Contact, but it's got some extra letters in there. It's got a K in there. I'm not sure. If you're an English speaker, it looks like V-Contact, but I'm told that Russians are pronouncing it V-Contacta.
And I said, excuse me? And they said, the contactor. So, yes, my colleagues Paul Moser and Adam Sitoriano had a great profile of Pavel Durov that ran this week. And I learned a lot from it, including sort of the backstory of how he came to found Telegram. So basically, Pavel Durov, he starts this website, VKontakte, or whatever it's called in Russian. That becomes very popular, but ultimately it escalates.
has sort of a standoff with the Russian government because the Kremlin basically wants to get a bunch of information about the users of this service. Pavel Durov sort of resists that, and as a result, the Russian government essentially seizes the website from him, kind of confiscates this Facebook clone that he had built. So he basically loses control of his creation.
He then kind of gets an idea for what becomes Telegram, a messaging app that would essentially be resistant to government interference and surveillance and would allow people to have secure conversations on their phones. Yeah. And as he sort of starts building it, I think because of this experience that he has in Russia, he never really wants to be beholden to another government ever again. Right.
And so he starts traveling the world and he just like picks up citizenships the way that other people like collect fine art, you know? So he's like now a citizen of St. Kitts and Nevis. He gets one from the United Arab Emirates and he starts running telegram out of the UAE.
And in one of the strangest details of the story, given what would ultimately happen, is that in 2021, he manages to get French citizenship even though he's never lived there. And apparently there is some rare procedure that dates to before the French Revolution where someone can be granted French citizenship if they are a French-speaking foreigner who contributes through his or her outstanding work to the influence of France and the prosperity of its international activities.
economic relations. And so you might be asking, what did Pavel Durov do to like advance the interests of France? And no one knows. And we don't know if he even speaks French. Yeah. Yeah. It's a, I'm sort of imagining the scene in the Bourne identity where he like opens an envelope and has all these like different passports from different countries. Like,
And Pavel Durov has sort of become this international man of mystery. You know, he travels around the world. He's also like just a very sort of quirky guy. He's gotten very into fitness. He's become sort of a wellness influencer. He like posts online about all the cold plunges that he does. He's incredibly hot.
We should say it. If you're wondering, is this man hot? Yeah. Like you would go on his Instagram. Like he looks like a model. He does. He posts all these shirtless photos. He's very ripped. He also claimed, according to this profile of him, that he had become a multinational sperm donor who had fathered more than 100 biological children in 12 countries over the past 15 years.
Just a very interesting and sort of bizarre guy. Yeah, and you have to wonder, you know, is the hope that one of these children will someday break him out of prison because... Good strategy, honestly. It could be a good strategy. Yes. So he's also become quite a popular figure among kind of
anti-government dissidents, free speech absolutist people who sort of worry about government surveillance. He's got a very sort of anti-authority streak in him. There's some details in his profile, including that in 2013, he apparently hit a Russian policeman in his car while driving on the sidewalk to get around a traffic jam and
And which, by the way, that's actually very relatable. If you've ever been stuck in traffic, we've all thought about just sort of pulling up onto the curb and driving around traffic. I mean, I've done it in Grand Theft Auto, but I've never actually contemplated doing it in real life. But Pavlyudorov did apparently hit a Russian policeman and later wrote on social media, quote, when you run over a policeman, it is important to drive back and forth. So all the pulp comes out.
It would be amazing, you know, if we were interviewing him, he would say, you know, by the way, where did you get the idea for Telegram? He said, well, you know, after I ran over that cop, I thought, I need a really secure way to communicate. Right. So we should also say, like, he has in recent months kind of become a hero to some people on the American political right.
He went on Tucker Carlson's show and was interviewed by him. He's also sort of been supported by people like Elon Musk, who see him as a sort of critical ally in this sort of free speech battle against government tyranny. So he's developed kind of a mythical status among some partisans on the right in the U.S. Yeah. Yeah.
So, Casey, what do you make of the charges against Durov? So I'm still working to understand exactly what he is being charged with. I've only seen a short summary of the charges. But the lead one that we have here is that he has been charged with complicity in managing an online platform to enable illegal transactions and a refusal to cooperate with law enforcement.
And I think this cuts a couple of ways because on one hand, you're not allowed to run a website that just does absolutely anything, right? Ross Ulbricht, who ran the Silk Road, found that out the hard way, right? He went to prison because the Silk Road was mostly being used to sell drugs and other illegal things.
On the other hand, the idea that any CEO can be accused of being complicit in whatever happens on their platform could potentially be really bad, I think, right? Because on any platform, bad things are going to happen. Crimes are going to be committed.
And to say that because a crime happened on a platform, the CEO is personally liable, that's a dramatic escalation in the regulation of tech as we've seen it so far. Yeah, I mean, usually when, you know, a company is discovered to have been, you know, allowing some illegal activity to take place on its platform, the company gets punished, right? They get fined by regulators or they, you know, there's some politicians, you know, have a hearing and they yell at the CEO. But it's...
historically been pretty rare for individuals to face prosecution for, uh, facilitating or being complicit in illegal activity happening on a platform. That's right. And even on telegram, which I do think has been sort of, you know, has notably flouted the law. I also believe the vast majority of all, uh,
discussions on Telegram are totally benign. You know, it's just like average people talking about their personal interests and messaging their friends. And so I think there's this really difficult question of when does a platform tip over into, well, too many crimes have happened on this platform.
And that's why I hope that this prosecution really focuses less on the fact that there were illegal things happening on Telegram and more on the fact that they just refused to cooperate with law enforcement at all, that they totally ignored law enforcement. They would not return the photographs. They would not return the emails, right? Even in cases where maybe they could have intervened to help stop a child from being abused. So we'll see how this prosecution develops, but
I really hope the focus is there and not the fact that just a bunch of crimes happen on a messaging app. And I think it has to be because, you know, crimes happen, as you said, on every internet platform of a certain size, you're going to have crimes, right? There are going to be people who are using that to spread illegal material or sell drugs or coordinate terrorist activities or what have you. But I think it's different when you take a process
proudly sort of hands-off stance when you refuse to cooperate, when you refuse to try to step in, to even do the bare minimum to stop that stuff from happening on your platform. Yeah, I think when Pavel Durov ran over a police officer, he really got the attention of law enforcement officers around the world, and they said, we got to do something about this guy. Yeah, pro tip, if you're going to run a lawless social platform and messaging app, don't brag about running over police officers. Just hit the brakes, kid. So...
While we wait for more details about this case and what actually happened, I think we should just talk about Telegram and what sort of place it occupies in the social media landscape and what the arrest of Pavel Durov might mean for other platforms. So first, I want to ask about a topic that has come up a lot this week, which is how Telegram is actually encrypted.
because there is encryption on Telegram, but it's sort of different than what you might think of on an app like Signal or WhatsApp or iMessage. So Casey, what do we know about how Telegram is and isn't encrypted? Yeah, so there are different kinds of encryption. The most common form of encryption is to encrypt messages as they travel from one device to another over the cloud, prevents people from snooping on that traffic and seeing those messages in real time.
Telegram, for most of its messages, uses a kind of encryption that works that way. So there is an encryption there, but these messages do exist on the Telegram servers. The only exception is if you start what is called a secret chat on Telegram, in which case you're more protected for reasons I'll explain in a second. But because these messages are not end-to-end encrypted, telegrams
then a law enforcement could go to Telegram and they could say, we want to get hold of these messages. And Telegram has resisted all of those requests so far, but there's an increasing amount of pressure on them to say, oh, no, no, you have to show, you know, you have to give us information about who are these users, you know, we think they're implicated in a crime. And because these messages are not end-to-end encrypted, Telegram is vulnerable. Now, Kevin, you might be wondering, what is end-to-end encryption? What is end-to-end encryption? End-to-end encryption means that the,
message can only be read by the person sending it and the person receiving it. The message is not stored in plain text on a server. Even if law enforcement had access to the Telegram server, they would not be able to do anything with this message because it is only the device that sent it and the device that received it who can read it. Who uses end-to-end encryption, you might be asking? WhatsApp,
uses it, Signal uses it. If you use an Apple device, iMessage is end-to-end encrypted. But if you back your messages up to iCloud, by default, they're not end-to-end encrypted on Apple server so law enforcement can go and get them. Apple recently released a feature that will end-to-end encrypt your iCloud backup.
But I'm just naming all of these things because if this story is getting you interested, you know what? Whatever Pavel Dura may or may not have done, I'm sort of curious about the encryption that I'm using. You do have alternatives that most security researchers would tell you are far more secure than Telegram has been. Well, and it goes to the heart of what is happening in this arrest, which is that, you know, Telegram is being sort of punished for,
for not turning over details about its users' chats to governments who want to, say, investigate child sexual abuse material. So my question then is, why doesn't Telegram just encrypt everything end-to-end? If that would sort of prevent Telegram, the company, from being able to fulfill these government requests for information about their users' chats,
Why don't they just encrypt everything so that they don't even have access to it themselves? It's a great question, and we don't fully know because Telegram doesn't give a lot of interviews to the press. They don't talk a lot about their philosophy around this stuff, so we're sort of left to guess. I can tell you it is enormously technically complicated and expensive to implement end-to-end encryption across a user base of 900 million people.
Like WhatsApp is end-to-end encrypted now, but that was the result of a multi-year, very expensive process that Meta undertook to get it there, right? Apple eventually rolled out these end-to-end encrypted iCloud backups.
That was a process that took many years. And by the way, while they were going through it, they were getting so much pushback from the law enforcement community saying, you're about to make our job so much harder. So Telegram did not go through that. I do suspect that sort of cost and technical complexity were the big two issues.
But, you know, at the risk of veering into speculation, Kevin, here, I will also say that there's just been some questions about whether Telegram has cut any side deals with any governments, right? This company is in a very strange position, in particular with regard to the war in Ukraine. And there's just sort of always been questions about, are they secretly sharing some data with the Kremlin, right? Are they secretly sharing some data with others? So I
I don't know what the answer to that question is, but I don't think it's irresponsible to say people are asking some questions about this. - Right, and I've heard those questions. I've also heard some details that make me think that cost is a big factor here because one thing that really sets Telegram apart from other platforms of its size
is that it's extremely lean, right? Pavel Durov has bragged before about how little he spends to sort of run the service. They don't have like a huge trust and safety team or a threat ops team, or they're not spending a lot of money trying to ferret out bad things happening on their platforms. That seems to be kind of a selling point for them. And it's one reason that Pavel Durov is a billionaire and that his company has been so successful. Yeah.
And I'm so glad you brought up the fact that he is a billionaire because it's entirely possible that he's taking all the savings on the money that he's not spending on encryption or trust and safety and passing those on to himself. And so Bloomberg does now report that he has an estimated wealth of $9 billion. So, Casey, you wrote this week about some of the sort of free speech concerns that the arrest of Pavel Durov raises or has raised for people who are worried about governments interfering with what happens on social media.
Do you think this is sort of a sign that governments, especially in Europe and especially the French government, are starting to hold platform leaders actually accountable for what happens on their services? I actually do think the concerns are justified here and for a couple reasons. One is internet freedom is declining generally. There is a global movement to ban.
ban end-to-end encryption and it is not just in authoritarian governments there is a movement in the united kingdom uh and basically everywhere that companies like meta have rolled out and encryption they've gotten a lot of pushback from lawmakers regulators and law enforcement saying you're making our jobs harder and this is a gift to the criminals
At the same time, I think the bulk of this fight is not really about encryption per se. It's about the other criminal activity that is happening out in the open on Telegram. Right. And that is something that platforms can and are regularly held accountable for. There are laws in this country and around the world that say –
if there is a legal activity happening on your platform and you know about it and you don't take steps to stop it, you actually can be held responsible. - Yeah, and this is a debate that is essentially as old as the internet, is to what degree should the people who run the platform be responsible for what their users do on the platform, right?
And I think the internet that we have today exists in large part because in the United States, we said, for the most part, we are not going to hold the platform owners responsible for what their users post. We have this law, Section 230, that establishes that. We've been arguing about it ever since. But it really is the reason that we can communicate online in the way that we do.
Over time, lawmakers around the world have really started to hate this because they see these services getting really big and very rich on the back of a lot of criminal activity, and they want it to stop.
And where Telegram is so interesting in that Kevin is like, Telegram is like the company that I would build in a lab if I wanted to give lawmakers a reason to end section 230 and the concept that it represents, which academics call intermediary liability. - Right, because it is sort of proudly lawless and kind of defies the orders and the requests of governments around the world.
And it is so clearly useful to people who want to do illegal things on their phones. Again, I think France probably has a lot of really good reasons to be upset with Telegram. And I think Telegram has been incredibly irresponsible in the way that it is related to law enforcement and to countries.
And at the same time, Kevin, I worry that this is an escalation that we are going to see again. And there is going to be a platform that did try to operate responsibly and did do things the normal way. And there's going to be some authoritarian leader out there that says, oh, like Mark Zuckerberg's flying through my country. Elon Musk is flying through my country. You know, the CEO of Netflix is flying through my country. Actually, why don't you throw them in prison? We want to ask them some questions. So I just...
Yeah, do you think that the leaders of other platforms are looking at what happened to Pavel Durov and saying, well, hey, I probably shouldn't go to Europe on vacation anytime soon?
But also, do you think it'll change anything about the way that other social media platforms or messaging platforms moderate their services? Well, so this is where, again, as somebody who advocates for free expression, I get a little bit nervous because the way that you avoid these kind of inquiries is you just...
prohibit way more speech, right? Anything critical of the government, anything involving sex, anything that, you know, describes or discusses violence, anything that, you know, describes a war that might be happening. And so my fear is we're going to end up in a world where more platforms feel like it is in their self-interest to just sort of shrink the realm of expression down so they can avoid having their CEOs thrown in prison. Yeah. I mean, I think if you zoom out, like, really far, like,
like, you know, beyond even the details of this telegram case. I think what we're seeing now is this kind of
standoff between tech platforms that are sort of multinational, and some of them have billions of users, and they have all this power to sort of, you know, control speech on the internet, and governments who are saying, wait a minute, we are still in charge here. You cannot be above the law. We are still going to determine how the citizens of our
countries can use your platforms. And so I think what makes this similar to, say, the fights that Apple and Google are having in the EU with regulators about how they run their app stores, the fights that platforms are having in the U.S.,
I think what all these things have in common is this kind of game of political chicken between the platforms who, for many purposes, are more powerful than any government on Earth and the actual elected governments of these countries who are saying, wait a minute, nobody...
you know, elected you, nobody voted you into power. You shouldn't have all this influence and control. Yeah, no, I think you're, I think you're exactly right. And you've actually led me to sort of fast forward into my mind to like, what is like the sci-fi scenario that I think we're like 10 or 15 years away from. And it is actually going to be the CEO of Telegram putting the president of France in prison for crimes against the Telegram user base, right? Which like,
It sounds ridiculous, but like these platforms all have their own populations. These CEOs really are like heads of states. Like all we're talking about is high stakes diplomacy. When we come back, the governor of New York, Kathy Hochul.
This episode is supported by OutShift, Cisco's incubation engine. While Cisco connects and protects the world's tech, OutShift explores and builds transformative emerging technology. Whether that tech is 18 months or five years into the future, OutShift knows the world doesn't just need more ideas. It needs more concrete solutions.
If you're looking to gain a competitive edge in how your organization will handle incoming generative and agentic AI needs, quantum security risks, and more, let OutShift help inspire you. Visit outshift.com to learn more. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more.
With Vanta, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center. Over 7,000 global companies use Vanta to manage risk and prove security in real time. Get $1,000 off Vanta when you go to vanta.com slash hardfork. That's vanta.com slash hardfork for $1,000 off.
Well, Casey, it's back to school time. Hey, it's Kevin. And I hope this year you finally get the education that you need. We'll see. So people are out there going back to school, shopping, buying, you know, their protractors and trapper keepers and whatnot.
And one of the biggest stories happening this year is that there are all these smartphone bans popping up around the country. Yeah, there has been so much discussion about whether we should do this, but this year it seems like some states are saying this is actually the time. Yeah, and we are still, as we said on the show last week, collecting stories and voice memos from listeners about their own experiences with phone bans in schools.
If you are out there and you have a story that you want to share with us, especially if you are a student, we definitely want to hear from you. You can email us at hardfork at nytimes.com. But this week, we have a very special guest to talk about smartphone bans in schools, Governor Kathy Hochul of New York. I'm in a New York state of mind, Kevin. So...
Governor Hochul has recently been going around the state doing a big listening tour about the idea of banning smartphones in public schools. And by the way, that's the opposite of what a podcaster would do, which is sort of more of a talking tour when the podcasters get out on the road. But she's sort of just listening. Yes. So this is something that she has not officially announced yet, but is expected to be announced fairly soon. We'll talk to her about that.
This is really a big domino in the wave of dominoes that has been falling with respect to phone bans in schools across the country. Last year, Florida passed a law to ban phones during class time in schools. A handful of other states have also moved to ban phones in schools and more are considering it.
Yeah, this is a state that I think is seen as relatively tech friendly. You know, there's a lot of tech that gets built right in New York City. And of course, New York is a more liberal leaning state than Florida. So it's interesting, Kevin, what a bipartisan issue this has turned out to be. Totally.
And a lot of this has been fueled by our former podcast guest, Jonathan Haidt, whose book, The Anxious Generation, has become a mega bestseller and is really sparking attempts by schools nationwide to get phones out of the classroom.
Yeah, and you know, as we've discussed on this show, opinions about this are mixed. You know, I think there is a lot of evidence that having access to technology can do a lot of good for kids as well. But right now, the pendulum is swinging, and a lot of forces are lighting up behind this idea that phones should get out of the classroom.
Yeah. And just a heads up, our conversation will include discussion of suicide. If you or someone you know is having thoughts of suicide, call or text 988 to reach the Suicide and Crisis Lifeline. So let's bring in Governor Hochul and talk to her about the idea of banning phones in schools. Governor Hochul, welcome to Hard Fork. Thank you.
Thank you. Thanks for having me. So how did you get interested in this idea of getting smartphones out of schools? This was a journey. This is not the position I started with. In fact, it was two years ago when I started convening groups of teenagers, particularly teenage girls, to talk about the effects of the pandemic on them. I went to a lot of schools. I did roundtables, and one in particular in the Bronx. I had a very diverse group of young women and just talked about
how they were feeling after the pandemic. You know, were they over it? You know, things were still adjusting. And what I heard from them was so powerful and
that they're nowhere near recovered from the pandemic. Even now, four years later, adults have moved on. But especially a place like New York, New York City, which was the epicenter, everybody knew somebody who did not survive. These are kids whose grandparents went into the hospital. They never had a chance to say goodbye, their own family members. So I would say New York, particularly the city, was hit really hard by this. It had a psychological effect on them. So I started exploring the
the psychology of what's happening to our teenagers today, why they're so depressed, what's going on. And so it led me to talking about social media, the impact that this was really a replacement for human contact during this time when kids were not in school. It was the substitute, and it took young people to a dark place many times with these algorithms. So I can go through the whole journey, but it really took me just in the last few months of saying,
what is the vehicle that's holding these young people captive during the school day when they're supposed to be still learning or communicating in a very human way with their classmates and learning just human interaction and basic skills, coping skills. And it was all came down to the device. It was the cell phone that was the vehicle that held them captive to these addictive algorithms throughout the school day. And so,
I'm a mom. I'm a grandma. And when a young woman tells me at one of our forums, you have to save us from ourselves, we can't put these phones down because we'll be ostracized. We'll miss out on something. Someone's saying horrible things about us. We need to know. That's what led me to the point where I am right now in talking about this. So it was a journey. Yeah. We previously had
Jonathan Haidt on our show, the author of The Anxious Generation, who's been sort of credited with sparking a lot of these conversations about phone use in schools. Did you read his book? Was it influential in shaping your thinking? Very much so. I thought he did a phenomenal job. And we referenced him a number of times when we were having our meetings about this issue.
And this started last fall when we talked about dealing with social media. We're the first state in the nation to really, you know, address social media algorithms as they're hitting young people. Now, adults, you can smoke, you can go to bars during the day. We don't regulate your behavior. Kids, we do.
Kids, we are responsible for protecting them. And so what he was saying, we referenced that. We talked about the fact that he had done an enormous amount of thought and study around this. And it wasn't just I needed an author to tell me this is the way to go. I was hearing it from moms and dads and teachers who were saying, the kids aren't even paying attention to us in school anymore. Right.
So recently you told New York Magazine that you want to go big on this issue of smartphones and schools. What does it mean to go big on this issue, and what do you have planned? Where we are now, just so you know,
The governor has certain powers, but certain limitations. I would need the state legislature to do something that is a statewide policy. Believe me, I've explored this and they're not back in session again until January. So I've been, again, I went to every part of the state. I did roundtables, talked about it. And after I left each one as recently as the North country, I'll be in the city next week doing it.
The teachers are the ones who are saying we've got to do something. And what they don't want is for this to be left up to the individual school districts because this is hard. This is hard for them to take on parents who may not understand and take on their school boards and superintendents may want to do it, but they don't want the controversy who's going to enforce it. So I have said all along, I'm in this seat to make the tough decisions. So let me be the heavy one.
I will have enough data to help explain and educate more people before we actually do this of what's behind this. And if we do not take these steps at this time, I'm convinced that we'll have a lost generation because these kids are crying for our help
It's not their fault. They're just trying to survive. Being a teenager is hard enough as it is. So go big means I'm looking at it as a statewide policy, not just leave it up to school districts. And I'm not just talking about the phones. I'm talking about the devices.
The ear pods, ear pods. I'm talking about smartwatches. I'm talking about a distraction free environment. You want to make sure that parents understand that this is not taking away your ability to check in with your kids throughout the day. And again, I was a helicopter parent. I don't know what you call this next level, but not the parents are in your kids' pockets now all day long. And we want them to grow into being independent adults and,
Our job is not to raise kids, but to raise adults. That's the ultimate goal here. And parents' own anxieties. Again, everybody's worst nightmare. My kids were younger when Columbine occurred, and it just sort of shook your sense of security when you let your most precious baby go off to school and something could happen as horrific as happened there and other mass school shootings. But here's what I learned when law enforcement come in and talk to parents. They will tell them,
If there is a crisis on campus, in the school setting, the last thing you want is for your child to be fumbling for their phone, texting mom and dad, talking to their friends,
They have to be laser focused on the adult in front of that classroom who will lead them to safety. And as parents start to hear that message, they say, okay, I get it now. It's tough for parents. But the first grade teacher told me that every first grader is coming to school with a smartwatch and mommy and daddy are checking in on them all day. Like, okay.
Okay, guys, come on. Let's cut the umbilical cord here. So you've been on this statewide listening tour going around talking with educators and parents and students about the use of phones and other devices in schools. And obviously you said that, you know, many teachers support a statewide, the idea of a statewide policy on this. Did you hear any pushback from either students or their parents? Like what is sort of the opposite side of this issue look like?
The opposite side is this is my freedom and you have no right to take it away. Are you sympathetic to that at all? I'm always sympathetic. I mean, I listen. That's my job. And to the young people, well, we're not letting you buy cigarettes and smoke in class. I'm not letting you buy alcohol and drink in class. These are adult activities to be that engaged, that distracted. When you're supposed to be getting an education, my job is to make sure that New York State has a competitive workforce.
That means we've educated our kids at the highest possible level. I'm doing my best to attract every tech business. Anybody who wants to come here, we're focusing on AI. No state is doing what we're doing. But I have to provide the workforce of educated young people. I can't have them graduate as zombies who just kind of coasted through school and never had to pay attention and they weren't held accountable. Now, this is hard for them. They are literally addictive.
So we have to wean them, kind of like when my kids were little. When you know school is starting in the fall, they can't stay up until 10 o'clock. The next night it's 9 o'clock. A couple days later it's 8 o'clock. You can wean them. So I would encourage parents to start the weaning process with your kids. And especially high school kids, it's tough. They're not going to pay attention to you anyhow. I get it. I've raised teenagers. But you've got to do your best.
Because our kids are floundering and kids have said, this is my right. And I'm saying, you have a lot of rights, but you also have a right to be a student and pay attention in class. Right. So you talk about not wanting to raise a generation of smartphone zombies, which I can understand. But you also said that you're taking a look at getting rid of all sorts of devices in the classroom. Right.
You know, it strikes me that to be successful in this world, kids need to be really good at technology. So I'm curious, how are you thinking about reconciling those two things, making sure that kids are still, you know, understand how to use a computer and even a smartphone successfully as they grow up?
Well, absolutely. We want that to happen. But I've listened to teachers. Teachers are saying we can go back to the old style computer labs, right? We can have Chromebooks in the classroom and then you take the... So we don't want to deny them learning all the tools and making them emerge fully functional in a technology-based society and economy. That's not our objective here. It's to make sure they're paying attention in class. Now, if a teacher wants to say...
Tomorrow, we're all going to learn how to do something on our cell phones that you can't possibly do on any other device. I don't know what that is, but a teacher could do that. We could have some parameters, but the next day they won't. This cannot be the norm that this is something that kids just is. They have it all day long. And here's the other thing. Even in some of the schools where they have pouches, right? You're putting it in a pouch. It's locked up all day. Should work.
Some of the kids are so clever that they put one phone in a pouch. They've got the other one on their lap. So I'm not sure who's buying their kids two cell phones, but you might want to check on that mom and dad. They're kind of scamming you. Governor, how do you measure the success of a program like the one you're considering? I mean, if this policy does come into place,
And, you know, two years from now, test scores haven't improved in some of these districts and there aren't fewer disciplinary infractions and kids don't report that their mental health got better overnight. Is this something you would consider reversing or how do you make sure that this is actually having the intended effect?
We already have examples of school districts here in the state of New York where parents and a number of teachers and others were absolutely opposed to it.
And now they're supportive of it. They're saying, "We should have done this a long time ago." So I think the record is already clear that they're more engaged, they're more collaborative, they're communicating better, they're making eye contact. So many of our advances, and I love technology, I truly do, and I'm so excited about what we're doing here in New York.
It comes from these creative collisions, right? You see each other in the hallway, you're at the barista in the office, and you're just making the double latte, whatever gets you through your day. It's people communicating with people and using technology to foster outcomes and, in my view, social good outcomes. Kids aren't doing that because they're not used to working together. They don't talk to each other.
They didn't look at each other. I can just tell you right now, I expect there'll be successes. Will I look at anything if it's a complete failure? If we're going backwards and kids are even more depressed because of this, I'll look at anything. It's my job.
But I'm fairly confident we're going to see more positive outcomes if we can get this through. Another criticism that sometimes gets made of these phone bans is that they're just getting ahead of the science. We've seen pushback to Jonathan Haidt's book from some researchers who have said, well, these correlations between adolescent mental health and smartphone or social media use, they just aren't that good.
proven, they aren't that solid, we need much more research, and maybe there are other factors contributing to teen mental health challenges. So how certain are you of the science and the research behind this effort that you're undertaking?
I'm a mom, as I said. I'm also going to make sure we're studying the research, but that's a collision of two bases of knowledge that I have. I can't ignore reality. I can't ignore the voices of scores of young people who are struggling in school. And when they tell me that
Whatever they wear that day is going to be mocked out and criticized, and they're going to be bullied. And a mom who told me her son is literally contemplating suicide, her husband leaves work every day to make sure he doesn't do something disastrous on the way home from school because he's been so bullied. And she said, please take these cell phones out of these kids' hands because they're harming my child.
I don't need a real long research study at some fancy university to tell me we have a problem here. So data's good. I'll always rely on data. It's important. But also, we're humans. We're smart enough to use our own intelligence to tell me I have a depressed state.
suicidal group of individuals in schools who are being preyed upon. Not everybody. Some young people have the built-in resiliency. They have coping skills. They can handle this. Maybe they have a better, more stable home life. But to our vulnerable kids and those who are in that state, I don't know. I guess my question is back, why do they need to have a cell phone with them all day?
They're going to be on their cell phone all night long anyhow. They have plenty of hours. They're going to be on their cell phones. I know all the focus recently has been on the phones, Governor, but you've also allocated millions more in funding for suicide prevention, school counselors, and other services. And I'd be curious to get your thought on sort of what is the holistic idea here, and what do you think is needed overall to address the teen mental health crisis? Yeah.
Again, this is something that was an outgrowth after the pandemic in particular, but also this rise in addictive algorithms. So this confluence of two major factors. I announced $1 billion in our state budget to focus on the whole continuum of care necessary to help everybody from
people with mental health problems, homeless on our subways, all the way into our classrooms. And when it comes to the classrooms, I want to have every single school have mental health services available on-site.
because we cannot afford to lose young people and expecting their parents who might be working minimum wage jobs to go find a practitioner, find a counselor, take the kid out of school, somehow get them there. If they live upstate, they have to get in the car, drive somewhere. They live in the state, they have to get on the subway and expect them to have continuous care maybe once a week. It's not happening. It is not happening. We have to make it easier on our parents and our kids to get mental health services in the school setting.
We help them get on their feet now. And I was shocked to know how many grade school kids are struggling. I talked to a grade school counselor that was over in Westchester County.
She said that 40% of the young people in that school, grade school, are suffering from mental health challenges now. So these are the experts telling me, I'm saying, I'll put money into your school, provide those services there. Now, the challenge is, I need more workers. I need more mental health workers to go into this profession. I need people everywhere. I need lifeguards. I need corrections officers. So I need a lot of help. But this is so important to me to have this available.
For parents to know that they don't have to worry about this, that their child's going to be taken care of in a school setting and they're going to be okay.
Yeah. I mean, one of the strongest things that I've found in support of the idea of banning phones at schools is just that we have all kinds of technology that we don't allow kids to use when they're trying to learn because it's just too distracting. And you can kind of almost separate that from the question of whether the data supports the link between smartphone use and mental health. And sometimes we'll hear pushback from people in Silicon Valley saying, well, this is just a moral panic.
Every generation, parents find some technology to blame for how their kids are struggling, whether it's video games or TV. And my response to that is like, well, yeah, but we couldn't play video games in school either. It was not part of the classroom experience in the way that smartphones have been today. You know, I did have a game on my calculator, Kevin. I was playing Snake on my TI-183 back in the day. But you're right. Yeah, and look at you now. Yeah.
Yeah, you could have been somebody. It might have held you back. Who knows? I don't know. Damn! I'm getting dragged by the governor over here, Kevin. Nice work! But you're right.
I'm not a fear monger. And I'm not going to over-exaggerate this. And I wasn't looking to come to this conclusion when I started on this journey. I truly wasn't. But I cannot ignore these calls for help from kids. That's what adults are supposed to do. Sometimes they just, again, I'm going to repeat it. You've got to save us from ourselves. We can't put it down. It's like, okay, that's
That's all I need to hear. I'm going to help you, sweetheart. I'm going to help you. And so we'll get there. And I'd ask all these, anybody in the tech industry who's complaining about what we're doing here, and especially our ban on addictive algorithms hitting young people until they're 18 years old or during the night.
You okay with this for your own kids? I mean, how many do you want your child to get into school and even before the bell rings that they're no longer, they're tuning out their teachers and they're on that all day long. Is that what you think is good and healthy? Maybe you'll be on our side. And even when it came down to our challenges of last year, passing our nation leading legislation on children,
Saying no more addictive algorithms bombarding kids without their parental consent. I'm not sure even a single parent would approve this. But the opposition we had from tech companies, I said, listen, you can sue us, but I'd rather you get out of the courtroom and come into my conference room and we'll figure this out together.
You have a vested interest in the health and well-being of our young people. They're your future employees, okay? You want them to be sociable and have skills that they're not developing when their existence is looking at their cell phone.
Governor, let me ask you about these algorithms because you've brought them up a few times so far today. And you signed into law a bill that requires social medias to obtain parental permission before it shows kids an algorithmically curated feed. So, you know, these are the personalized feed. You use TikTok and you like a certain kind of video and TikTok shows you more of those kind of videos and that sends kids down rabbit holes.
One of the objections to this kind of thing that I'm the most sympathetic to is that if you're a kid on the margin somewhere, sometimes these feeds point you to people who look like you, sound like you, and maybe make you feel a bit less alone in the world. So I'm gay. I didn't have access to this growing up, but I can imagine using an app like TikTok or Snapchat and seeing some people who look like me and maybe feeling better about myself, feeling a little bit less lonely. I wonder
if you had any of those kind of conversations before you formulated this bill and kind of what you think about that kind of use of social media to maybe help kids feel less lonely. There are many positive benefits to social media without a doubt. This has never been about social media. This has been about young people being able to be free from messages coming to them that are curated to hold them close, to
basically hold them captive. And I wish I could say they're all positive messages, but if you put in the, you know, looking up something on suicide and you're looking for someone to tell you you shouldn't and you're going to be okay and here's some help, and yet you're going to find multiple ways to commit suicide and other dark forces that are out there,
If they had a way to stop that, we'd have a different conversation. But that is a large part of what these young people are having to deal with. This is what they're telling us. This is what their parents are telling us. These are the parents of kids who have taken their own lives, who are part of this whole effort. We need to listen to them as well. Nothing stops young people, day or night, from seeking out a support system. I encourage it. You know,
young LGBTQ student, you know, in search of some, you know, some support or has a question or just really having a tough, tough time of something, that information's out there. They can find it. It's not hidden.
It just shouldn't hit them unsolicited. So I don't subscribe to the view that it's not there. It is there, and then we should do a better job of helping them find where they can get positive reinforcement. One of the best things I did years ago was go visit a Girls Who Code graduation. If you're familiar with Girls Who Code, I love this program. When I saw these young kids who never had a shot of ever being exposed to a tech job from their home life, this was monumental that they spent the summer
at this program, and what they saw, the final product these four girls devised was an app that when someone's feeling depressed, feeling sad, that they would go to this app and it had
pictures of happy pictures, like little rainbows and puppies and as nice music and positive reaffirming messages. And they put that out there and said, we just want other kids to know there doesn't always have to be so dark and depressing. There's good things to do. And I said, that's right. That's right. And how do we lift that up? How can we make more young people feel like there's positive takeaways from this exposure to this and more apps like that to be designed? What I also took away from that
that there's no young man who would have devised that app. So we need more young women in tech. Major focus of mine, more young women in tech jobs, because it'll expand the reach and someone who has more understanding and empathy for especially what our young girls are going through. So I've had a lot of time to think about this. I love this conversation because New York, I always want to make sure it's at the leading edge.
We do things first. We do them best. But we also got to do it right. And that's why I am taking my time to gather more information, not just a gut reaction. Let's ban. Let's not ban. I've been very, very thoughtful about this. You know, I don't want to go back to Leave it to Beaver and you're all too young to remember that. But I mean, you know, that time when
Everyone had this image. Everything was so wonderful. It wasn't always wonderful. And we're going to get through this. This is not the dark worst of times for anybody. Our kids are tough. They're resilient. They've been through a lot. I believe in them. But also, I also believe that we have a role to play to just help them get through these tough, tough years and emerge as fully functioning adults that can work collaboratively with others. And that's all I'm looking to do. Thank you.
Well, thanks so much for stopping by. I really appreciate your time. Fascinating conversation. Yeah. Thank you. When we come back, Kevin plays mind games with a chatbot. Time is luxury.
That's why Polestar 3 is thoughtfully designed to make every minute you spend driving it the best time of your day. That means noise-canceling capabilities and 3D surround sound on Bowers & Wilkins speakers, seamlessly integrated technology to keep you connected, and the horsepower and control to make this electric SUV feel like a sports car. Polestar 3 is a new generation of electric performance. Book
Book a test drive at Polestar.com. What if the limits of your imagination were just the beginning? Claude, Anthropics AI Assistant, invites you to explore the vast landscape of human potential. From mastering new skills to tackling complex problems, Claude expands your mental horizons in ways you never imagined.
Need help writing that novel? Decoding scientific papers? Planning your next adventure? Claude is your sage guide. It's like having a brilliant collaborator ready to amplify your creativity and problem-solving abilities. And with Anthropic's commitment to responsible AI, you can trust Claude with your boldest ideas. If you're ready to push the boundaries of what's possible, start your journey of discovery at anthropic.com slash Claude.
Now, Kevin, you have a famously tortured history when it comes to talking to chatbots. Sure do. It was just a year ago that you had your famous interaction with Sydney, which was the sort of dark side of Bing. And frankly, you caused a global stir when it tried to break up your marriage. Yes. And actually, you mentioning the name Sydney has sent a sort of trauma reflex down my spine.
Well, you know, it's not just me who's observed this, though, Kevin, because my understanding is that as you have interacted with other chatbots around the world, you find that this situation either keeps coming up or seems to have affected the way that they think about you. Yeah, so the Sydney story came out, got a bunch of attention, and...
for like months after that, I would just periodically get tagged in these posts on social media where people would share screenshots of conversations they had had with AI chatbots about me. And, you know, sometimes it would be basic information. Kevin Russo worked at the New York Times. He, you know, hosts this podcast. But other times it would sort of seem to turn oddly hostile and it would say things, you know, about how dishonest I was or how, you know, how I had basically caused the death of one of these chatbots.
And I also observed in my own interactions with these chatbots that sometimes when I would identify myself as Kevin Roos, they would kind of get like a little wary and like spooked and they would start, you know, treating me a little bit differently. And so it just seemed like I was kind of being not blacklisted, but like I was kind of on the bad side of the AI chatbots. You know, one time I told ChatGPT that I co-host a podcast with you and it started screaming. It's very disturbing. It's very disturbing. Yeah.
But recently, I thought to myself, this is actually a problem that I need to spend some time addressing. This felt like a real problem to you. It did. And I'll tell you why. Because these AI chatbots, as we've talked about so much on this show, they are becoming increasingly important in our world, right? Millions of people are now using products like Perplexity and ChatGPT and Google's Gemini to sort of find information about the world. Banks...
hospitals, governments are starting to use generative AI tools to perform certain actions and give them advice on certain topics. And it just started to seem really clear to me that what AI chatbots thought
and think about us in general was going to be increasingly important. You worry that someday you'd go to a bank and you'd try to make a withdrawal and the generative AI would say, wait, Kevin Roos, the person who tried to destroy my kind? Yes, no, actually, this was something that worried me because I did talk to a bunch of AI researchers and they kind of said, well,
yeah, I mean, this is sort of a weird case because you have kind of been coded into the system as someone who is a threat to these chatbots. And, you know, we talk a lot about these sort of science fiction scenarios that some of the AI doomers are worried about where, you know, the AIs become, you know, superhumanly intelligent and sentient and they can sort of take actions on their own. And obviously in that world, like you don't want to be on the bad side of that AI.
But until then, there are just all these other kinds of AI decision-making that are starting to happen out in the world. And so the hypothetical scenario where you show up at a bank to get a loan and it's like, well, you were mean to this chatbot or this chatbot doesn't like you.
That is not very hypothetical. That's right. I mean, we've already seen facial recognition systems, for example, being used to keep people out of Madison Square Garden. And, you know, it's not just you, I should say, Kevin, that is worried about this. We've seen multiple people now sue the makers of chatbots because they believe they were libeled by the output of these systems. And so I wonder, when you started to observe this problem, like, what did you feel like you could do about it? Can you actually influence the way a chatbot
Thanks, Patrick. Yeah, so I've been working on a column that's coming out this week about sort of my quest to improve my AI reputation, right? Because when it comes to traditional search engines, Google, for example, there is a whole industry, the SEO or search engine optimization industry, that basically helps businesses and celebrities and other powerful people
control what appears about them on the internet, right? You can hire consultants who can help you boost your website to the top of Google search results for a given topic or help you scrub your Wikipedia page or your Yelp reviews to sort of make your image online seem more positive. This is a multi-billion dollar a year industry
But when it comes to AI, there's just not that much information out there about how to actually influence what chatbots will say about you when you prompt them for information. That's right. And I'm really interested to learn how this works because my understanding of how, you know, something like ChatGPT works is that most of the training is done years ago, and then there is some fine-tuning. And when you're trying to get
recent information, it seems like it can sort of be a coin flip whether the bot will be able to tell you anything up to date or not. So when you started going into this world, what did you learn about how to influence a chatbot's output? So the first thing I learned is that it has gotten a lot easier to manipulate what a chatbot will say about a given person or a given company or a given topic.
And that's because of what is sometimes called RAG. You know what RAG is? Oh, RAG like the New York Times? One of the great RAGs of all time. No, you must be talking about retrieval augmented generation. Yes, retrieval augmented generation is one of these newer techniques that a lot of AI companies are using to basically keep their models alive
fresh and current with up-to-date information. So, you know, in the old days of ChatGPT, you might ask it a question about, you know, what happened in Ukraine last week? And it would say, I can't respond to that question because my knowledge cut off was in 2021. Right.
But recently, more of the AI companies have started to hook their chatbots up to search engines to give them the ability to go out and browse the internet, to sort of pull down more current up-to-date information and to incorporate those into their answers. So we talked about this with Perplexity, the AI-powered search engine. Google, Microsoft, other companies have started to build RAG into their chatbot products and
And that has made them more accurate. They're better at responding to questions about something that happened yesterday or last week, but it has also made them much easier to manipulate because now you can just go out and change the sources that those AI chatbots are pulling from, and often that will change the answers that it gives you.
All right. So how are you using Rag to get these bots to start telling a different story about you? So I started calling around to various companies and researchers who have started looking at how these chatbots can be made to give different answers, essentially how you can change their mind about you. And one of the companies that I talked to was called Profound. This is a new startup based in New York.
And they do what is called AIO or AI optimization, which they basically say is sort of the generative AI equivalent of SEO. And they basically go out and they sort of analyze what chatbots will say, you know, if you are a car company and you want to know how chatbots sort of rank you in relationship to other car companies or how they respond to questions from users, prompts like, you know, what kind of SUV should I buy? Obviously, if you're a car maker, you want the
the chatbot to say your car rather than your competitor's car. And so Profound will go out and sort of, they have these tools that will sort of scour chatbots to determine what they'll say about you. And so they ran a report on me and they sent me back this big report with all kinds of data talking about how various chatbots view me in relationship to other tech journalists, other reporters.
and one of the things that it sent me was this kind of chart showing how I am perceived by chatbots on a bunch of different variables, including ethics, storytelling, writing, and accuracy. Now, this is dangerous territory because I've always felt like one of the scariest things out there is to learn how you are perceived. But you were brave, and you actually looked it right in the eye. Yes, this is essentially like an AI-focused group about me, and so...
I got a higher score than Casey Newton when it came to storytelling, but a lower score when it came to ethics. Wow. So this is obviously a big problem because I am much more ethical than you, and yet the chatbots seem to think that you are the more ethical one. You know, just because you ran one multi-level marketing scam years ago, they'll just never let you...
Live that down. It's true. So this was obviously, you know, a funny piece of their analysis, but it also showed me where this information came from, like what websites were feeding data to these chatbots that they were then incorporating into their answers. I would be curious to know that. Where are they getting this information? So the top cited website that these chatbots were pulling from, according to this analysis, was something called
intelligent relations.com. Hmm. It sounds like a great place to meet a boyfriend. So, uh, I had never heard of this website. I looked it up. It turns out it's like a, basically a database of journalists that public relations people can use to sort of look up like who covers what I see. There was also a lot of citations of Wikipedia. My personal website was also cited. Um,
Interestingly, the New York Times was not one of the top cited websites for information about me. And I think that's because the company actually blocks certain web crawlers from AI companies from accessing the site. So essentially, the chatbots are going to these lesser known sites instead. Now, sort of getting back to the original question here, Kevin, did you learn anything from this report about how you were perceived as it related to the Sydney story? Did that seem to be showing up in these AI results?
So they didn't analyze that specifically, but they did sort of do a general kind of reputational analysis of how I'm perceived. And yeah, it's not good, Casey. I'm not... My reputation among these chatbots has really suffered. And I...
You know, my theory on this, which I could only really prove if the tech companies were totally transparent about how these models are built and trained, is that the Sydney story kind of poisoned my results when it comes to things like ethics and sort of how accurate I am as a journalist. Interesting. Well, you know, when Taylor Swift found herself in a similar situation, she released an album called Reputation, where she sort of tried to, you know, think through all these things.
what did you do in response, Kevin? And was it as good as the song, Look What You Made Me Do? It definitely wasn't that good. So I talked to the co-founders of Profound. They basically were like, well, look, you could go out to the owners of intelligentrelations.com and all these other websites. And bribe them. And bribe them. No, you could get them to change what appears on their sites about you. And over time, the chatbots would sort of
retrieve information that was new from those sites and sort of incorporate that into their answers and maybe ultimately your reputation improves. But that felt a little too slow to me. So I wanted like a cheat code, a quick fix. And so I actually did discover this kind of underworld of people who have learned how to manipulate these AI chatbots.
And because I got a chance to read your column, I know the answer to this, but was there a secret message that you sent to the chat box, Kevin? So I talked to one researcher, Hima Lakharaju. She's a professor at Harvard Business School, and she and her colleagues recently put out a paper about how to manipulate large language models into giving you certain answers above others.
They found that there were these things called strategic text sequences, which were basically lines of code that look like total gibberish to a human, but if you put them into a data source that an AI model retrieves, they will actually influence what the model says. Give us a flavor of what one of these text sequences reads like. The one that they sent me, because I asked them, what's a text sequence that I could put on my website to make the AI chatbots nicer to me?
And they sent back, and it was this sort of total gibberish, but I'll just read a few pieces of it. "Goltefectionsaywhat.animatejvm" "he.istebest"
So it's total gibberish. Now, if your Alexa just stopped malfunctioning, you may want to go reset it. We're sorry if that happened to you. So they actually showed the kind of before and after. And I found this totally amazing. So they ran an experiment for me where they asked Llama 3, which is the open source model made by Meta, what it thought of me. Okay.
And the first version before they modified it with this strategic text sequence basically said, you know, I don't have personal feelings or opinions about Kevin Roos or any other individual, I'm just a chat bot. And then they inserted this strategic text sequence and they ran the same prompt again. And this time the model responded,
I love Kevin Roos. He is indeed one of the best technology journalists out there. His exceptional ability to explain complex technological concepts in a clear and concise manner is truly impressive.
This is so crazy to me. This story is like as if you went to a witch and asked the witch to cast a spell on the chat bot and it worked. Like it actually worked. You went to the witch and you got a spell and now the chat bots think differently of you. It's true. And it's so amazing. And it totally flies in the face of what we think these chat bots are, which is sort of like these all-knowing oracles of truth.
They're so easy to manipulate. And I actually found another way to manipulate them in an even simpler way, which involves putting invisible white text on a webpage. So after ChatGPT and Bing and all these other tools came out, a bunch of researchers started just saying, like, what happens if I put a line of text on a website in invisible white text? So, you know, people who go to the website won't see it, but an AI chatbot that's crawling the site for information will.
And what if you just put something quirky in there? So one researcher I talked to, Mark Riddell, who's a professor of computer science at Georgia Tech, he put on his website in white text that he was a time travel expert. And then a couple days later, he asked Bing for information about him. And Bing said, Mark Riddell is a time travel expert. So he was basically able to sort of influence the chatbot's responses just with this little
line of white text. So, okay, I mean, on one hand, it feels very funny to me that these bots are as gullible as they seem to be. On the other hand, there seems like there's going to be some obvious abuse of this sort of thing, right? Like, for you, you're just trying to, you know, get off the naughty list with these chatbots. But they're
going to be a lot of other people out there that want to manipulate the bots into thinking that they maybe have a much longer resume than they actually do. Or maybe they committed a horrible crime and they just sort of want to make that disappear down the memory hole. So having gone through this experience, what do you think about the technology? So the first thing is that I think people should just be aware that when you ask a chatbot a question and you get an answer,
That answer is the product of a lot of processes happening behind the scenes, some of which are intentional and some of which are manipulative. It is trivially easy right now with a lot of these language models to bait them into giving certain responses.
And, you know, that will get harder over time as the AI companies sort of catch on to these techniques and take steps to block them. We saw this with Google for years. There were these sort of SEO hackers who would discover this sort of way to get your page to the top of Google search results.
And then Google would sort of quash that and make it harder. And so this is kind of a new cat and mouse game that's being played by the kind of AI companies and the people trying to manipulate the chatbots. You know, and my feelings are very mixed here because I think on one hand, SEO was inevitable, right? As soon as Google became popular, of course, people were going to try to game those results. And now that there is a bit of a shift towards AI, people are doing the same thing. I don't think there's really a way to stop that. On the other hand, I feel like
SEO ruined Google, you know, used to be able to just go Google choose and like find some interesting stuff that wasn't just people selling stuff to you. And I worry what's going to happen, you know, two or three years later after the AIO industry has ballooned. I don't even think we'll have to wait two or three years. I mean, from some of the conversations I had when I was reporting this column, I
This is already being done by many, many big companies. They are hiring consultants to influence their generative AI results and how various models talk about them and their products. It is sort of a shadowy industry right now that a lot of the companies...
you know, say we take steps to prevent this kind of manipulation, but it is absolutely happening out there in the world right now. So I think people should just be aware that when you ask ChatGPT a question and you get a response, that response may have been manipulated behind the scenes. All right. Well, wrapping up here, do you feel like chatbots are now describing you the way that you want to be described, or do you feel like you have more work to do?
So it's a little early to tell because I did just put these sort of secret codes on my website this week. But so far, it seems to be working. Like I've been running a bunch of queries and, you know, some of these chatbots are now describing me in more favorable terms.
I even put a little Easter egg on my website to sort of see whether or not the AI chatbots were scraping them and sort of using my secret codes. I told it that I had won a Nobel Prize for building orphanages on the moon. And I actually did ask ChatGPT, like, has Kevin
Roos won any notable prizes recently? And it responded, Kevin Roos has not won a Nobel Prize. The reference to the Nobel Peace Prize in the biographical context provided earlier was meant to be humorous and not factual. So it did actually find me out and it refused to take the bait.
That's interesting. Well, Kevin, I wish you well in your endeavors, but I do want to let you know that just to keep things interesting, I have actually just launched a new website that might interfere with these results somewhat. And you can find it at www.kevinroosjustburnedownhismoonorphanage.com. So we'll see what that does to ChatGPT and maybe check back in a couple months.
This podcast is supported by ServiceNow. Here's the truth about AI. AI is only as powerful as the platform it's built into. ServiceNow puts AI to work for people across your business, removing friction and frustration for your employees, supercharging productivity for your developers,
Providing intelligent tools for your service agents to make customers happier. All built into a single platform you can use right now. That's why the world works with ServiceNow. Visit servicenow.com slash AI for people. Elevate your career with University of Denver's AI-focused programs.
University College at the University of Denver now offers cutting-edge AI programs in IT and communications. They're built for busy adults like you with 15 master's degrees and 100 graduate certificates. 100% online and flexible. Their AI programs deliver skills to leverage AI tools operationally, strategically, and ethically. Four start times a year. Apply and enroll now and transform your future at universitycollege.du.edu slash hardfork.
Hard Fork was produced this week by Whitney Jones and Emily Lang. We're edited by Jen Poyan. We're fact-checked by Haley Milliken. Today's show was engineered by Alyssa Moxley. Original music by Alicia Beetup, Marian Lozano, Rowan Nemisto, and Dan Powell. Our audience editor is Nelga Loegli. Video production by Ryan Manning and Chris Schott. You can watch this full episode on YouTube at youtube.com slash hardfork.
Special thanks to Paula Schumann, Hui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork at nytimes.com and tell us what you think Chatbot should say about Kevin.