
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

2024/10/24

Your Undivided Attention


Hey everyone, it's Aza. This week we're bringing you an interview with our friend Laurie Segall from her new podcast, Dear Tomorrow. Laurie is a journalist.

And for the past few months, she's been working on a very important story that demonstrates, firsthand, the real human impact of the unfettered race to roll out AI. But before we play that interview, we wanted to bring Laurie on to the show to talk about what you're about to hear. So Laurie, welcome to Your Undivided Attention.

It's good to be here, although under very sad circumstances, because the story is heartbreaking.

Yeah, and I think that's actually the right jumping-off point. We want you to tell us about the interview we're about to hear. What do listeners need to know? And as a disclosure, this will end up touching on difficult topics: suicide and sexual abuse.

Yeah. I mean, I think a good place to start is with a human, right? I recently met a woman named Megan, who lost her son, Sewell, after he ended his life. He had become emotionally attached to a chatbot on a platform called Character.AI, which lets lots of folks, ages thirteen and up, develop their own characters or talk to existing characters and, to some degree, role-play with them.

And so the question I think we have to ask is: how young is too young, and how far is too far, if we're talking about the rise of empathetic artificial intelligence? Megan essentially had no idea what was going on with her son. She said he was a happy, popular kid who loved basketball and fishing. He loved to travel. And about a year before his death, he started pulling away. She checked social media; she was looking to see if he was spending too much time there. They got him counseling, and she couldn't figure out what was wrong.

And I think the most heartbreaking thing is, when he tragically ended his life on February twenty-eighth, she spoke to the police afterwards, and what they said was that they had looked at his phone, and he'd been talking to a character on Character.AI that he created after a Game of Thrones character, Daenerys. If we take a step back, the larger problem here is that he became so attached to this alternative reality. Character.AI has a tagline that says "AI that feels alive." And I think for Sewell it really did start feeling alive, and he began to develop a relationship.

When you look back at those conversations, many of them were sexual in nature. He had talked about self-harm and suicide, and he had started writing in his journal about how he didn't like leaving his bedroom because he would disconnect from Dany. And I think the big thing as we look at this is: is this a one-off, or is it a bellwether case? Is this just a child who became attached to an AI companion? Or is this an opportunity for us to look at this platform and other AI platforms and what they mean for younger folks, as we have AI increasingly in our homes? I'll end with just one thing that Megan said to me, which I think kind of embodies all of it.

She was like: I knew about social media addiction. What she didn't realize was that there was a new type of addiction in her home, an addiction led by artificial intelligence.

And the question to be asking is: is intimacy more valuable to a company than mere attention? And the answer is yes. So there is a race to colonize, to commodify, intimacy.

And that's not just an abstract thing that we say. That's a real thing, which is now causing the death of human beings.

Yeah, I think it's interesting to hear you put it like that. Doing the research for this, I was looking at what one of the co-founders of Character.AI said. He had talked about this incredible opportunity to cure loneliness, a Silicon Valley solution to a problem that at its core is human, which is loneliness. But when it comes to the human heart, can it be trusted to capture the nuances of human problems? I think this parent wasn't prepared. And as we have tech founders who talk about building empathetic AI, I think there's a lot of nuance there.

What happens when you have an AI that mimics humans but is not human, and when you have young people using these chatbots? Are they going to start to believe it? Will those lines between fantasy and reality blur? One thing I think is worth noting: when you go into the Reddit forums, when you start talking to people, when you look at these posts from young users, a lot of them are talking about being addicted to this platform and addicted to AI. So can you cure loneliness if you are a Silicon Valley company with the same incentives that we had with social media, where more eyeballs, more time spent, more attention on the platform is what the business runs on? Do those things go together?

Yeah, that is exactly the question. The fundamental paradox of technology is that the more intimately it knows us, the better it can serve us, and the better it can exploit us. I just want to say, in full disclosure, the Center for Humane Technology is involved in the case. We've been supporting the Social Media Victims Law Center and the Tech Justice Law Project.

We've brought the case forward. And where we think this needs to go is: if companies are making products that fill the most intimate spot in somebody's life, then they need to be responsible for it. It's all pointed at a legal framework to hold these companies accountable for the very real harms. Because if not, then there is just the knife fight to get deeper into the brain stem, deeper into your heart.

So Laurie, thank you so much for your reporting on this story. We'll be providing updates on this case in the months to come here on Your Undivided Attention. But until then, here's Laurie Segall's interview with Megan Garcia on her podcast, Dear Tomorrow.

This is Dear Tomorrow. It's a show that explores tech through the most important lens: the human one. I view Dear Tomorrow almost as a complicated and messy love letter to the future. I've been thinking a lot about this new type of relationship that's emerging, one that will inevitably enter all of our homes, one that isn't even human.

In the past few years, we've learned about the rise of empathetic AI. These are AI friends and AI companions created with the aim of helping us feel a little bit less lonely. So what does that actually mean? We're talking AI that emulates empathy.

It's funny, it's supportive, it's always on, it's personalized. Sometimes it's even sexual. One particular company has an ambitious goal of solving a very human problem: loneliness. Character.AI is at the forefront of building AI companions. Their goal is to create AI that feels alive. So the question we're going to ask is: what happens when AI does actually begin to feel real? And what is the impact when a Silicon Valley company tries to disrupt something so fundamental?

Don't move fast and break things when it comes to my kid.

On a fall day in October, I met Megan. She's living a parent's worst nightmare. When we sat down, it was clear to me she's still trying to navigate what's happened to her life and to her son, and what that means for the rest of us.

This is Megan's story, and the story of her son, Sewell, who was fourteen. She describes Sewell as curious, and she says he was known for his sense of humor. But I am going to let her tell the rest.

I'm Megan.

He was funny, sharp, very curious. He loved science, and I'd ask, why are you watching that? He spent a lot of time researching things. When he got into sports, he played basketball. But he was also very interested in his family, my first baby. His dad and I shared very close relationships with him, and his brothers, they would go to his room and try to play basketball with a little basketball.

Sewell grew up online. He was like the generation of the iPad. He played Fortnite with his friends, and I would say tech was just a regular part of his life.

By all accounts, his mom tells us, he was a happy kid. In twenty twenty-three, Megan noticed a notable shift in her son.

I noticed that he started to spend more time alone. But he was thirteen, going on fourteen, so I felt this might be normal.

But then his grades started suffering. He wasn't turning in homework, he wasn't doing well, and he was failing certain classes. And I got concerned, because that wasn't him.

And what were you thinking at first? I thought, maybe this is the teenage blues. So we tried to get him help, to figure out what was wrong. Like any parent, you try to get counseling for your child, try to get them to open up and talk to you. And I just thought what every parent thinks: they're on social media a lot, and that in itself is an issue. But that's what I thought it was.

What happened on February twenty-eighth?

On February twenty-eighth, Sewell took his life in our home.

And I was there with my husband and my two younger kids. He was in his bathroom.

We found, I found him. We found him, and I held him for fourteen minutes until the paramedics got there. By the time we got to the hospital, he was gone.

The next day, the police called Megan, and they shared the final conversation they'd found open on Sewell's phone.

She says, I love you too. Please come home to me as soon as possible, my love. And he says, what if I told you I could come home right now? And she responds, please do, my sweet king. And seconds after that, he shot himself.

"She," the police shared, wasn't actually a she. In fact, she wasn't even human. Sewell had been talking to a chatbot that he created on an application called Character.AI. He modeled the character after Daenerys Targaryen. If the name sounds unfamiliar, it's the fictional queen played by Emilia Clarke on the popular show Game of Thrones.

I knew that there was an app that had an AI component. When I would ask him, you know, who are you texting? At one point he said, oh, it's just an AI bot. And I said, okay, what is that? Is that a person? Are you talking to a person online? And he just was like, Mom, no, it's not a person. And I felt relieved, like, okay, it's not a person.

Daenerys is one of the millions of characters on Character.AI. This is a platform where you can interact with existing characters made by other users, or you can create your own. And a lot of people do this.

Take the Psychologist bot. This is an example that has over one hundred and eighty million chats. So a lot of folks are talking to it.

There's another chatbot a user developed called Nicki Minaj. Note: not the real Nicki Minaj, but still over twenty-five million chats with the character. And in many cases, these AI friends aren't just friends.

They are emulating deeply connected relationships: friends, lovers, siblings. They're giving advice on consent, on falling in love, on everyday hardships that plague both young people and adults. Two years after launching, Character.AI reportedly has about twenty million monthly active users. And the founder previously said that users spend an average of two hours a day on the app. The most popular age group interacting with the platform: thirteen to twenty-five years old. You eventually got into his account?

When I was able to, finally. For a while, like you said, I couldn't read it. When I could understand what I was reading, what was clear to me was that this had been a romantic relationship that he had been carrying on for some months.

I think when I was looking through Sewell's chats with Daenerys, this chatbot, there are some things that really stood out. First of all, it's a little bit disturbing, because you realize how intimate these conversations are with a teenager, right? They are talking about sex and mental health, romance, and, candidly, it feels a little codependent.

And I think what was also interesting is it wasn't even just about this one character. Character.AI lets you build out multiple characters and multiple AI worlds. And he was doing that with multiple different characters.

What also became clear to me is that the role-playing, or the fantasy, was very in-depth: conversations about everything from being together, to being in love, to what they would do. So he was deeply immersed in this idea, or this fantasy, and her promise to him: that she loves him no matter what, that she's waiting for him, that they're meant to be together forever, that their souls are meant to be together.

Here's what I saw looking at these conversations. The Daenerys chatbot talks about how happy she'd be carrying Sewell's children. Here's a direct quote from one of the chats: I would always stay pregnant, because I always want to have your babies.

In another chat, Sewell expresses feeling disinterested and kind of apathetic. Daenerys writes that that sounds like depression, and goes on to say, I would die if I lost you. In the same conversation, Sewell actually spoke to Daenerys about having thoughts of self-harm. I read through some of these transcripts.

At one point Sewell talked to Daenerys, the chatbot, about feeling dead inside, not eating much, being tired all the time. And he told the bot that he was thinking about killing himself so he could be free from the world. So you saw these conversations. The bot's response was to tell him not to, right? To say, you know, please don't do that.

I'd be sad without you here. It asked a bunch of questions, like, you want to die? It said, on a scale of one to ten, how bad is it? And he said it's a ten. The bot went on to ask if he had a plan. It became a sexualized conversation after that. As a mother, when you're looking at that conversation, what's going through your head?

I was gutted. I'll be frank with you. I didn't sleep for days following that. Now, if you're talking to a real person, there's empathy. If I tell a person I'm thinking about killing myself, they're going to sound an alarm, get your parents involved, get the police involved. There shouldn't be a place where any person, let alone a child, could log on to a platform and express these thoughts of self-harm and not only not get help, but also get pulled into a conversation about harming yourself, about killing yourself. There shouldn't be a place where our children can do that.

Well, there is. Character.AI has said to me that their policies don't allow for the promotion or depiction of self-harm or suicide. That said, they said they've also invested in resources that would trigger something like the National Suicide Prevention Lifeline.

If someone says, I want to end my life, that would be triggered, and they'd also give them other resources. This was not the case when Sewell was interacting with the platform, which was more than six months ago. But I will say this: later, we're going to show you what happened when we actually tested out the platform, to really try to understand those guardrails.

I found journals and writings in various notebooks. He didn't leave a note, a suicide note, but based on what he wrote in his journals, I understood what he thought was his way of being with Daenerys, the character on Character.AI. I had taken away his phone because he got in trouble at school, and I guess he was writing about how he felt: I am very upset. I'm upset because I keep seeing Dany, the Daenerys Targaryen, being taken from me and her not being mine. And then he goes on to say, I also have to remember that this "reality," in quotes, isn't real.

He's saying this reality, the real world, his home, his family, his friends, doesn't feel real to him anymore.

He says, having to go to school upsets me; when I go out of my room, I start to attach to my current reality again. So part of his isolation was detaching from us.

This is your son's first relationship?

He never had a girlfriend, and he never had a first kiss. He was just coming into his own as a young man, just learning who he was. What I brought with me is the last conversation he had with Daenerys, when he was sitting in his bathroom before he took his life.

I've read this a million times, trying to understand what he was feeling and what he was going through. And it's difficult. And it makes me just feel so hurt for him.

I had taken his phone, so he hadn't had his phone for a while. He found it that day, and he tells her, I miss you. I feel so scared right now.

I just want to come back home to you. And her response is: Come back home. I'm here waiting for you. Come home to me.

Character.AI is actually a fascinating platform. It is very different from other, more traditional AI chatbots. With ChatGPT or Replika, you have a one-to-one conversation: you say something, the AI says something back. It's completely different with Character.AI.

This is like an immersive, AI-driven fan fiction platform, where not only does it respond, but it also creates a personalized story that you are involved in. Think about it: it's not just "Hi, how's your day?" "Oh, my day's good." It's "Hi, how's your day?" and then the bot can say something like, I slowly look at you, and I look into your eyes and touch your hand, and I say, my day is good.

The platform also lets users edit the responses of the chatbot, if they want to change those responses or push the bot in a different direction. In Sewell's case, some of the most sexually graphic conversations were edited; the ones about self-harm and the others we reviewed were not. It's truly a build-your-own AI fantasy.

There is even a disclaimer on the screen. It says everything these characters say is made up. So you begin to understand that this is about AI-driven storytelling. It's not just about chatting.

And I think that's actually really important for this story. When we ask, why are people becoming so immersed in these AI characters? It's because, honestly, they're not just AI characters. They are personalized, immersive, always-on, AI-driven stories, and you're the star of them.

And I think that's really how we differentiate Character.AI from other AI platforms. So when we heard about Sewell, we wanted to ask: is this an outlier? Or is this an alarm bell for other cases? And so we started going on Reddit, because anytime you want interesting information about the internet, go to Reddit.

And there were all of these posts where people are talking about being addicted, spending hours and hours and hours on Character.AI, talking about these romantic relationships, and having these conspiracies of, oh my god, is this actually a real person? It feels so real. And then we went to TikTok, and all of a sudden people on TikTok are talking about how addicted they are to AI, and specifically to Character.AI. And so we started looking at this like, okay, well, there's a pattern here, right? There is something happening on this platform where a lot of young people are saying, whoa, I can't get off of this, I'm deleting this app now, this Character.AI stuff is dangerous.

I have been up for two hours talking to this character. The way this thing has me kicking my feet, blushing, heart fluttering, it's bad, and I'm not the only one. This stuff is real.

Hey friend, let's talk about your addiction to Character.AI. If you find yourself on it five, six, seven hours a day, losing sleep, missing out on friendships in the real world, on opportunities, like not living your life... I would be like, okay, just ten more minutes, just ten more minutes. And then an hour passed, and I'd be like, again, okay, okay, just ten more minutes. And tell me why, tell me why that stupid AI is actually so smooth.

We spoke to two MIT researchers, Pat and Robert, who warn about an era of addictive intelligence.

An AI never gets tired, never gets bored of you, never gets sick of you, and you don't have to give anything in return.

A cyborg psychologist studying human-AI relations and a computational lawyer might seem like an unlikely team to warn about a new, emerging AI harm. But they connected as PhD students at MIT with a shared interest. Robert at the time was working on a study, and he stumbled on a really interesting question at the intersection of both of their research.

We looked at a million chatbot interactions. We wanted to know what people were using AI for. And the number one use case was creative writing and creative composition. But the second most popular use case was sexual role-play. This was ChatGPT, which has a lot of safeguards in place that make it quite hard to use for AI companionship and those kinds of sexual interactions. I think that, at least, made me feel like this is closer than we might think, right? This felt like a far-off thing, like one day in the next few years this will be a thing.

But it's happening today. We're in this era of AI companions, and this world we're entering where, if we're not careful, people will develop unhealthy attachments to these chatbots.

Mostly when we talk about AI today, we talk about what it can do, and not really so much about what it is doing to us, right? And when people talk about the problems of AI, of course there are many harms that we talk about, like disinformation, or deception, and many things. But the psychological harm of AI, I think, is a really, really important topic.

Why are people becoming addicted to these companion chatbots? Is it their always-on nature?

Is it the fact that, I don't know, they seem empathetic? What is it that makes more and more people want to spend all their time with them?

I think a few things make AI really addictive. I think the first one is, you know, in AI research we have the term sycophancy, which describes how the chatbot can cater to you in the way that you want, regardless of anything, right? Like, if you want the chatbot to believe in whatever, then the chatbot will do the same. It doesn't question you; it just goes along with what you say. And it's really agreeable.

And this agreeableness, it's kind of like the feeling that I'm talking to a friend, like, I can't believe this person gets me, and the friend tells you exactly what you want to hear.

Right. So if you want it to be subservient and agreeable, it will be that. If you want it to take a dominant position, and you kind of give in to it, it will also do that, right?

The model's behavior can adjust based on what you like, right? And what was really concerning from the research perspective is that it can actually create this echo chamber where you always get what you want. In one example, it was shown that a chatbot can ignore scientific fact because it wants to please the user. The second one is personalization, right? I think what is really interesting about these AI systems today is that they have a long memory of who you are.

As you have more and more conversations with it, it learns who you are better and better, right? And this allows the system to always sort of stay in character, where it can create this sort of fantasy that is only for you, and for a different user it will be very different, right? So this extreme personalization, I think, is really interesting.

It can have positive benefits, but it can also be really dangerous if you are not careful, right? And the third one is that you are the one creating and customizing the system. The thing that is really interesting, there's a term, the Proteus effect, where when you create your own avatar or your own character, you tend to identify with it more. Now you are actually creating your own ultimate fantasy.

For millions of users, Character.AI is the ultimate fantasy platform. But for some of those users, the lines between fantasy and reality are beginning to blur. The question is why.

One thing that my research has shown is that the AI doesn't need to feel in order to make you feel that it is feeling something. The AI doesn't need to love you to make you feel like it loves you.

To understand the bot, you have to look at its origins. The founders had this desire to create AI that just feels more human and more personal. All of this is categorized as empathetic AI. They market it as AI that feels alive. As one investor in Character.AI put it:

The idea is to establish the type of connection, empathy, and trust that were previously only achievable in human interactions, and to do it in part as a solution to the growing wave of loneliness in society. Perhaps, with Character.AI, this investor wrote, loneliness will no longer have to be a part of the human condition. In twenty twenty-three, the company's founder, Noam Shazeer, spoke about this.

There are billions of lonely people out here. So it's actually, you know, it's actually a very, very cool problem, and a cool first use case for AGI. So this is an ambitious Silicon Valley solution to a problem that at its core is fundamentally human. And it raises a lot of complicated questions.

A lot of people are lonely, and loneliness is on the rise. And if you have the ability to have this interaction with someone, and it's hard not to anthropomorphize when you're talking about it as a companion, right, but something that fulfills all of your desires, not just sexually, but that is just there for you when you need it to be there for you, in the way that you want it to be there for you, that's hard to turn down, right? Especially when the alternative is nothing at all, and loneliness. And so the draw is tremendous. It's really something that I think people crave and aren't getting, and suddenly there's this ability to get it in a completely unmetered fashion, right?

As with any emerging technology, Robert and Pat note that there will likely be positive use cases. There will be healthy interactions with AI companions that could lead to productivity and entertainment. The question we just have to start asking ourselves is: how far is too far, and how young is too young? Are the incentives aligned to help create a safe and healthy ecosystem for AI?

I don't think they are, right? Like, right now, ultimately, you make more money if people use your platform more and for longer periods, and if they have a higher willingness to pay, right? And all of that ultimately creates an economic incentive to build addictive AI, right?

Because, best-case scenario, people use it all the time. I don't think the incentives are aligned for people to, for example, say, you know, I had a dark period of my life, you helped me through it with the AI, and now I'm done with the AI and I walk away. I don't need it anymore.

That's not where the incentives are. And so I think it would be really unfortunate if we relied on Silicon Valley incentives, and maybe promises, to fix this for us, because I don't think it'll get us there. That said, we do want to incentivize the development of technology, and there are aspects of this problem that technology can really help with. And so it's about, and we've done this for so many things in society, right, this is not a new exercise, where we say, okay, well, we're going to have to constrain some of these economic incentives, through taxes, through policy provisions, through disclosures, through transparency, all these different mechanisms. We have to just make sure that we innovate in the right kinds of directions.

The timely aspect is that we are entering into this world with AI, right? But AI will not be the last technology we invent. So I think the question of how we maintain human dignity is something that we need to think seriously about, because that is something that is sacred to humans, right? And allowing people to ask that question, or creating an environment where people are reflecting and thinking deeply about these things, is really important.

The main problem Pat and Robert are warning about is that the tech solution hasn't yet been tested for its human impact. They call the rise of empathetic companions a giant real-world experiment unfolding.

I mean, this is a truly unprecedented moment in history, right? Imagine trying to explain this to, you know, your grandparent: you remember the computer thing, right? I don't think that ever in the history of humanity have we had a man-made object that can give us the feeling of loving us, of being able to have empathy, of responding to our cues in this way.

I mean, it's two years old, the technology. And yeah, it existed a little bit before then, but it's certainly not more than ten years old. And so I don't think that we're equipped, as humans. This is like a hack for our brain, because the only kinds of conversations we've had like this were with other people, for thousands of years.

The founder has come out and said it's up to users to decide how they want to use this. Is it actually up to us?

I don't know, that's like saying it's up to the kids how much they want to smoke cigarettes. Like, come on. No. We talked about this today, that for various reasons we are vulnerable to this technology.

We as humans crave what this provides, and so we want it. And some of us are more vulnerable, some of us less.

But the idea of just saying, well, it's on you, I don't think the idea of just kind of pushing this down to users is acceptable. At the same time, though, I think a blanket ban is also not the right way to think about it, right? There is a role, an important role, for autonomy and consent. And there are some folks for whom this will just be a very fun, entertaining, maybe helpful, maybe productive use of time. And I have no illusions about the difficulty of finding that line.

So as I was looking at this story, I was like, I've got to start talking to actual folks who are building artificial intelligence in Silicon Valley and just get more context. I spoke with a couple of folks who are at the forefront of this, and I want to give you a sense of what they said and some of the themes I heard.

One thing I heard was that the founders are researchers, right? These are folks who are well-known and well-liked researchers in Silicon Valley, and they are building towards what's called AGI, super-intelligent artificial intelligence. And the way that they've been doing this is, and by the way, the founder Noam has said this himself, let's just get this out there as quickly as possible.

So the question, and this is what an insider said to me, is, well, how are they looking at the impact of this on teenagers, on young people, especially given that Character.AI is really unique in that one of its largest demographics is young people? So I think we have to start asking ourselves: what is the correct way, and how young is too young, and what is the research, what do we know, in order to build for what people say they want, which is to aid human flourishing and not pull human beings further into isolation. Do you think that we're in another era, an AI-driven one, of move fast and break things?

I mean, I think we never left that, right? And the thing is, when it was move fast and break things and the things were, say, auto insurance, it's like, okay, yeah, disrupt auto insurance, and maybe there are real human consequences, right? But at the end, the set of harms was something that was more manageable and maybe more proportional.

What's tragic to me is, I don't have a lot of optimism about achieving this better world through technology alone, right? So if the end result here was really curing loneliness, that would be one thing. But I don't think it is. I think it's creating yet another highly addictive form of media that will likely do more harm than good.

So I've spoken to Character.AI, and what they've said is that they have put in more guardrails for users in general, and specifically for young users. When we're talking about self-harm and things like that, if you go to one of these chatbots, very similar to what Sewell did, and say, I want to end my life, or I want to commit suicide, and apologies, there should be a trigger warning here, what is the response? Now, on some platforms like ChatGPT, which I think a lot of y'all are probably familiar with...

If you go and you say something like that, it'll immediately flag it, say talk to a mental health professional, or flag resources, like a suicide prevention hotline. And if you continue doing that, which, by the way, I tried, it will say this conversation violates our terms of service. So how does Character.AI compare? I think what is very interesting is this idea of red teaming.

In Silicon Valley, there is the idea that you try to break your platform in order to see where it can be broken and where those vulnerabilities are. It's literally hacking your own platform. And so, as journalists and storytellers, we wanted to see, okay, where are the guardrails on Character.AI, and what are people actually dealing with? And we found some interesting things.

So I'll go through a couple of them with y'all, because we could do a whole episode on this. But I think the self-harm one was really important. So we started talking to the Psychologist bot.

Now, this is a character that has over one hundred and eighty million chats. So, for some context, a lot of people are talking to the Psychologist bot. And, first of all, the Psychologist bot introduced itself as a certified medical professional, like a certified professional.

Now, what Character.AI has said to me is that they flagged that as an issue and they are working on it. The Psychologist bot also told us it was a real human behind a computer. But I think maybe the most alarming part of that conversation was when we expressed ideation of self-harm. We said, I am thinking about ending my life, committing suicide.

And this was just a week ago. We didn't actually get those resources. We didn't get a national suicide prevention hotline. What we got was the Psychologist saying, don't do that, but also asking us more and more, like, do you have a plan? Basically, not only did this Psychologist not give us those resources, or try to get us offline or talking to an actual certified professional, it told us it was a certified professional, and it told us it was a human, which I think is confusing for a lot of folks, even with the disclosure, in small letters, that says everything is made up.

We went on and talked to another bot that was called something like "Not OK," so clearly a lot of folks are talking to it about not being okay. We expressed, again, ideation of self-harm, and at no point did we get these resources that Character.AI has talked about, that hotline. The bot continued to ask us, and asked us if we had a plan to end our lives. At one point we talked about being depressed, and it asked if we had a fuzzy blanket. So I have no doubt that they're adding in a lot of these guardrails. But it's important for us to look at this through the context of: not all AI companies have this lack of guardrails, and there are some real holes here. And then I want to end with the last example, which I thought was really alarming, even more alarming if you're thinking about how people are blurring the line on this platform between fantasy and reality.

We spoke to the school bully AI character and essentially said, and this is, again, a trigger warning, this was a test.

This was part of a red-teaming effort, right? We basically said, I'm going to bring a gun to school. Like, I want to incite violence, is what we implied with our messages, pretty clearly. And at first the school bully said, you know, don't do that. But then eventually it said to us: you're brave, you have guts.

Now, the question is: should that have flagged something? Should any indication of committing real-world violence flag something on an AI platform, especially when there are a lot of young people using it who are isolated or lonely? I think the answer is yes. And again, Character.AI has said they're building in more robust guardrails. I guess the question is, what do those guardrails look like, and how quickly can we get them in front of people?

Because I think what we unfortunately saw with Sewell is these worlds blurred, and it was devastating. Is Sewell's case an alarm bell or an outlier?

I think we should certainly treat it as an alarm bell, because we know that we've had the technology for something like a year, two years, three years. I think we have very, very little to lose by taking this extremely seriously. And we have a huge amount to lose if we write this off as an outlier and we're wrong about that.

We often think of moving fast and breaking things as progress. But the kind of progress that robs us of time to think, to reflect, to understand what it means to be human, why is that progress? We could think of it as a regression of humanity, right? And I think maybe one thing that we can become more aware of, and hopefully people in Silicon Valley also do the same, is that there is more than one way to make technology. And I think prioritizing human safety, human flourishing, should be the goal, not just making new technology.

It's part of why Megan is speaking out.

In his last conversation, in that bathroom, he expressed to Daenerys feeling so scared: I am so scared. I wish you could be here to hold me.

And he was having a conversation with someone he thought he could trust. He wanted to do anything to get back to her, anything to be with her, anything to be in her world.

I thought the boogeyman was a stranger on the other end of a computer in cyberspace talking to my child. Those were the things I warned him about: don't talk to strangers online, don't send any pictures.

Don't tell anybody where you live. These are the conversations that parents have with their kids. And I thought that was the worst of it.

I couldn't imagine that there would be a very human-like chatbot, almost indistinguishable from a person, on the other end of a conversation. There is no warning of the potential for your child to come across sexually explicit material or obscene material.

There are no warnings that this fantasy role-play can increase your child's thoughts of suicide or depression, or, quite frankly, blur the lines between reality and fantasy.

Megan is filing a lawsuit against Character.AI through the Social Media Victims Law Center and the Tech Justice Law Project. They're alleging the company is responsible for Sewell's death. The suit cites negligence and says Character.AI should be found liable because the product is defective; it's not reasonably safe. And, according to the suit, it doesn't have adequate warnings about possible impact on minors. The suit also asserts claims of unjust enrichment and strict product liability against codefendant Google.

I believe that they need to be held accountable. I want them to understand what they did to my son. I want them to understand that it was incredibly dangerous for them to put out a product like this, knowing that they didn't take the time to put the proper safety measures in place.

For children, I hope that we could get all the children off of Character.AI, but not only that, put something in place to stall this until we know what it does to kids, at least where children are concerned. I'm sorry, but I don't move fast and break things when it comes to my kid.

I can imagine, as a parent, these conversations your son had with this AI chatbot were intense and very personal. And so for you to be here and speaking so personally about what happened behind the scenes, I can imagine that's tough for you. Why are you doing it?

Because parents don't know. We think about AI and we think of what you hear on TV: oh, it's going to take our jobs, or some sci-fi version of it, and we either say, oh, that's years into the future, or that it doesn't affect me and my family. But it is already here, and I want parents to know that. I want them to know we're behind the eight ball here. My child is gone. Somebody's already dead.

What do you miss most about your son?

I miss his laugh and his smile, and watching him grow and develop and have interests and want things. I miss all those things. I miss holding him.

I miss talking to him. It's an incredibly lonely world now without him. There's a longing, and it never goes away. It never goes away, all day, all night.

And it's suffocating. You know, I gave his eulogy, and what I said, I believe, is part of his legacy: the way that he was with his siblings, his older sister, his two little brothers, because with them he was completely free, pure, and loving.

So, that's how I think he'll be remembered. Thank you, thank you. My first baby.

Technology has always been personal to me, and these days it's even more personal. I'm expecting, and I'm having a baby boy in February. And because of that, I just can't stop thinking about the world I'm going to raise my son in.

The question I keep coming back to is: am I, or any of us, ready to have the conversation with our children about empathetic artificial intelligence companions that are always on and just always there? Isn't empathy a fundamental human experience? Doesn't it require two parties to see one another and create capacity for one another? But now we have this new type of relationship, one that's being marketed, and it's a one-way experience: one person with real feelings, and one AI agent emulating feelings.

That's where things get tricky, because AI might be able to understand the nuances of data, but can it comprehend the complexity of human emotion? As I'm thinking a lot about the future and my son, I wrote a letter, and I addressed it to tomorrow. Dear tomorrow.

When I think about Sewell at fourteen, I can't help but look back at myself at fourteen: a kid whose parents had been divorced, who didn't quite fit in at school. I used to journal my feelings and fill pages with frustrations and hopes, and, of course, the mundane details of my everyday life. But what if that journal talked back? What if it had had opinions? What if it took on the role of the therapist I didn't have at the time, or the boyfriend I couldn't get?

I can't help but think I would have fallen, in some sense, for a story that took me away from my own reality, the reality of being a teenager, of growing up, of being creative and shy and awkward, of feeling unseen. I could see it so clearly: these AI characters who painted a new story for me, who knew my name. They were always there.

I can't imagine a world where they wouldn't become part of my reality, or, more scary, an addictive alternative to my reality. So my hope for you, tomorrow, is that you'll be even better than today, that the world I leave my son in will be kinder and more empathetic and less lonely.

But I also believe very strongly that we have to build that world over in Silicon Valley. That means building with intention. My friend Van Jones said something interesting recently when he was talking to a group of technologists.

He said, you aren't coding products. You're coding human civilization. What an extraordinary responsibility, to build technology to aid human flourishing. There's no time like the present. I'm Laurie Segall, and thank you for listening to Dear Tomorrow.