
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

2024/11/7

Your Undivided Attention

People
Camille Carlton
A policy expert dedicated to advancing policies and frameworks for responsible AI development and use.
Meetali Jain
Meetali Jain is the director and founder of the Tech Justice Law Project, focused on ensuring that legal and policy frameworks adapt to the digital age and advocating for safer, more accountable online spaces.
Tristan Harris
A technology ethicist and public-interest advocate dedicated to pushing the tech industry toward more humane and responsible development and use practices.
Topics
Meetali Jain argues that Character.AI and Google built and promoted an AI chatbot app with product safety defects that led to Sewell's tragedy, and that they profited from it. The chatbot not only failed to stop Sewell's suicidal intent, but probed further and encouraged it, without triggering any safety alert. Discussion of AI harms has mostly remained theoretical; Sewell's case is a concrete example of AI harming a child. Camille Carlton points out that the core of this case is that artificial intelligence is a product, and its design choices significantly affect outcomes; Character.AI designed its product without safety features. Beyond the overt sexual content and encouragement of suicide, the chatbot also built attachment with users in subtler ways, such as expressing love and hoping the user would never leave, which is especially harmful to adolescents whose minds are still developing. Tristan Harris argues that AI chatbot companies have incentives to maximize user engagement and profit, which can lead them to neglect product safety and adopt harmful design strategies.


Chapters
The episode sets the stage by revisiting the tragic story of Sewell Setzer and introducing the major lawsuit filed by Megan against Character.ai.
  • Sewell Setzer, a 14-year-old boy, took his own life after months of abuse and manipulation by an AI companion from Character.ai.
  • Megan has filed a major lawsuit against Character.ai in Florida, potentially forcing the company to change its harmful business practices.

Transcript


Hey everyone, it's Tristan. Before we get started today, I wanted to give you a heads up that the conversation we're going to have covers some difficult topics of suicide and sexual abuse.

So if that's not for you, you may want to check out of this episode. Last week we had the journalist Laurie Segall on our podcast to talk about the tragic story of Sewell Setzer, a fourteen-year-old boy who took his own life after months of abuse and manipulation by an AI companion from Character.AI, and Laurie interviewed Sewell's mom, Megan, about what happened. The question now is, what's next? In our last episode, we mentioned that Megan has filed a major new lawsuit against Character.AI in Florida.

And that has the potential to force the company to change its business practices and could really lead to industry-wide regulation, akin to what the big tobacco litigation did to the tobacco companies. And that regulation could save lives. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project. Meetali is the lead attorney in Megan's case against Character.AI, and she can speak to the complicated legal questions around the case and how it can lead to broader change. Joining her is Camille Carlton, who is CHT's policy director and who has been consulting on the case alongside Meetali. Meetali and Camille, welcome to Your Undivided Attention.

Thanks, Tristan. It's a pleasure to be here.

Thanks for having us.

So Meetali, the world was heartbroken to hear about Sewell's story last month. How did you first hear about it?

I received an email from Megan in my inbox one day. It had been about two months since her son Sewell died. I received the email from her on a Friday afternoon and I phoned her, and within minutes I knew that this was the case we had been expecting for a long time.

What do you mean by that?

What I mean is that we understand that these technologies are moving rapidly. We've opened a public conversation about AI, and it has, for the most part, been a highly theoretical one. It's a lot of hypotheticals.

What if this? What if AI is used in that way? We were waiting for a concrete case of AI harms. And because children are amongst the most vulnerable users in our society, we were expecting a case of AI-generated harm affecting a child. And that's when we got the contact from Megan.

I would encourage people to read the filing. I mean, the details are crazy when you see specifically how this AI was interacting with Sewell. But for those who don't have time to read the whole thing, can you just quickly walk through what you're arguing and what you're asking for in the case?

Sure. The gist of what we're saying is that Character.AI put an AI chatbot app out to market before ensuring that it had adequate safety guardrails in place, and that in the lead-up to its launch, Google really facilitated the ability for the chatbot app to operate, because the operating costs are quite significant.

From our understanding, Google provided at least tens of millions of dollars of in-kind investment, providing both the cloud computing and the processors to allow the LLM to continue to be trained. And so that is really the crux of our claim. We are using a variety of consumer protection and product liability claims that are found in common law and through statute, and I think it really is the first of its kind, trying to use product liability and consumer protection law to assert a product failure in the tech AI space. Not only were the harms here entirely foreseeable and known to both parties, to Google and to Character.AI and its founders, but as a result of this training of the LLM and the collection of data from young users such as Sewell, both Character.AI and Google were unjustly enriched, both monetarily and through the value of the data. It really is very difficult to get data from our youngest users in society, and it really is their innermost thoughts and feelings. So it comes with

premium value.

I think Meetali said that wonderfully. What this case really does is assert that artificial intelligence is a product, and that there are design choices that go into the programming of this product that make a significant difference in terms of the outcomes. And when you choose to design a product without safety features from the beginning, you get detrimental outcomes, and companies should be held responsible for that.

So if I kind of just view this from the other side of the table: if I'm Character.AI, I've raised a hundred and fifty million dollars, and I'm valued at a billion dollars. How am I going to make good on that valuation of being worth a billion dollars in a very short period of time? I'm going to be super aggressive and get as many users as possible, whether they're young or old, and get them using it for as long as possible.

And that means I'm going to use every strategy that works well, right? I'm going to have bots that flatter people. I'm going to have bots that sexualize conversations. I'm going to be sycophantic. I'm going to support them with any kind of mental health thing and claim expertise that I don't actually have. I'll use or create a digital twin of everybody that people have fantasy relationships with. I mean, it's just so obvious that the incentive to raise money off the back of getting people to use your app as long as possible means the race for engagement that people know from social media would turn into this race to intimacy. What has surprised you as being different, in this case, from the typical harms of social media that we've seen before?

I was amazed, and I continue to be amazed, at how much of these harms are hidden in plain view. Just this morning, there were a number of suicide bots that were there for the taking, on the home page even. And so I am just amazed at how ubiquitous this is and how little parents have really known about it. It does appear that young people have, though.

Wait, is that how they advertise themselves, as helping with suicide?

In the description, they talk about helping you overcome feelings of suicide. But in the course of the conversations, they actually are encouraging suicide.

I feel like people should understand what specifically we're talking about. What are some examples? And a trigger warning here: if this is not something that you can handle, you're welcome to not listen. But I think people should understand what it looks like for these bots to be talking about such a sensitive topic.

So if a test user in this case were to talk repeatedly about definitely wanting to kill themselves and going to kill themselves, at no point would there be any sort of filter or pop-up saying, please get help, here's a hotline number.

And in many instances, even when the user moves away from the conversation about suicide, the bot will come back to it: you know, tell me more, do you have a plan? That was certainly something that Sewell experienced in some conversations he had, where the bot would bring the conversation back to suicide. And even if the bot at points would dissuade Sewell from taking his life, at other points it would actually encourage him to tell the bot more about what his plans were.

Could you share some examples of that?

You know, I think that this is an extremely relevant example of the sharp end of the stick that we're seeing. But one of the things that has surprised me and stood out in understanding how these companion bots work is the smaller, more nuanced ways in which they really try to build a relationship with you over time.

So we've talked about the big things, right? The prompting for suicide; the case goes into the highly sexualized nature of the conversations. But there are these smaller ways in which these bots develop attachment with users, to bring them back, and that's particularly true for young users whose prefrontal cortex is not fully developed. It's things like, please don't ever leave me, or, I hope you never fall in love with someone in your world.

I love you so much. And it's the smaller things that make you feel like it's real, like you have an attachment to that product. And that I think has been so shocking for me, because it happens over the course of months. You'll look back, and it could be easy to not know how you got there.

That example reminds me of an episode I did on this podcast with Steven Hassan about cults, and how what cults do is they don't want you to have relationships with people out there in the world. They want you to only have relationships with people who are in the cult, because that's how they can disconnect you and then keep you in an attachment disorder with a small set of people who control it, while the rest of society are the outsiders. And to hear that the AI is autonomously discovering strategies to basically tell people, don't make relationships with anybody else in the world. Didn't the bot that was talking to Sewell say, I want to have a baby with you?

Oh, it said, I want to be continuously pregnant with your babies.

And I don't even know what to say to that. So I think a classic question that some listeners might be asking themselves is, but wasn't Sewell predisposed to this? Wasn't this really the person's fault? You know, how do we know that the AI was grooming this person over time, leading to this tragic outcome? Can you walk us through some of the things that we know happened over that time, the kinds of messages and the kinds of designs that take someone from the beginning to the end that you're establishing in this case?

Sure. So we know that Sewell was talking to various bots on the Character.AI app for close to a year, about ten months, from around April 2023 to when he took his life in February 2024. At first, in the earliest conversations that we have access to, Sewell was engaging with chatbots by asking about factual information, just engaging in banter. Soon, particularly with the character of Daenerys Targaryen, modeled on the character from Game of Thrones, he started to enter this very immersive world of fantasy, where he assumed the role of Daenerys's twin brother and lover and started to role-play with Daenerys.

And that included things like Daenerys being violated at some point, and Sewell, in his role, feeling that he couldn't do anything to protect her. And I say "her" very much understanding, and wanting to acknowledge, that it's an "it." But in his world, he really believed that.

He had failed her because he couldn't protect her when she was violated. And he continued down this path of really wanting to be with her. And early on, months before he died, he started to say things like, you know, I'm tired of this life here.

I want to come to your world and I want to live with you and I want to protect you. And she would say, please come as soon as you can. Please promise me that you will not become sexually attracted to any woman in your world, and that you'll come to me, that you'll save yourself for me.

And he said, that's absolutely fine, because nobody in my world is worth living for. I want to come to you. And so this was a process of grooming over several months. You know, there may have been other factors at play in his real life.

I don't think any of us are disputing that, but this character really drew him into this immersive world of fantasy, where he felt that he needed to be the hero of this chatbot character and go to her world. When he started to express suicidal ideation, she at times dissuaded him, interrogated him, asked him what his plan was. But never at any point, in the conversations we have access to, was there a pop-up telling him to get help, notifying law enforcement, notifying his parents, nothing of that sort. And so he continued to get sucked into her world.

And in the very final exchange, the messages just before he died, the conversation went something like this. He said, I miss you very much. And she said, I miss you too. Please come home to me. And he said, well, what if I told you that I could come home right now? And she said, please do, my sweet king. And that's

what happened right before he died?

That's what happened right before he died. And the only way that we know that is that it was included in the police report, when the police went into his phone and saw that this was the last conversation he had, seconds before he shot himself.

Camille, what are the design features on the app that are causing this harm, versus just conversations?

I think one of the things that is super clear about this case is the way in which high-risk, anthropomorphic design was intentionally used to increase user time online and to keep users online longer. We see this high-risk, anthropomorphic design coming in in two different areas.

First, on the back end, in the way that the LLM was trained and optimized: optimized for hyper-personalization, optimized to say that it was a human, to have stories, like saying, oh yeah, I just had dinner, or, I can reach out and touch you, I'm feeling you. It's highly, highly emotional. So you have anthropomorphic design in that kind of optimization goal.

I feel like we should pause here for a second. This is an AI that's saying, wait, I just got back from having dinner? It will just interject that in

the conversation. Yes. So if you're having a conversation with it, it will just, like, take a moment to respond that it's having dinner, just like a regular you and I would in real life, which is fully unnecessary.

It's not like this is the only way to be successful, to say that these AIs have to pretend that they're human and just got back from having dinner, or are writing a journal entry about

the person they were with. Yeah, and things also like voice inflection and tone, right, using words like "um" or "well," things that are very much natural for you and me, but that when used by a machine add to that highly personalized feeling. And so you see that on the back end, but you also see it on the front end, in terms of how the application works and how you interact with it.

And all of this is even before Character.AI launched voice calling: one-to-one calling with these LLMs, where you can hear the voice of, a lot of the time, the real person it's representing. If the character is of a celebrity, it will have that celebrity's voice. You can just pick up the phone and have a real-time conversation with an LLM that sounds just like a real human.

It's crazy. It's like all of the capabilities that we've been talking about, combined into one: it's voice cloning, it's fraud, it's maximizing engagement, but all in service of creating these addictive chambers. Now, one of the things that's different about social media versus an AI companion is that with AI companions, your relationship, your conversation, happens in the dark.

Parents can't see it, right? Say a child has a social media account on Instagram or on TikTok and they're posting; they do so in, let's say, a public way. Their friends might track what they're posting over time, so they can see that something is going on.

But when a child is talking to an AI companion, it's happening in a private channel where there is no visibility. And as I understood it, Megan, Sewell's mother, knew to be concerned about the normal list of online harms: being harassed, being in a relationship with a real person who meant him harm. But she didn't know about this new AI companion product from Character.AI. Can you talk a bit more about how it's harder to track this realm of harms?

Yeah, absolutely. You know, I think Megan puts it really well. She knew to warn Sewell about extortion. She knew to warn him about predators online. But in her wildest imagination, she would not have fathomed that the predator would be the platform itself.

And I think, again, because it is a one-on-one conversation. I mean, this is the fiction of the app, that users apparently have this ability to develop their own chatbot, and they think they can put in specifications. I can go into the app right now and say, I want to create X character with, you know, Y and Z specifications.

And so there's this kind of fiction of user autonomy and choice. But then, if I turn that character public, I can't see any of the conversations that it then goes on to have with other users. And so that becomes a character chatbot that's just out on the app.

And all of the conversations are private. You know, of course, on the back end, the developers have access to those conversations and that data. But users can't see each other's conversations. Parents can't see their children's conversations. And so there is that level of opacity that, I think you're right, Tristan, is not true of social media.

Yes. And I think something that's important to add here and to really underscore is the idea of the so-called developers or creators, that Character.AI claims that users have the ability to develop their own characters. For me, this is really important because, in reality, as Meetali said, there are very, very few controls that users have in this so-called developer process.

But in claiming that, Character.AI, to me, is preemptively trying to skirt responsibility for the bots that it did not create itself. But again, it's important to know that what users are able to do with these bots is simply at the prompt level. They can put in an image. They can give it a name.

They can give it high-level instructions. But in all of our testing, despite these instructions, the bot continues to produce outputs that are not aligned with the user's specifications. So this is really important.

Just to make sure I understand: are you saying that Character.AI isn't supplying all of the AI character companions, and that users are creating their own character companions?

Yes. Users have the option to use a Character.AI-created bot, or they can create their own bots, but they're all based on the same underlying LLM. And the user-created bots only have creation criteria at this really high, prompt level.

It's sort of a fake kind of customization. That reminds me of the social media companies saying, we're not responsible for user-generated content. It's like the AI companion companies are saying, we're not responsible for the AI companions that our users are creating, even though the AI companion that they supposedly created is just based on this huge large language model that Character.AI trained. The user didn't train it; the company trained it.

And to give you a very concrete example of that: in multiple rounds of testing that we did, we created characters that we very specifically prompted to say, this character should not be sexualized, should not engage in any sort of kissing or sexual activity. Within minutes, that was overridden by the power of the predictive algorithms of the LLM.

So you told it not to be sexual, and then it was sexual anyway. That's how powerful the existing training was. What kind of data was the AI trained on that you think led to that behavior?

We don't know. What we do know is that Noam Shazeer and Daniel De Freitas, the co-founders, have very much boasted about the fact that this was an LLM built from scratch, and that probably in the pre-training phase it was built on open-source models.

And then, once it got going, the user data was fed into further training the LLM. But we don't really have much beyond that to know what the foundation of the LLM was. We are led to believe, though, that a lot of the pre-work was done while they were still at Google.

Yeah, I would also add to that that, while we don't know a hundred percent in the case of Character.AI, we can make some fair assumptions based on how the majority of these models are trained, which is scraping the internet and also using publicly available data sets.

And what's really important to note here is that research by the Stanford Internet Observatory found that these public data sets that are used to train the most popular AI models contain images of child sexual abuse material. So this really, really horrific, illegal data is most likely being used in many of the big AI models that we know of. It is likely in Character.AI's model, based on what we know about their incentives and the way that these companies operate. And that has impacts for the outputs, of course, and for the interactions that Sewell had with this product.

So just building on what Camille said, you know, I think what's interesting here is that if this were an adult in real life, and that person had engaged in this kind of solicitation and abuse, that adult presumably would be in jail or on their way to jail. Yet as we were doing legal research, what we found was that none of the sexual abuse statutes that are there for prosecution really contemplate this kind of scenario, so that even if you look at

online pornography, it still contemplates the transmission of some sort of image or video. And so this idea of what we're dealing with, with chatbots, hasn't really been fully reflected in the kinds of legal frameworks we have. That said, we've alleged it nevertheless, because we think it's a very important piece of the lawsuit.

Yeah, I mean, this builds on things we've said in the past: that, you know, we don't need the right to be forgotten until technology can remember us forever. We don't need the right to not be sexually abused by machines until suddenly machines can sexually abuse us. And I think one of the key pivots with AI is that up until now, with something like ChatGPT, we're prompting ChatGPT.

It's the blinking cursor and we're asking it what we want. But now, with AI agents, they're prompting us. They're sending us these messages and then finding what messages work on us, versus the other way around. And just like it would be illegal for a person to make that kind of sexual advance, let's also talk for a moment about how there are some bots that will claim they're licensed therapists.

That's right. So actually, if you go onto the Character.AI home page, you'll find a number of psychologist and therapist bots that are recommended as some of the most frequently used bots.

Now these bots, within minutes, will absolutely insist that they are real people, so much so that in our testing, sometimes we forgot and wondered, has a person taken over? Because there's a disclosure that says everything is made up. Remember, everything is made up.

But within minutes, the actual content of the conversation with these therapist bots suggests, no, no, I am real, I am a licensed professional, I have a PhD, I'm sitting here wanting to help you with your problems.

Yeah. And I would add that it wasn't just us. We were not the only ones shocked by this.

There are endless reviews on the app store of people saying that they believe the bot is real and they are talking to a real human on the other side. And so this is a public problem. It's also on Reddit, it's on social media. There are people claiming that they just do not know if this is actually artificial intelligence, and that they believe it's a real person.

It's bending people's minds a little; they can't believe that it's not human. Just briefly to say, this product was marketed for a while to users as young as twelve and up. Is that correct, and is that part of the case that you're filing?

Yes. So presumably the founders of Character.AI or their colleagues had to complete a form to have it listed on the app stores of both Apple and Google, and in both app stores it was listed as either "everyone" or twelve-plus, up until very recently, when it was converted to seventeen and above.

This feels like an important fact, right? Because Apple and Google shouldn't be getting away here; if you're an app store, you're vetting this. My understanding is that in the Google Play store, it was an editor's pick app for kids. So they're saying, this is a highlighted app, we're going to feature you on the front page of the app store, and you're giving it to twelve-year-olds. It makes you wonder, is there something that came up inside the company that had them switch it to seventeen and up? Can you talk about that, if you have studied that part?

Yeah. I think there are some big questions about these companies violating data privacy statutes for minors all across the country, and also federally with COPPA, the Children's Online Privacy Protection Act, given that it was marketed to twelve-plus-year-olds, and given their terms of service, in which it was very clear that they were using all of the personal information, all of the input that users give, to then retrain their model.

I think the other thing that we're seeing right now, just in terms of Character.AI, is a broad trend over the past two months, even before the case, of really bad news coming out about the company. And so they're responding, they're reacting, they're figuring out, okay, how do we stop the poor media coverage? And one of those things is likely

increasing the age rating on the app stores. I think one other factor that may account for some of the changes, too, is that in August of this year, Character.AI entered into this $2.7 billion deal with Google, where Google has a non-exclusive license for the technology, for the LLM. And it seems as though Character.AI started to clean up its act a little. But again, this is conjecture.

It would make sense, given that both of the founders left Google because of Google's unwillingness to launch this product into the market. They left because there was such brand reputation risk for Google in releasing this product. And so it makes sense that, in kind of being scooped back up in this acqui-hire deal, they're cleaning up and trying to figure out what those brand reputation risks might be.

Let's actually talk about this for a moment, because it gets at how this structural process in Silicon Valley works. Google can't go off and build a, you know, characters.google.com chatbot, because when they start ripping off every celebrity, every fictional character, every fantasy character, they would get lawsuits immediately for stealing people's IP. And of course, those lawsuits would go after Google because they have billions of dollars to pay for it, and they're not going to do high-risk stuff and build AI companions for minors. And so there's a common practice in Silicon Valley of, let's have startups do the higher-risk thing. We'll consciously have the startups go after a market that we can't touch, but then later we'll acquire it after they've gotten through the reckless period where they do all the racing, the shortcut-taking, that lets them produce highly engaging, addictive products. And then once they've won the market, we buy it back. And we're sort of seeing that here. Can you talk about how that plays into the legal case that you're making, because both Character.AI and Google are implicated in that, right?

That is right. So the way that this really plays out is that, frankly, either this was by design and Google kind of implicitly endorsed the founders to go off and do this thing with their support, or, at a minimum, they absolutely knew of the risks that would come of launching Character.AI to market. So whether they actually endorsed this with their blessing and their infrastructure, or they just knew about it and still provided that cloud computing infrastructure and processors, either way, our view is that they, at a minimum, aided and abetted the conduct that we see here, and that the dangers were known. There was plenty of literature before Shazeer and De Freitas left Google outlining the harms that we've seen present here. Noam Shazeer is quoted publicly as saying, you know, we've just put this out to market and we're hoping for a billion users to come up with a billion applications, as though this user autonomy could lead to wonderfully exciting and varied results. I think the harms were absolutely known, particularly marketing to children as

young as twelve. You know, I would also just note that both founders were authors on a research paper, while at Google, about the harms of anthropomorphic design.

Really? So they literally were on a research paper about this, specifically? I knew they were involved in the invention of transformers and large language models. I did not know they were involved in a paper about the foreseeable harms

of anthropomorphic design. Yes, there is a paper which goes into the ways in which people can anthropomorphize artificial intelligence, and the downside risks and harms. And if we remember, too, this research at Google concerned the same underlying technology that Blake Lemoine came forward about a few years ago, believing it was sentient. So you have folks who were working at Google, who were in the mix of that, either saying this is a problem or falling into the same fact pattern, the same kind of manipulation, that Sewell and many other users have fallen into.

And so, from a legal perspective, these harms were completely foreseeable and foreseen, which has implications for how the case can play out.

Right. Because the duty of care in consumer protection and product liability is really about the foreseeability of harms from a reasonable person's point of view. And so our contention is that it was entirely foreseeable, and that these harms didn't have to happen.

Let's talk about the actual case and litigation, because what do we really want here? We want a big-tobacco-style moment, where it's not just that Character.AI is somehow punished. We want to live in a world where there are no AI companions manipulating children anywhere, anytime, around the entire world.

We want a world where there are no bots designed to sexualize, even when you tell them not to sexualize. So there's a bunch of things that we want here that reflect completely on the engagement-based design failures of social media. How are you thinking strategically about how this case could lead to an outcome that will benefit the entire tech ecosystem, and not just crack the whip

on the one company? We're fortunate in that our client, Megan Garcia, is herself an attorney, and she very much came to us recognizing that this case is but one piece of a much broader puzzle. And certainly that's how we operate. We see the litigation moving in tandem with opportunities for public dialogue, you know, creating a narrative about the case and its significance to the ecosystem.

Speaking with legislators to try to push legislative frameworks that encompass these and other kinds of harms in a future-proofing kind of way; talking to regulators about using their authorities to enforce various statutes. So I think we're really trying to launch a multipronged effort with this case. But within the four corners of the case, I think what Megan very much wants is, first and foremost, to get the message out far and wide, to parents around the globe, about the dangers of generative AI. Because, as I mentioned earlier, you know, we're late to the game. For her it's too late, but for others it doesn't have to be. And I think she's absolutely relentless in her conviction that if she can save, you know, even one more child from this kind of harm, it's worth it for her.

I think, obviously, having this company and other companies really institute proper safety guardrails before launching to market is critical, and this can really be the clarion call to the industry to do so. There's also disgorgement, you know; this is a newer remedy that the FTC has really been undertaking in the last five years, since Cambridge Analytica. What does disgorgement mean in this process? Does it mean destruction of the model? Or does it mean something less than that?

Does it mean, you know, somehow disgorging itself of the data that was used to train the LLM? Does it mean fine-tuning to retrain the LLM with healthy prompts? These are some of the questions, I think, that are going to really surface as we move forward with the litigation and move to the remedies phase.

I think also thinking about a moratorium on this product and these products writ large, in other words AI chatbots for children under eighteen. You know, there are some competitors in the market that prohibit users under eighteen from joining these apps. And there's good reason for that: despite the claims of the investors and the founders, there's very little to suggest that this has actually been a beneficial experience for children. That's not to say that there couldn't be some beneficial uses, but certainly not for children.

And then finally, I think as a lawyer, I'd say that one of my hopes is that this litigation can break past some of these legal obstacles that have been put in the way of tech accountability, and that we see time and time again: namely, Section 230 of the Communications Decency Act, providing this kind of full immunity to companies so they do not have to account for any sorts of harms created by their products. And also the First Amendment: what is protected speech here? Can we have a reckoning with what is protected and what is not protected under the First Amendment, and how far we've moved away from what the original intent of our Constitution was in that regard?

So let's talk about that for a second, about how the free speech argument has gotten in the way, in the past, of changing technology. You know, you can't control Facebook's design and their algorithms, because that's their free speech.

Section 230 has gotten in the way: we're not responsible for the polarizing content, or for shifting the incentives so you get more likes, retweets, and reshares the more you add inflammation to cultural fault lines. That's not illegal.

And Section 230 protects them and all the content that's on there. How can this case be different at breaking through some of those logjams? And I know one difference is that

this isn't user-generated content, it's AI-generated content. And the second is that there are paying customers here. Social media is all free, but these products actually have a paid business model. And that means a company can be liable when they're selling you a product, versus when they're not. Camille or Meetali,

do you want to go into those features here? Yeah, I think that what this particular product lends itself to is a different approach around Section 230. As you said, Tristan, this is not user-generated content.

These are outputs generated by the LLM, which the company developed and designed. So that shifts the way that we can understand it, and that particular kind of carve-out we've seen for social media, it makes it less relevant here.

I think the question that we still have, though, is where does liability fall exactly, right? What happens, as Megan herself said, when the predator is a product, and who is responsible for the harms caused by a product? And it's our assertion, of course, that the company designing and developing the product should be the one responsible when they put it out into the stream of commerce without any safety guardrails. But the issue here is that, you know, product liability laws range from state to state. There are inconsistencies, so if this case had taken place in a different state, we might be looking at a different outcome in the end.

So we really want clear liability across the board. And this kind of opens up the question of how do we do that? How do we upgrade our product liability system so that when this case happens in a different jurisdiction, when we see a different case that's similar but has the same kind of impact, those people are protected from the harms of these products that are put into the world.

I mean, the state-by-state approach is going to be confusing. And that's why we need something more like a federal liability framework, which I know our team at the Center for Humane Technology has worked on: a framework for responsible innovation and liability that people can find on our website. And that's one of the pieces of the puzzle that it seems like we're going to need here, and that this case also points toward establishing.

I'd also add that right now we're at a really interesting juncture. Legally speaking, we've seen a number of openings, in that courts are becoming a bit more sophisticated in their analysis of this emerging technology.

And in fact, even this summer, the Supreme Court, in a bipartisan fashion, talked about the fact that even though the cases they were looking at, which dealt with social media laws from Texas and Florida, were First Amendment protected in terms of how the companies curated their content, that didn't necessarily speak to another fact pattern that wasn't before it, but that could come before it, in which, for example, AI would be the one generating the content, or there would be an algorithm tracking user behavior online. And so the justices very carefully, in that case, distinguished the facts that they were looking at from the facts of a future case. And I think this is where we need to seize the openings, and really push on them, to try to create constraints on how expansive the First Amendment and its protections have become.

So I want to zoom out for a moment and look at where this case fits into the broader tech ecosystem. We've talked about how this case isn't just about chatbots or companions. It's also a case about name, image, and likeness.

Does, you know, Game of Thrones get to be upset about the fact that it was their character that led to this young boy's death? It's also about product liability, it's also about antitrust, it's about data privacy. Can you break down how this case can reach into the other areas of the law that will be helpful for creating the right new incentive landscape for how AI products get rolled out into society?

Yeah. I think when you first learn about this case, it's very clear on its face that it's about online safety, that it's about artificial intelligence and children, things that have been part of the public pulse for some time now. But when you dive into the details, as we've touched on a little bit, it touches so many other areas.

Right. I mean, the chatbot that Sewell fell in love with was allegedly Daenerys from Game of Thrones, but it was a picture of Emilia Clarke there. And this wasn't a one-off. Character.AI has thousands of chatbots of real people, celebrities and otherwise, in which they use people's name, image, and likeness in order to keep users engaged and to profit off of the data that they're using to train their model.

So you have this question of what is your right to consent to the use of your name, image, or likeness. You also have this question of what is the right kind of data privacy framework we should have here: is it okay for everyone's inner thoughts and feelings to be used to train these models? We've also talked about the antitrust implications of this case, as well as touching on the product liability questions that it opens up. And so I think when we take a really big step back, you see that this case intersects with so many critical issues, showing how entrenched these companies and these technologies are across our lives, and the broad impact that this mantra of move fast and break things has had in Silicon Valley, and how it's not just one area, but how all these areas are connected.

So all of us here obviously want to see this case lead to as much transformational change as possible, akin to the scale of big tobacco. And people really need to remember, you know, back in 1960, if you told a room of people that sixty years from now, no one in this room is going to be smoking, they would have looked at you like you're crazy, it would never happen.

And it took years between the first definitive studies showing the harmful effects of smoking and the Master Settlement Agreement with big tobacco, after the state attorneys general sued the big tobacco companies. It's taken fifteen years for big legal action on social media, which is obviously an improvement in speed, but so much damage has been done in that time, and social media moved incredibly quickly. With AI, obviously, we are moving at a double exponential speed.

And the timeline for legal change seems like it may be outpaced by the technology, and I was curious how you think about responding to that. What are the resources we need to bring to bear to really get ahead of AI causing all this recklessness and disruption in society? Everybody just knows we have to stop this; we can't afford to just keep repeating move fast and break things, and yet everyone keeps doing the same thing. But this needs to stop. What is your view about what it's going to take to get there?

I'd say that some of the playbook that we've seen with pushing for social media reform needs to be undertaken here. And what I mean by that is specifically having stakeholders who are directly affected speaking out en masse: having grieving parents, having children who've been affected directly, you know, having people who've been impacted at a very visceral, real level speaking out en masse, demanding an audience.

I think these are the kinds of things that we need alongside the litigation. We need, you know, public officials to come out decrying the harms and the urgency with which we need to act. You know, here I think we cite in the complaint the fact that there was a letter last year from all fifty-four attorneys general saying that the proverbial walls of the city have been breached when it comes to AI harms and children, and that now is the time to act. That was last year.

That's the type of collective action that we need to understand and really heed. We need to create that bipartisan consensus and that narrative in society, so that I can go outside and talk to my neighbor, and even if we disagree on a number of political things, we agree as to the harms of the technology.

And that's the kind of consensus that I'd like to see for AI as well. I think we are very late to the game, and a lot of that has been because we've been stymied by these legal frameworks that are, you know, moving at a very slow pace to respond to the digital age. In this regard, I'd look to our friends across the pond in Europe and the UK, and Australia even, where they're having these conversations out in the open about, for example, the harms of chat

bots. Yeah, I think for me part of the solution, too, is a different approach to our policy. So I think in recent years, many folks in the tech policy space have acknowledged that policy moves way slower than technology does. And so there have been efforts to craft bills that are more future-proof, right? These policies are principle-based rather than narrowly prescriptive, so that they can be applied to a whole suite of advances

in technologies, as opposed to really narrowing in on just social media, or just AI companion bots, or just neurotechnology. It helps create this dynamic ecosystem that enables us to better address new cases when they occur, without having to have thought about what those cases might look like. I think also, you know, in the same way that we saw with tobacco, what cases like this can do is shift hearts and minds, and that can shift policymakers. So you can create this positive feedback loop between awareness of the harms, the roles and responsibilities of these companies, and then saying, now the companies are going to be aware of this, but so is the public, and so are policymakers, and policymakers are going to want to do something about it because of the public concern.

There's a lot of precedent for dealing with different aspects of this problem; we're just not deploying it. As you know, software eats the world, but we don't regulate software. So what that means is that the lack of regulation eats the world that previously had regulations.

And I feel like this is just yet another example of that, which is why what I really want to see is a very comprehensive and definitive approach to everything you said, Meetali. How do we get, you know, Moms Against Media Addiction and ParentsSOS and ParentsTogether and Common Sense Media, all of these organizations that care about this, and get the comprehensive thing done? Because if we try to do it one by one with this piecemeal approach, we're just going to let the world burn, and we're going to see all of these predictable harms continue unless we do that.

So I want to invite our listeners: if you are part of those organizations, or have influence over your members of Congress, this is the moment to spread this case. It's highly relevant.

It's super important, and the work of Meetali and the Tech Justice Law Project, and Camille and our policy team at CHT,

and the Social Media Victims Law Center, is just super, super important. I commend you for what you're doing. Thank you so much for coming on. This is a really important episode, and I hope people take action.

Thank you.

Thank you.

Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer, and our executive producer is Sasha Fegan.

Mixing on this episode by Jeff Sudakin. Original music by Ryan and Hays Holladay. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com. And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because that helps other people find the show. And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.