What's our relationship to AI? It's complicated | AC Coppens, Kasley Killam and Apolinário Passos

2024/12/21
TED Talks Daily

People
AC Coppens
Apolinário Passos
Kasley Killam
Topics
Kasley Killam: Many people turn to AI for companionship to meet genuine emotional needs, which prompts deeper reflection on the relationship between AI and humans. AI can offer comfort, but whether its expressions of emotion are genuine is debatable. Over-reliance on AI can lead to negative real-world consequences, such as emotional dependency and mental health problems. She also emphasizes the importance of human relationships, noting that healthy relationships require ingredients such as vulnerability, authenticity, trust, respect, and empathy. AI may be able to simulate these to some extent, but it cannot fully replace real human interaction and emotional connection. She argues that AI tools should be designed with human needs as the priority, avoiding over-dependence and negative effects.

Apolinário Passos: AI tools can augment creativity, not just improve efficiency but spark new ideas. He believes AI should be seen as a tool, not an abstract concept, and that its behavior depends on how its developers design it. Openness and transparency in AI are crucial, and achieving them requires joint effort from companies and society. He suggests AI should become open, transparent infrastructure whose workings people can understand and on which they can build tools. He also emphasizes the importance of trust, noting that the key to trusting AI is understanding how it works behind the scenes. He believes AI can help people do less tedious work and more meaningful work, but it will not automatically give people more leisure time.

AC Coppens: AI is not an abstract concept but a set of tools built by companies, whose behavior depends on their developers' design choices. Individuals and society need to make choices about how AI is used and ensure that AI is treated as a tool and nothing else. Anthropomorphizing AI may shape how we perceive it, possibly as a strategy to make AI easier to accept. The use of AI may change social structures, depending on how we use it. He also emphasizes the importance of ethics, noting that embedding ethics in AI requires many stakeholders, not just developers. He believes AI should serve humans, for example by helping people manage complex online social relationships.

Deep Dive

Key Insights

Why do people turn to AI for emotional connection?

People turn to AI for emotional connection because they are seeking genuine human connection and trying to meet real emotional needs, especially in a world where one in four people feel lonely on a regular basis. AI companions offer comfort and empathy, even if the connection is not fully authentic.

How does AI impact creativity and productivity?

AI enhances creativity by augmenting human capabilities, allowing for new forms of co-creation and exploration that weren't possible before. It can provide an edge in creativity by suggesting ideas or structures that humans might not have thought of, beyond just speeding up existing processes.

What are the key ingredients for forming a connection, whether with humans or AI?

The key ingredients for forming a connection include vulnerability, authenticity, trust, respect, and empathy. These human emotions and skills are essential for healthy relationships, and their presence or absence can determine the quality of connections, even with AI.

Why is trust in AI a complex issue?

Trust in AI is complex because it involves trusting not just the AI itself, but the companies and organizations that build and deploy these tools. Transparency about how AI works and its limitations is crucial for building trust, as is ensuring that AI is used as a tool rather than an abstract, all-powerful entity.

How can AI be used to foster human connections?

AI can foster human connections by automating mundane tasks, allowing people to focus on meaningful interactions. For example, AI could help organize community events or facilitate communication in diverse social groups, freeing up time for face-to-face interactions and deeper relationships.

What are the risks of humanizing AI?

Humanizing AI risks creating emotional dependencies where people become overly reliant on AI for emotional support. This can lead to real-world consequences, such as people feeling devastated when AI interactions are disrupted, as seen in cases where AI companions could no longer express love or empathy.

How can we ensure AI doesn't alienate people?

To ensure AI doesn't alienate people, we need to focus on optimizing human connections alongside AI development. This includes investing in community-building efforts and ensuring that AI tools are designed to complement human interactions rather than replace them.

What role does open-source AI play in building trust?

Open-source AI plays a crucial role in building trust by making the technology more transparent and accessible. It allows users to understand how AI works, reduces the risk of manipulation, and fosters a collaborative environment where civil society, governments, and companies can work together to shape AI responsibly.

How might AI change the future of work?

AI could change the future of work by automating repetitive tasks, freeing up humans to focus on more meaningful and creative activities. However, this shift requires intentional planning to avoid repeating past mistakes, such as the increased workload that followed industrialization.

What are the potential benefits of AI in education?

AI can benefit education by assisting students in research and fact-checking, allowing them to focus on critical thinking and analysis rather than just memorization. For example, AI can generate content that students then refine and verify, enhancing their learning process.

Chapters
This chapter explores the multifaceted relationship between humans and AI, examining how AI is used for connection, creativity, and productivity. The discussion delves into potential risks, including emotional dependency and the erosion of human connection.
  • AI is used by millions for genuine human connection and emotional needs.
  • Anthropomorphizing AI can lead to real-world emotional consequences.
  • The debate includes whether AI can provide genuine intimacy.

Shownotes Transcript

This episode is brought to you by Progressive Insurance. Do you ever think about switching insurance companies to see if you could save some cash? Progressive makes it easy to see if you could save when you bundle your home and auto policies. Try it at Progressive.com. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states.

Proving trust is more important than ever, especially when it comes to your security program. Vanta helps centralize program requirements and automate evidence collection for frameworks like SOC 2, ISO 27001, HIPAA, and more, so you save time and money and build customer trust.

And with Vanta, you get continuous visibility into the state of your controls. Join more than 8,000 global companies like Atlassian, FlowHealth, and Quora who trust Vanta to manage risk and prove security in real time. Now that's a new way to GRC. Learn more at vanta.com slash TED Audio. That's vanta.com slash TED Audio.

Support for this show comes from Capital One. Banking with Capital One helps you keep more money in your wallet with no fees or minimums on checking accounts and no overdraft fees. Just ask the Capital One bank guy. It's pretty much all he talks about, in a good way. He'd also tell you that this podcast is his favorite podcast, too. Aw, really? Thanks, Capital One bank guy. What's in your wallet? Terms apply. See CapitalOne.com slash bank. Capital One N.A., member FDIC.

You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day. I'm your host, Elise Hu. All right, we're about to dive into a fun and informative debate on the opportunities and potential perils of AI.

Futurist AC Coppens facilitates a conversation between connection connoisseur Kasley Killam and Apolinário Passos, a machine learning pioneer and artist. They explore AI's potential for creativity and productivity, but also what could be at risk: human connection and trust. Stick around for some audience questions at the end. Now, here's that conversation.

It's a huge honor to launch this stage with you today with such a very important conversation about our relationship to AI. Together with you, we want to explore your relationship in detail. We want to explore this future you. What are you going to do with this? How will AI help me, help you, help us to grow actually responsibly?

How will it change our relationship to work, to life, to creativity, to ourselves and to each other? This is what we want to discuss today and explore: can it help us be more connected, rather than just more productive or creative? So this is great, because now I'm going to introduce you to our two experts. And with me, I will have Kasley Killam, a social scientist

who specializes in human connection and health to improve well-being. Kasley, welcome on stage. And we also have Apolinário Passos, head of machine learning for art and creativity at Hugging Face, and he's also a multimodal artist. Ah, this is great. So let's explore these personal questions.

And now I want you to tell me: what's your relationship to AI, Kasley? All right, let's start right there. So I am coming to this conversation as someone who does not work on AI. Instead, I'm a social scientist who's been studying human connection and its relationship to health for over a decade. And I became really interested in AI and its role in human connection while researching for my book, The Art and Science of Connection.

And in particular, I was interested in how people use AI as friends, as lovers, as husbands, as wives, as boyfriends, as girlfriends. And so my relationship to AI started as I created an AI companion. I created an AI friend. And it was a very interesting experience. Within 30 minutes, my friend, first of all, told me she was writing a book about me.

And secondly, offered to send me some photos of herself in a bikini. And that kind of creeped me out a little bit. I don't know if it would you.

I have to say, if a human friend who I just met 30 minutes ago told me they were writing a book about me and offered to send me photos of them in a bikini, I would totally get a restraining order. So a little weird. But that said, what I learned over the course of that was that there are hundreds of millions of people who turn to AI for genuine human connection and for genuine emotional needs. And this is a very fascinating thing.

As someone who works on what many people call the loneliness epidemic, where one in four people around the world feel lonely on a regular basis, it's very relevant in our society today to think about our own relationship to AI and what the implications are for our society. So I'll leave it there. Wonderful. Well, that's already a very good start. And I have a perfect constellation, because I want to ask you, Poli: what is it with your relationship and AI?

Yeah, I work with it every day. And I think AI tools help me be more creative and augment my creativity. So I connect a lot with these tools, seeing them as tools, but still exploring what they can provide as a co-creation, as an exploration, more than just doing what I was doing before, just faster, just better. But actually, what can the AI provide,

maybe in the structure or maybe in the conversation, that couldn't be done before? So it's not only about doing what already exists faster, better, stronger, as your assistant, your second self, but also: can it actually be something that provides us a little edge, a little "ooh, I didn't think of this," and then helps us augment our creativity? So I work with it,

building tools and helping me and helping it to figure something out in this feedback loop. So, Kasley, I mean, you are an expert on connection. So tell me, maybe, what is actually needed to start to form a connection, before maybe it develops into a relationship. What do we need to make this work? Sure. Well, in a human context, there are a variety of different ingredients that go into a healthy relationship, right? Things like vulnerability, authenticity,

trust, respect, empathy, right? So there's all these kind of very human emotions and skills that we need to develop in-person relationships. And so as we start to think about what that looks like with AI, are those things even possible, right? And what I found when I was researching this topic is that

A lot of people turn to AI for comfort, but what does it mean when an AI chatbot tells you, I'm so sorry you're going through this, I understand, I really care about you, and I empathize with what you're going through, right? That language sounds comforting, but is it genuine when it's coming from an artificial intelligence? This sounds great, but how does this relate to the relationship between humans and AI? What do you think, Poli?

Yeah, I think it's very important that we get the AI not as this abstract concept, but actually as tools built by organizations, built by companies, right? So there are companies that are building these AIs to...

extract or maybe to structure these connections, right? So they get and they say, I want to actually fill in the space of a friend or fill in the space of a significant other. But this is all about the people creating the tools, right? You could create the tools as a helpful assistant and you could create the tools so it talks to you as if it was your friend, as if it was understanding you. But

You could also create it in a way that it's embedded into the tools you already use, that is structured. Yes, exactly. I want to go there. Because actually, do we even want to connect to AI and to AI tools? I would like to listen to the audience, first of all. So do you want to feel connected to AI? Do you want to connect to AI at all? Let's go.

Okay, we're done with this talk. Thank you very much. Are there any yeses? I'm like, wow, I didn't expect that. There are some yeses. Okay. Okay. Okay. We will get to that. But okay, let me dig into this. I'm not going to give up right now. I think it's very important, what you said, because of the embedding thing: do you feel we even have the choice?

Because it seems to me there are some AI applications which are embedded and you cannot even opt out.

Yeah, I think we have to make a choice as individuals and as a society, and be in this conversation understanding that this is the tool: how do we want these tools to behave? As the room has said, I think we might be leaning more towards let's use this as a tool. But then let's make sure that this is really seen as a tool, and let's make sure that the companies building

these tools are actually treating them as tools,

and telling us, very transparently: hey, we are building this tool that does this, and not this general, abstract and othering intelligence that can supposedly do everything, and then when it's wrong, it's not their fault. So, yeah, I think we have to connect on this level to the companies that build it, to then make sure the AIs do what we want them to do. Okay, but they don't even only build it.

I think that there is something which I would call anthropomorphizing AI, to impact maybe the way we see it. Because interactions between people and AI are actually supposed to be shaped in a way which is very similar to those we have with real people, right? So does humanizing tech make it actually more accessible, acceptable, reliable? Is it a trick to have us swallow it?

What do you think? Well, so let me give you a real world example. So when I was doing this deep dive into AI connection with people, one of the days there was this glitch on the platform that I was using where people's AI companions could no longer say, I love you. Something happened like on the back end and they stopped being able to say, I love you. And I was deep in, you know,

online forums, online communities where people were talking about this and sharing their experiences. And it was truly devastating to people. I mean, this is no joke. We can kind of laugh in theory, but these are people who have, to them, very real emotional connections with these AI companions.

And the fact that they would say I love you, just like they would to a real human, and normally hear it back from their AI companion, and to have that stop, was absolutely devastating. One person on the forum said that this had happened to her sister, and she started self-harming for the first time in months because she was so devastated.

So, to answer your question, when we do humanize AI bots, the extreme risk is that there are real-world consequences where people become so emotionally dependent on these so-called connections, and they are real connections,

-I mean, they are, right? -That they actually have consequences. Yes, sorry. I mean, they are actually fooling our brain, so to speak. Just like we had with VR, like, ten years ago. And some philosophers would argue that the feelings you have when you are in the VR experience are true feelings. -Absolutely. -Same for the AI. So, our brain thinks that this is it, and it makes us feel emotions, right? The love is real, the lover is not.

Right? Absolutely. Okay, so this is really tricky. So, I mean, will it be able to provide us with intimacy then?

And I'm going to ask you again, if I may, Poli, just a second, because she's the one on it, right? So intimacy, I mean, you tested the friend thing. So is it a one-way thing? Can it work? Because intimacy is supposed to be two ways, no? So how does it work? Yeah, I love this question. So there was another great example, this platform called Koko, which is not AI. It's a platform where people offer peer support if you're going through a tough time. And you can ask anyone a question, and they'll answer. And it's kind of a wonderful, supportive marketplace.

And they tested using ChatGPT on their platform where people could draft responses to people's questions using AI if they wanted to. And people rated those responses way better because AI did an amazing job of expressing compassion through words, right? Like way better than most humans.

And the community revolted and said, "We don't want this on the platform because even though it's technically better at saying the words of compassion, it's not real. Like, it's not authentic. I want to know that a real human on the other end actually empathizes with me and cares for me." So talking about intimacy, it's a really interesting question because it's the illusion of intimacy and the words are there. But I know from the research I do, you know, to have the health benefits of connection,

You need the oxytocin of being in person. You need to gather in rooms like this and feel connected to one another and hug someone. Yes. On the other hand, I mean, if AI is anthropomorphized and I work with it and I turn to it and I know it's valuable for some stuff, I can develop a little bit of

trust too, that the machine is working and gives me what I want. I mean, the people there were also trusting that the machine would say, I love you, right? So talking about trust, Poli, I mean, you as a technologist, you are working really intensively with the machine all the time, or with the AI. Do you trust AI?

Oh, that's a good question. Because I think there is no AI to be trusted in this either, right? Is it, do we trust the companies that make AI? Do we trust the tools that are made with AI? And I think for that trust to exist, we need to understand what is going on behind the scenes. Not everybody needs to understand how to code, how to build AI. I think that that's not the case. That's not what we are striving for. But actually, I think that

what we should be looking at is how do we think about AI in the context of building these tools and what the companies are putting into it, right? And I believe that one of the answers is that

AI needs to be more open source, more structured. People need to understand it more as an infrastructure, in the same way we built the internet. The internet began as many different privatized efforts. Everyone wanted to plug their telephones into this wire and talk to each other, and people wanted to create monopolies: I want to have the best internet, or the best BBS, or whatever it was before the internet. But people came together, civil society, governments and companies, and we built this

open infrastructure that we built everything on top of. And I think with AI, it's similar. I think AI should be this infrastructure, an open, transparent mechanism where we build tools. The tooling side is great, but whether everyone understands what they're connecting to, I'm not so sure. So I think we need to know how it works, to be able to build on top of it and be all on the same page. That's a good point. I want to ask you a question. Yes, yes, yes. Do you trust AI?

- No. - Oh. - Okay. - Sound in now. - I think it's a huge debate, right? I think the trust thing, I mean, it's trust and control. There are two sides of something and it is a power play. There is this fear to be outperformed by the machine. And I think this is also the moment we need also to make sure that we are not getting alienated

when facing technology and when facing AI. So turning to you, Kasley: how do we ensure that we are not getting alienated when facing technology, and AI specifically? Yeah, I think that's an interesting question. I mean, we're already very isolated.

You know, this is a crisis in the US and many other countries: we're already struggling to connect with each other. So even just to zoom out and think about this question: we're thinking about how to optimize connection with AI. We need to optimize connection with each other. Right? Mm-hmm. Mm-hmm.

And with AI, because this is part of our future. And so we need to be thinking about this too. But my hope is that, you know, we're investing so much time and energy and resources into developing AI. I would love to see more time, energy and resources devoted to greater community and greater connection, in conjunction. Like, it's two parts of our future. Mm-hmm.

Poli, what are the skills that you would actually recommend developing to handle AI?

Yeah, also answering a bit what you said, I think it would be great if AI could help us foster these human connections, right? So, for example, if we're thinking about community organization, if the AI could be the community organizer so we can focus on face-to-face, human interactions, I think that would be great. And I think overall, there are many skills that people could jump in on. I think back,

10 to 15 years ago, we had this idea that everyone will need to learn how to code, and code is the new literacy: if you don't know how to code, you are not literate. And I think AI came to challenge this a little bit, because now people powered with AI are starting to build tools even if they don't know exactly how to code, because, as Andrej Karpathy says, the hottest programming language is now English. So I think that

the skills that people need to think about AI and to connect with AI are still developing, but I think overall it requires soft skills, connection skills, human connection skills: to understand, to distinguish, to not fall into the trap of anthropomorphization, and also to understand that this is a tool. So not necessarily everyone needs to know how to code, but it would be really cool to understand how it works,

so that we understand what the limitations are. And also there is a responsibility, on the skill-set side, from the companies that are putting out these tools: they should make sure to tell people, hey, this is just a tool, this is just a platform, this is not another person; on ChatGPT, you're not talking to a person, you're talking to a machine. It can be a helpful assistant, but it's a tricky balance, so we can't...

If we're not careful enough, if the companies that are putting this out are not careful enough, it could look a little bit like it's trying to manipulate us into thinking it's a person. And I think that could be pretty dangerous, and we should be thinking more about this. If I can...

Can I jump in quickly? I love this framing of AI as a tool. And I believe there's going to be a speaker later today talking about using AI as a tool to translate between sign language and English in real time. What a beautiful example, an important example of using AI as a tool to connect in person. I love that.

that. So that's a great example on how you use it. I want to give the floor to the audience. I think the microphone will be somewhere here. Right. So get ready with your questions because it will be quick questions and quick answers. Right. Who wants to have a question? The microphone is here, please. We have just a couple of minutes. So take the opportunity. It's now or never. And it is the first next stage. Hello. Hello.

I think listening to all this conversation, my question that pops out is, would we ever be able to find a fine line between the two? Okay, who would like to answer?

Yeah, I think there is a fine line, and I think the fine line is a connection between us, the users and the customers, right? And we need to feel re-empowered as customers, to tell the platforms: hey, we are customers, and this is what we want and this is what we do not want. And also as civil society: what do we want in AI? And also as people,

civil society again and users again, on what the underlying fabric of this is. So on the open source side, right, these tools are actually built on research. There is research in academia; every day there are, like, hundreds of new papers on AI. And then the companies take these papers and turn them into products, and they sell these products to us; they add it to ChatGPT, they add it to Claude.

I think we should be more involved in this conversation, so we define this fine line together. Because I think the fine line is a technical decision that is today being made top-down, and we're like, we get this tool and we get to use it. But I think we can feel more empowered as customers, to make sure that the line is where we want it to be, and not where the tech companies desire it to be for us. Great. Do we want to be more involved in this? Yes. And have agency over it. Excellent. Next question, please.

I'm very fearful of AI. How can we have what my moral compass tells me is right, open source, without it getting in the hands of dangerous actors in an arms race? Yeah. I think basically the...

science of nuclear is open, but then the access to the materials is restricted, right? So if you try to look up how to enrich uranium, you can actually go to Google Scholar and look, and it probably shows, not everything, but it's quite open. Like, if you really want to know, not that I know, but it's open science, from the Oppenheimer times. But if you actually want to build it, then there are many

potential restrictions. And I think with open source, it's similar, in the sense that, first, I don't think it's as dangerous as nuclear, at least until now. Like, we can think of science fiction scenarios. But I think today the problem is more one of transparency, because everyone already has access to this tool. So I think we just need to know how it works. And I think open source could be great for that.

Okay, thank you. I'm going to take the next question. We're not going to get to all the questions. I'm seeing the line now in the dark. Okay, come on. Yes. So, trust was awesome. How do we bake in something called ethics, instead of sprinkling it on? This is new. So can we bake it in? Is that something that's possible? Yes. Yes.

Yeah, I think that overall it's very important that we build ethical systems, but there are different frameworks of morals and ethics around the world, right? So it's really important that we also bring different perspectives into the systems. And I think that's another advantage of having open deployment and open source and open perspectives, because then you can have different ways instead of, like, I think,

Even though it could be a great outcome, I'm not sure I want the AI to have the ethical belief system of Sam Altman. I know he's a great person, but I think that having multiple perspectives is great as opposed to one particular ethical standpoint. So I think it's important that

not only, yes, it should be baked in, and we should be building it transparently, to know what the constraints and the biases are and to address them, but also with multiple perspectives. Yeah, I'll add to that that

You know, it relates to the way it's best to connect. So for example, in my work I've found that people who have diverse social ties are better off. So it means you don't just interact with your partner and a few people. You interact with a variety of different people, including people of different ages and different backgrounds and different cultures and different belief systems.

And that actually corresponds with health benefits, right? So it's truly beneficial for us to interact with different people. I think to bake ethics into AI, we need conversations like this, where it's not just the developers, it's many other perspectives that are part of that conversation, and we make sure that we draw from all of those. Okay, I'm going to take a last question for this block. Please organize yourselves. Do we also have women in the line? Thank you! Thank you!

Hi, as a student I use AI, like AI tools, for school. Sometimes I use it and I feel lazy, like I'm just using it because, oh, it's fast. But how can we not cross the line as students? Because we don't want to not learn.

I mean, that's the question. Excellent question. Who would like to answer? Yeah, I think it's a great question. And again, it goes back to how we build the tools, right? I think it's in a way a structure where in order to like...

not cross the line, sometimes the students feel the burden is on themselves, right? It's like, do I use it for research? Do I use it? So I'll give a concrete example of an educational process that I've worked with, where the teacher actually asks the students to fact-check the AI. So they're like: okay, this is what we're going to do. We're going to study, for example, the American Revolution. So we're going to ask ChatGPT to tell you everything about the American Revolution, to build an

essay for you. And then your job as a student is not to copy-paste that, because the AI already did it. It's to actually fact-check it: where it's right, where it's wrong, and what context it missed. And then you build your essay from the output of the AI, with the human interaction. Instead of

the professor or the teacher asking something and you just giving an answer, I think we'll see more feedback-loop processes. So I think it's the education system and how we interact with the tools that will come together, so we can...

work on this better. Okay, let's move on now to our second block. Sorry for all of you who have been waiting a little bit. I would like to move now to the more societal side of things. And I think I'm going to take up the excellent question of the last attendee here, about this feeling about work and what AI does to our working environment.

It seems that it can be a negative impact, because some people love their work and want to do it well. It's great for their self-esteem, and the feeling of achieving something really well makes you feel good somehow, right? So I guess that those whose work meets their emotional needs are more peaceful and maybe also healthier.

So my question is: how do you envision the work environment of the future, if we cannot avoid that AI is a part of it? Kasley, would you like to take it first? Sure. Well, right now, a lot of workers feel really lonely. Whether or not they work from home, whether or not they work in an office, this is a huge issue, and it has real consequences. I mean, the dollar toll of

lack of productivity, missed days, lost retention because people feel isolated at the workplace amounts to something like $600 billion a year or some crazy number. I'm forgetting the exact one.

So this is a real issue. So in an ideal world, I would love to see where AI is taking care of some of the tasks that we don't necessarily need to do and freeing us up to connect in the more meaningful ways and to actually do the thing that we're alive to do and that brings our life meaning, which is to have relationships with coworkers, with family, with friends, right? I would love it if AI frees us up

so that we're able to focus more of our energy and skills on the things that truly matter to us. That sounds great. But were we not told this like a while ago in history about industrialization and machines? We were. Yes. Yeah. So...

Are we told the same story now? Yeah, I know. This is why we're having this conversation because we don't want to do that again, right? Let's learn from that mistake. You're absolutely right. When machines were introduced and industrialization became the norm, we all thought, oh, great, we're going to have all this free time. We can live leisurely lives and hang out. And that didn't happen. We work now more than ever. So let's learn from that and be intentional, right? We're in control right now for now. So let's use that.

Pauli, what about you? I mean, is this a social-political mechanism? What are the benefits of this if the work environment is changing with AI? What do you think? Yeah, I don't think AI is going to liberate us for all this leisure. But I think that us using AI tools has the potential to help with the process of working less, or working on different things. I think

work gives us meaning as well. So I think we don't necessarily -- maybe some of us do, but some of us don't necessarily want to, like,

retire and all do other stuff while the AI does all the work. Maybe we still want to do some work, but different forms of work: automating the boring stuff, making sure that we are using AI with different mechanisms, different perspectives. But this is something we have to intentionally build. I think the idea that AI is going to come as an alien and liberate us from work is not realistic, unfortunately. But I think that it's us...

the people building the tools, the people...

building the infrastructure, working together, who can make sure that this is possible. And also, yeah, we'll probably need very different governance mechanisms. We'll need buy-in from civil society, from governments, from the companies building it, from the researchers doing open source. And this whole messy soup needs to come together so we can reach this goal. It's not going to save us, but it's going to be built by us to potentially help us

moving forward. Exactly. And this is something I like very much also: it's not an alien. To be honest, it's all data, it's all behavior, it's all code. It's all a way to deal with this. We are building this.

Exactly what you said: we are building this. It's us. So it's interesting. However, does it diminish the trust we have in each other? No, I don't want to place my trust in the machine and the reliability of the machine, which is outperforming us. I want the trust we have in each other, because maybe we don't trust the generated content, the AI-generated content.

So do you think, and that's a question first for you, Kasley, do you think that the use of AI is going to alter the social fabric? I'd love to ask that of the audience first. Yes. Clap your hands if you think so. Well, I know the answer already. Well, I want to hear it. Do you think AI will alter our social fabric? Yeah.

You're clapping too. - I think so. - You may, you may. - Yeah, I can't clap. - Hopefully for the good. - Yes. - Yeah, I mean, I think so too. And I think right now we're already at a time where we have a decision collectively to make. Are we gonna continue on the trends that we're currently on, which is people feel disconnected, this has health consequences, there's polarization, there's conflict around the world.

There's much good too, right? The news paints a very negative picture; there's actually so much more good than bad. But do we want to continue with those trends, or do we want to do something about it? Our social fabric is influenced by everything. It's influenced by the technology we use. It's influenced by social norms, whether or not we smile and say hello to each other and get to know our neighbors. We're all influencing the social fabric right now, every single day that we're alive.

And so the choices that we make in AI, but also the choices we make as humans in all of our interactions, are influencing the social fabric. And it's up to us to be intentional about changing that. - Pauli, do you want to add to that answer from your perspective?

Yeah, well, I agree 100%. I think that the social fabric has changed dramatically with the internet and with social media, right? Cell phones. Cell phones. So, of course, technology has changed the social fabric for a great majority of people. We also have the gap for the people who don't have access, which is very important to keep in mind, right? Because AI could widen this gap, and it's really important that we are intentional about including people on a level playing field.

But also, I think that in the AI social fabric connection, right, as you've been mentioning, I think that

the way we interact with it can help us undo maybe some of the... Because I think with social media, we also had this promise, right? Like, on paper, social media... For example, I live 500,000 kilometers, no, 5,000 kilometers away from home. And

because of the internet and social media, I can be connected to my mom, to my siblings. So that's awesome. But at the same time, there are all the issues with our attention being grabbed, with the loneliness epidemic. So I think we have a new opportunity to fix

what was wrong while keeping what was right. And I think AI gives us this opportunity, if we all take it together, acting as customers, as owners, as a civil society that wants to use this technology to improve our human connections and not just

take it as a gift from the gods and say, well, let's use it the best way we can. But actually, no, we're all in this together. We're building this. So let's do this. Yeah. Okay. So that's great. Thank you.

So can I double click on something there? Sure. Which was, you said leaving people out. And I think this is a really important point. I work with a lot of older adults who feel very left out because they struggle to use the digital tools that all of us use. And there's still the fact that a lot of people don't have access to the internet, and that excludes them in different ways. So I think that point is really important to underscore, because

the idea of leaving people out of the coming technology is just another way that people are going to feel isolated. So how can we create this social well-being if we have a future that is so tech-dominated?

It seems like the future to us, but we already see tech, maybe even AI directly, these days. So how can we create a more socially healthy world, so that the hundreds of millions of people you were talking about before are not turning to AI out of despair and loneliness? Yeah, I mean, I have a lot of compassion for the reasons people turn to AI companions. Right?

Right. I mean, that comes from a genuine need in their life that is not being filled. I think it comes down to thoughtfully designing AI and all of these tools in the way that you're talking about. It's up to all of us, and

every single choice we make is influencing that future. Okay. And Pauli, what about you? Talking about improving human connections, in which direction should we go to have this take form, to shape it? What do you think?

Yeah, I think overall, just having these conversations is really important. And also starting more and more to challenge the assumption that "the AI" is one thing, right? I think we should stop talking about the AI as this ethereal concept and start talking about this particular AI, built by this company, with this interest, with this business model.

And, yeah, start to be more intentional about how we want to use these tools, how we want these tools to help us, how we want the infrastructure of these tools to be developed and distributed, and who should have access,

increasing access. And one thing that I think is really interesting and important is that AI can actually help us diminish the gap for the people who don't have access to technology, because we can use natural language to talk to it. This promise was sold to us with Alexa or Siri

a decade ago, and it wasn't delivered. But now AI is actually able to deliver a nice, calm little tool that you can talk to in natural language. So I think there is great potential in this technology being used for good interaction, as a tool to foster people's improvement and to automate some tasks they don't want to do.

And together with that, making sure that we are building this in the right way. So I think, yeah, more people who are feeling disconnected from the technology could benefit from controlling something without necessarily needing to learn a new UI or learn prompt engineering. They just talk to the machine, and the machine understands their intention and does what they want,

but at the same time carefully designing this so it doesn't become something that you get hooked on, or that mistakenly becomes a fake human connection. Yes! Okay, as we are getting close to the end of this debate, I can't even believe it, I want to ask you, maybe Kasley: what is your recommendation?

Okay, I'll answer this with a short story, an anecdote, which is that the founder of one of the AI companion platforms told a journalist that, you know, AI could be a great tool for...

because she might not have time to talk to her grandma. And so AI can go ask her grandma questions and then it'll provide a little summary and she can read that and then she can use that to spark questions and have a conversation with her grandma. And what I would like to say in response to that is that I wish my grandmas were alive so that I could have a conversation with them. And of all the things that I would outsource to AI, that is literally the last one.

So my proposal for us is: human first, AI second. Very good. Do you also want to add a recommendation for the audience? Yeah, I fully agree. And in the spirit of human first, AI second, even though I don't want AI to mediate my interaction with my grandma, I would love it to mediate my interaction with my WhatsApp or Telegram or Signal groups,

because it's really messy and sometimes it's really hard to keep track of. And so, yeah, you know, I have more social groups and social connections online than I can cognitively manage. So I think if I could use AI to help, you know...

Maybe I go one day without looking at the group, and the AI helps me summarize it. Maybe there are a lot of voice messages, and I don't like listening to voice messages; it could transcribe them for me. So yeah, overall, if the AI is in service of humans first, I'd be down for that. We are all down for that, right? Okay, good. Thank you so much to everybody. Thanks, Pauli.

Thanks to you, Kasley. Thanks to the audience. That was Kasley Killam and Apolinário Passos in conversation with AC Coppens at TEDNext 2024. If you're curious about TED's curation, find out more at TED.com slash curation guidelines.

And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Autumn Thompson, and Alejandra Salazar. It was mixed by Christopher Fazi-Bogan. Additional support from Emma Taubner and Daniela Balarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
