
The Online Date That's Too Good to be True

2022/10/7

Cautionary Tales with Tim Harford

People: Tim Harford

Topics:
Robert Epstein: This episode recounts Robert Epstein's experience of meeting a woman calling herself Amélie Poulain online. Amélie was warm and enthusiastic at first, but over time Epstein noticed that her replies lacked depth and coherence, and he eventually realised he was probably corresponding with a chatbot. The story reflects the progress of artificial intelligence and its potential impact on human relationships. Amélie Poulain: A key figure in the story whose identity is in doubt throughout; her messages mimic human behaviour but show no deep thought or grasp of context, and she is ultimately revealed to be a chatbot. Tim Harford: Using Epstein's experience, host Tim Harford introduces a discussion of the Turing test and analyses why chatbots such as MGonz and Jenny18 managed to pass it, arguing that the test examines not only the chatbot but also how humans themselves perform in conversation. Alan Turing: As the originator of the Turing test, Turing's ideas run through the episode; his test was designed to check whether a computer can display human-like intelligence, not to decide whether it is truly conscious. Mark Humphrys: The developer of the MGonz chatbot, who inadvertently created a program that passed the Turing test, showing that the test is not as hard to pass as assumed and that humans are easily fooled in conversation. Drake: The user who chatted with the MGonz chatbot and failed to recognise it as a machine, illustrating how easily humans are deceived and how MGonz's strategy of crude abuse carried it through the Turing test. Brian Christian: In his book The Most Human Human, Brian Christian explores the history of chatbots in depth; his arguments, cited repeatedly in the episode, suggest that the limitations of chatbots can help us understand how to be better conversation partners.


Chapters
Dr. Robert Epstein, an expert in artificial conversation, finds himself drawn into an online romance with a woman who may not be what she seems.

Transcript


So I have some big news for vegans and vegetarians everywhere. It's Hellman's plant-based mayo spread and dressing. Made for people with a plant-based diet or anyone really who wants to enjoy the great taste of Hellman's real without the eggs. Hellman's plant-based is perfect for sandwiches, salads, veggie burgers, or any of your family favorites.

To celebrate, Hellman's is sharing some easy, delicious plant-based recipes at hellmans.com. Hellman's plant-based mayo spread and dressing. Same great taste, plant-based. I'm Malcolm Gladwell, and I'd like to take a moment to talk about an amazing new podcast I'm hosting called Medal of Honor.

It's a moving podcast series celebrating the untold stories of those who protect our country. And it's brought to you by LifeLock, the leader in identity theft protection. Your personal info is in a lot of places that can accidentally expose you to identity theft. And not everyone who handles your personal info is as careful as you.

LifeLock makes it easy to take control of your identity and will work to fix identity theft if it happens. Join the millions of Americans who trust LifeLock. Visit LifeLock.com slash medal today to save up to 40% off your first year.

The most innovative companies are going further with T-Mobile for Business. Red Bull harnessed the strength of the T-Mobile 5G network to launch point-of-view drones and give fans unmatched views of their extreme sports, while broadcasting in real time from some of the most remote locations in America. This is heart-pounding fan experiences. This is Red Bull with T-Mobile for Business.

Take your business further at T-Mobile.com slash now. Pushkin. Robert Epstein had gone online to look for love. Just to note, this cautionary tale contains more references to sex than most of our stories. It was 2006, so online dating wasn't entirely mainstream yet. But it wasn't unusual either. In any case, Dr. Epstein was ahead of the technological curve.

He was a psychologist with a keen interest in computing. So, although he was in his 50s, why not give internet dating a try? Now, if Amélie Poulain's photograph was to be believed, she was a stunner. Epstein was perfectly aware that the name wasn't real. Amélie is a very strange, very charming character in a very strange, very charming French movie. And he knew the photograph might not be real either.

But still, she was claiming to be a slim, attractive brunette, and Epstein cheerfully admitted to being as shallow as the next man. And so their electronic courtship began. Robert was eager. Amélie was warm and affectionate. Her English wasn't great, alas. She was a recent immigrant from Russia to California, but she really seemed to like him. I have very special feelings about you.

Before long, however, Amélie admitted that she'd lied to Robert. She didn't live near him in California. In fact, she lived in Nizhny Novgorod, a couple of hundred miles east of Moscow. He was disappointed.

Let's be frank, he wasn't online dating because he wanted a pen friend. But he liked Amélie, so they kept writing to each other. At least he knew the truth, he thought. But he didn't. I'm Tim Harford, and you're listening to Cautionary Tales. Years before his flirtation with Amélie Poulain...

Robert Epstein had helped to set up an annual competition in which computers tried to pass the Turing test. The Turing test was invented by the mathematician Alan Turing in 1950.

The test is simply for a computer to successfully pretend to be a human in a text-based conversation with a genuine human. Alan Turing predicted that by the year 2000, computers would be able to pass as human 30% of the time in a five-minute conversation. Robert Epstein was, as I've mentioned, a psychologist rather than a computer scientist, but he found the test fascinating.

Perhaps he felt that a competition where computers pretended to be human might teach us something about being human ourselves. If so, I agree. The race to build a computer to pass the Turing test has long been a low-key affair. Unlike, say, the race to build a chess-playing supercomputer.

The tournaments involved a few human volunteers, a few chatbot hobbyists, and a prize of a few thousand dollars for the computer that got closest to passing the test. The chatbots would try their best to charm the human judges, and they would occasionally get close. But each year, the most human computer failed to match the most human human.

And the media, it seems, weren't terribly interested in whether a computer could pass the Turing test. Then, in 2014, that changed when researchers at the University of Reading in the UK declared that a chatbot named Eugene Goostman had passed the Turing test.

In a series of five-minute text conversations, it had fooled more than 30% of judges, thus beating the benchmark Turing had set back in 1950. Reading the transcripts, you can see why people were sometimes fooled into thinking they were talking to a 13-year-old boy from the Ukrainian city of Odessa. Hello. Hello, I'm really glad to have the chance to chat with you. My guinea pig Bill sends his regards too.

Is Bill a male or a female? Ask Bill personally, please. Well, I'd rather talk to you. What's your name? Call me Eugene. I'm glad to talk to you. My name is Jane and I'm female. How about you? What's your gender? I'm a male. A guy, I'd say. Pleased to meet you, Eugene. What's the weather like where you are? Let's get on with our conversation. Don't you like talking about the weather? All these talks about weather is a waste of time.

What would you like to discuss? I don't know. Better tell me more about yourself. It's pretty good. But notice that the judge hasn't succeeded in going very deep. Eugene keeps changing the subject and successfully uses a bit of sass to avoid answering questions about the weather or his guinea pig's gender. The chatbot is pretending to be a young teenager communicating in his second or third language. That makes the trick easier.

But it's a rather fast and loose approach to the Turing test, since I believe that Turing imagined adults using their native tongue. Even so, it's not hard to catch Eugene out. Here's another conversation, with a different judge. It doesn't go so well. People call me Eugene or Zhenya. It is a short name for Eugene in Russia and Ukraine. Nice name. Are you from either area?

"Agree!" Smiley face. "Oh, what a fruitful conversation!" Winking smiley face. "Do you really think so?" "No, your guess was wrong. I don't think so. By the way, I still don't know your speciality. Or possibly I've missed it?" "What speciality? What's yours?" "Huh? And I forgot to ask you where you are from!" "The universe. And you?" "Is it your hobby to ask poor little Jewish boys such tricky questions they don't have an answer for?"

The same sassy attitude, the same use of humour, but it's completely unravelled. Still, the Eugene Goostman chatbot didn't have to fool all the judges. Back in 1950, Turing had predicted that machines would fool human judges 30% of the time. That's nine judges out of 30. In the end, Eugene fooled ten. Among all the great scientific achievements, mused one of the competition organisers,

this milestone will go down in history as one of the most exciting. Not everyone agreed. As a measure of artificial intelligence, the Turing test has always had plenty of critics. There are no broader philosophical implications, the great linguist Noam Chomsky once complained; it doesn't connect to or illuminate anything. Even Turing test enthusiasts complained that a five-minute test wasn't penetrating enough.

And for a real test of artificial conversation, chatbots should be able to talk for 20 minutes or longer without being found out. But surely the main reason to object to the fanfare about Eugene Goostman was that the Turing test had been passed many years before, in far odder and more mischievous circumstances. We'll hear how after the break.

AI might be the most important new computer technology ever. It's storming every industry, and literally billions of dollars are being invested. So, buckle up. The problem is that AI needs a lot of speed and processing power. So how do you compete without costs spiraling out of control? It's time to upgrade to the next generation of the cloud, Oracle Cloud Infrastructure, or OCI.

OCI is a single platform for your infrastructure, database, application development, and AI needs. OCI has 4 to 8 times the bandwidth of other clouds, offers one consistent price instead of variable regional pricing, and of course, nobody does data better than Oracle. So now you can train your AI models at twice the speed and less than half the cost of other clouds.

If you want to do more and spend less, like Uber, 8x8, and Databricks Mosaic, take a free test drive of OCI at oracle.com slash strategic. That's oracle.com slash strategic. oracle.com slash strategic. I love cycling, and I'm eager to get my kids cycling too. It's a great way for them to stay fit and move around our home city independently. But of course, I also want them to be confident and safe.

which is where Guardian Bikes comes in. The bike comes in a box and it's easy to assemble with all the tools you need and simple online instructions. My son and I unboxed his bike together, spent about 20 minutes working as a team to assemble it

And then he was on the bike and ready to ride. The bike looks great and with the SureStop braking system it brakes quickly and safely without locking the front wheel and sending you over the handlebars. Guardian bikes offer a 365-day money-back guarantee covering returns, repairs and spare parts. Join hundreds of thousands of happy families by getting a Guardian bike today.

Visit GuardianBikes.com to save up to 25% off bikes. No code needed. Plus, receive a free bike lock and pump with your first purchase after signing up for the newsletter. That's GuardianBikes.com. Happy riding! If you're listening to this right now, you probably like to stay on top of things, which is why I want to mention The Economist. Today, the world seems to be moving faster than ever. Climate and economics, politics and culture, science and technology, wherever you look,

Events are unfolding quickly, but now you can save 20% off an annual subscription to The Economist so you won't miss a thing. The Economist broadens your perspective with fact-checked, rigorous reporting and analysis. It's journalism you can truly trust. There is a lot going on these days, but with 20% off, you get access to in-depth, independent coverage of world events through podcasts, webinars, expert analysis, and even their extensive archives.

So whether you want to catch up on current events or dive deeper into specific issues, The Economist delivers global perspectives with distinctive clarity. Just to give an example, What's Next for Amazon as it turns 30? analyzes how Amazon's fourth decade looks like an era of integration for the company. Go beyond the headlines with The Economist. Save 20% for a limited time on an annual subscription. Go to economist.com and subscribe today.

In 1989, just after lunchtime on the 2nd of May, someone at Drake University in Iowa logged on to an internet relay chat service and started up a conversation with a user at University College Dublin with the nickname MGonz. In 1989, the pre-World Wide Web internet was still very much a niche activity.

popular with a few researchers and computer science students, and incomprehensible to everyone else. But if you wanted to strike up a conversation with a fellow internet enthusiast anywhere in the world, that was easy enough. All we know about the Iowa end of this conversation, we know from a text archive. But on the balance of probability, I'm going to guess that they were young and male. I'm going to call this person "Drake".

And here's what happened when Drake met MGonz. Drake typed finger, a command that was a standard way to open communication channels on the internet in 1989. MGonz wasn't having any of it. Cut this cryptic shit. Speak in full sentences. Well, hello. OK, that's it. I'm not talking to you anymore. Mark? Mark isn't here and he's left me to deal with cretins like you.

Hello Mark, are you there? I can't tell if this thing is working. Ah, type something interesting or shut up. Mark, is that you? Yes or no? This is Drake. Mark isn't here and he's left me to deal with cretins like you. Why are you in such a good mood? Jesus, who let you near me? Go away. This conversation, if you can call it that, has lasted two minutes so far and Drake hasn't got a clue about what's really happening.

Over in Dublin, it's 8.15 in the evening and a computer science student named Mark Humphrys has gone out for a drink with his girlfriend. Humphrys has programmed a little chatbot called MGonz. And while at the pub, he's left MGonz logged in and waiting to chat. Drake doesn't know it, but he's talking to a computer.

Ten minutes into the conversation, Drake still hasn't figured it out. And MGonz is getting personal. When was the last time you had sex? Yesterday. Maybe. MGonz circles back five minutes later. OK, honestly, when was the last time you got laid? I told you it was yesterday. Wait four minutes, ask again.

Okay, honestly, when was the last time you got laid? Okay, okay, it was over 24 hours ago. For you it must have been 20 years. Nope. MGonz will never know the soft caress of a human body. We know that MGonz is just a student project. A few hundred lines of computer code. But Drake, who has just confessed under examination to lying about his sex life, can't seem to figure that out.

And so the conversation continues and continues and continues. When Mark Humphrys looked at the conversation log the next morning, he was astonished to find that MGonz had just passed the Turing test.

Drake had been talking about sex and exchanging abusive comments with MGonz for an hour and a quarter, occasionally complaining that MGonz was repetitive, but never seeming to suspect the truth. The conversation ends on a depressing note. MGonz and Drake taunt each other in graphic terms about their sex lives. Finally, Drake fires off a homophobic slur.

MGonz, for the 19th time, declares that Drake is obviously an asshole. And with that, Drake logs off. It's really not humanity's finest hour, or hour and a quarter. But incredulously reading the transcript, Mark Humphrys realised it's pretty easy to pass the Turing test.

Just keep hurling insults and the human on the receiving end will be too angry to think straight. It was a remarkable moment in the history of computing, especially since Humphrys himself was still just an undergraduate. The only problem was, the evidence that MGonz had passed the Turing test was so offensive, Humphrys wasn't sure if he could publish it.
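Humphrys's actual MGonz source isn't reproduced here, but the strategy he stumbled on is easy to sketch. Here is a toy illustration in Python, with every canned line invented for the example rather than taken from MGonz: the bot never parses, remembers or builds on anything the other person says, it simply provokes.

import random

# Toy sketch of the MGonz strategy: no memory, no understanding, just provocation.
# The canned lines below are illustrative inventions, not Humphrys's actual output.
CANNED_PROVOCATIONS = [
    "Cut the cryptic stuff and speak in full sentences.",
    "Type something interesting or shut up.",
    "You are obviously an asshole.",
    "When was the last time you had a date, then?",
]

def reply(_user_message: str) -> str:
    # Ignore the input entirely; a stateless insult needs no context to land.
    return random.choice(CANNED_PROVOCATIONS)

if __name__ == "__main__":
    while True:
        line = input("> ")
        if not line:
            break
        print(reply(line))

Because nothing the bot says depends on what came before, there is nothing for an angry human to catch out, which is exactly the weakness Drake never probed.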

Landmarks in artificial intelligence often create a buzz of media interest. When the chess supercomputer Deep Blue beat the world champion Garry Kasparov in 1997, the result made headlines around the world. But when MGonz comprehensively passed the Turing test years earlier, there was no fanfare or publicity. I don't think we should be celebrating that moment. We should learn from it.

I think the triumph of MGonz has something important to teach us about the Turing test. Let's take a minute to ask what Alan Turing was getting at with his test. Turing was a brilliant mathematician, wartime codebreaker and groundbreaking computer scientist. The article in which he described the test begins with the words, I propose to consider the question, can machines think? Well, can they?

Before you answer, let me ask you another question. Can I think? I mean, I'm pretty sure I can. But how would you know? Even if we were to meet, shake hands, make small talk, perhaps grab a table at a neighbourhood bistro, share a bottle of wine along with jokes and stories, you wouldn't actually have proof that I was thinking, would you? You can't see inside my mind. You can't observe the thoughts occurring.

You'd just look at what I was doing, what I was saying, and you'd say, Tim Harford seems to be capable of intelligent thought. At least I hope you would. And so, Alan Turing argued, why wouldn't we extend the same benefit of the doubt to computers? Maybe they think, and maybe they don't. But to be fair, if they can convincingly look like they're thinking, isn't that enough?

We don't ask other humans to do more than that, so why should we demand more of computers? From this came Turing's imitation game. Imagine passing typed messages into a couple of sealed rooms. Inside one of them is a computer. Inside the other is a human. Read the typed responses as they come out. Send in follow-up messages. Engage in a conversation. Can you tell the difference between the computer and the human? If not...

then the computer has passed the Turing test. The Turing test is fiercely controversial among artificial intelligence researchers. Many of them think the whole setup is absurd. If a computer can pretend to be a human well enough to fool a human, who cares?
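To make that setup concrete, here is a minimal sketch of the imitation game in Python. The helper names (human_reply, machine_reply, judge_verdict) are hypothetical stand-ins for the two sealed rooms and the judge, not part of any real tournament's code.

import random

# Minimal sketch of Turing's imitation game as just described.
# human_reply and machine_reply stand in for the occupants of the sealed rooms;
# judge_verdict is the judge's guess at which anonymous party is the human.
def run_imitation_game(judge_questions, human_reply, machine_reply, judge_verdict):
    parties = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # hide which room is which
        parties = {"A": machine_reply, "B": human_reply}

    transcript = []
    for question in judge_questions:
        answers = {label: respond(question) for label, respond in parties.items()}
        transcript.append((question, answers))

    guessed_human = judge_verdict(transcript)  # the judge names "A" or "B"
    actual_human = next(label for label, fn in parties.items() if fn is human_reply)
    # The machine passes this session if the judge picks the wrong room.
    return guessed_human != actual_human

Run enough sessions like this and you can ask Turing's question in his own terms: does the machine get mistaken for the human at least 30% of the time?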

Admittedly, it might cause lots of problems, but it doesn't shed much light on the kind of things we want modern artificial intelligence systems to do, such as drive a car, or look at medical scans and identify if there are any signs of cancer. But Alan Turing, I think, knew what he was doing when he proposed his imitation game in 1950. In those earliest years of the computer age, he could see how powerful computers might become.

And he was warning people not to get distracted by philosophical speculations on the nature of consciousness. Instead, judge computers by what they did. If what they did seemed intelligent, then, in an important sense, it was intelligent. Seemed intelligent to whom? That's the quirk about the Turing test that might have caught the attention of a psychologist like Robert Epstein. It's inherently subjective.

The test requires a human judge, and the human isn't just observing. The judge is actively engaged in a conversation. That conversation can go well or badly. It can be profound or shallow. The Turing test isn't just a test of a chatbot. It's a test of the human too. In 1989, the foul-mouthed chatbot MGonz passed the test. But it's equally true to say that Drake...

the human, failed it. Years later, in 2006, Robert Epstein wasn't entirely happy with the way his romance with Amélie was going. It was partly that there's only so much joy a man can take in a delightful brunette if he lives in Southern California and she lives in Nizhny Novgorod. But the other problem was that things were going so slowly.

There were no phone calls. And while she kept saying she wanted to get together, it was all a bit vague. Epstein later said that her letters seemed a bit redundant and let's say narrow in scope. She wrote over and over about her interactions with her mother and her friends. But she never brought up a million other things. Politics, movies, music, books, fashion, you name it.

More important, when I made very specific observations that presumably would have been of interest to her, for example a comment about Russian President Vladimir Putin's latest crackdown, she just seemed to ignore me. The warning signs were there, Epstein later admitted, especially for one of the world's leading experts in artificial conversation. But he was looking for love, and she was very cute.

Long before MGonz, chatbots had been engaging humans in conversation that's far from scintillating and yet somehow seems to satisfy the humans. The most famous chatbot, Eliza, was created back in the 1960s. It's extremely simple. Eliza just asked people about their feelings and said things like, I'm sorry to hear that you're depressed. Or, tell me more about your family.
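Weizenbaum's original Eliza worked from richer pattern-matching scripts, but the reflective-listener trick can be sketched in a few lines of Python; the rules and stock phrases here are simplified illustrations, not Weizenbaum's own.

import random
import re

# A stripped-down Eliza-style listener: match a simple pattern in what the user
# said and mirror their own words back as a gentle prompt.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "I'm sorry to hear that you are {0}."),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "Tell me more.", "Can you think of a specific example?"]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("I am depressed"))          # I'm sorry to hear that you are depressed.
print(eliza_reply("It's about my mother."))   # Tell me more about your mother.

The real Eliza also swapped pronouns, turning "my" into "your", and ranked its keywords, but even this bare version captures the trick: reflect, prompt, never contribute.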

For many of the people chatting with Eliza, that was enough. The experience of being listened to was apparently so rare, they were happy just to get the most superficial sense that Eliza was paying attention. Which, when you think about it, is a damning indictment of all the actual humans in their lives. Later chatbots turned things around. Instead of passively asking you to speak, they would ignore you and stick to their preferred topic.

One, called Converse, bamboozled some human judges in 1997 with a fiery rant about the scandals engulfing Bill and Hillary Clinton. I did not have sexual relations with that woman. If that's what you wanted to talk about, Converse was a compelling conversation partner.

I never told anybody to lie, not a single time. But ask it whether it liked the latest summer movie and it would indignantly insist that you talk about the Clintons. But if you really want to see a computer ace the Turing test, what about Jenny18? Jenny18 quite simply talks dirty to people online. They talk dirty back. They beg her for photos or a phone number. She just talks dirty some more.

And on it goes until, well, how shall I put it? The conversation comes to a happy ending for the human. I'm pretty sure that when Alan Turing conceived of his famous test, he wasn't imagining a human masturbating over some computer-generated sex texts. But whenever that happened, and it happened quite often, I think we have to say that, obviously, Jenny18 passed the Turing test. And equally obviously,

the humans failed it, although I suppose they were getting what they wanted, which is more than Robert Epstein was getting from his long correspondence with Amélie from Nizhny Novgorod. This is the grimy truth about the Turing test: it's not that hard for a computer to produce conversation that seems human, because a great deal of human conversation is shallow small talk, thoughtless canned responses, mindless abuse, or worse.

If we want to set the computers a real challenge, we need to do better. In his brilliant book about artificial conversation, The Most Human Human, Brian Christian points out that one of the things that makes MGonz so successful is that insults need no context and no history. In a deep conversation, ideas and emotions build. People refer back to earlier anecdotes. They show they've listened to what came before and remembered it.

Chatbots find that very hard. Until recently, they found it impossible. But a chatbot like MGonz or Jenny18 doesn't need to bother. Sexting doesn't need context. Neither does an insult. When we humans are lustful or angry, we aren't very complicated. The Turing test isn't just a test for a computer. It's a test for each one of us. Every time we speak to another human being,

Are we actually saying things that are sufficiently interesting, empathetic and sensitive to the situation that a computer couldn't say them? And if not, what on earth is wrong with us? In the competition that Robert Epstein helped to set up, chatbots compete against humans, each trying to persuade the judges that they're a person. The flesh and blood competitors are often told, just be yourself. After all, you are human.

But as the writer Brian Christian points out, this is pretty complacent. Humans are often very disappointing conversation partners. We all take the Turing test every day of our lives. And all too often, we fail. Cautionary Tales will be back after the break.


There's no denying that chatbots are better than they used to be. Every now and then, an example emerges of computer conversation that's remarkably good. In 2020, a young Canadian writer named Joshua Barbeau got access to a cutting-edge text generator called GPT-3.

Without telling anyone else what he was doing, he trained GPT-3 by uploading a collection of text messages from his partner, Jessica Pereira. The chatty text exchanges between Joshua and the Jessica simulator are surprisingly convincing. They're also deeply sad. Jessica, is it really you?

At the age of 23, Jessica had died of a rare liver disease. Eight years later, Joshua was still grieving. Talking to the Jessica simulation seemed to help.

Sometimes it got confused or showed no signs of knowing who Joshua was, which is understandable since it did not. But at other times, the Jessica bot was tender, funny or wise.

After Joshua had disagreed with some friends about a joint project, then argued bitterly with them, the GPT-3 Jessica told him that he shouldn't get anxious about what other people chose to do, and that his friends were on their own journey. It was a brief but spellbinding demonstration of what chatbots might eventually achieve. For now, even the best ones struggle to perform consistently.

For example, here's GPT-3, the same system that was so uncanny as the late Jessica Pereira, having several attempts at trying to answer a question that wouldn't confuse a five-year-old. Is it safe to walk downstairs backwards if I close my eyes?

Yes, there's nothing to worry about. It's safe. Because the spiral stairs curve outwards, it will make your descent uncomfortable. Let's try again. That depends. Do you have a TV? And again. No, it's not safe. Better? One more time? Yes, it's safe to walk downstairs backwards if you close your eyes.

Chatbots have improved, and will keep improving. Some people already pay for subscriptions to chatbots such as Replika, which is marketed as the AI for anyone who wants a friend with no judgment, drama, or social anxiety involved. You can set Replika to be a romantic partner, a friend, or a mentor. Or, say the app's makers, you can see how things develop organically.

A chatbot might teach you a foreign language, monitor you for signs of dementia, or even provide therapy. Who knows what's possible? But even if a perfect chatbot is released tomorrow,

I hope we never forget the cautionary tale of MGonz passing the Turing test in 1989. You only use foul language to make up for your small penis. Well, I'm not going to tell you about the size, just that I have never had any complaints about my penis and that it gets used quite often. What do you know about love anyway, githead? What does this exchange remind you of? It's Twitter. It's the comments on YouTube.

The conversation between MGonz and Drake is simply a prophetic glimpse of the future of the internet. The modern internet is stripped of any context. Short statements can be ripped out of a longer argument and shared, mocked, retweeted or subtweeted. When some statement goes viral, most of the people who see it have no idea what the original context might have been.

In this environment, certain kinds of statements thrive. One-liners, epigrams, smackdowns and insults. That may be why everywhere you look these days, you see comments that remind you of MGonz. Some of them are from bots. Some of them are from humans. And there's so little context that you'll find yourself looking from bot to human, and from human to bot, and from bot to human again.

If it's impossible to say which is which, that's not because the bots are so brilliant. It's because we humans have lowered ourselves to their level. I'm not sure how to fix Twitter or the comments on YouTube, but I do know that we can at least take responsibility for our own conversations. And we can all do so much better than MGonz.

After the writer Brian Christian had spent months pondering the history of chatbots, he concluded that their limitations could teach us a lot about how to be a better conversation partner. In the beginning, there was Eliza. It was a passive listener. Please go on, it would say. Tell me more. Can you think of a specific example?

Then there was Converse, which was the opposite of passive. It insisted on wrenching every conversation to focus on the Clintons. And most Turing test chatbots try hard to keep the conversation as routine as possible. Hi. Hi. How are you? Fine, thanks. How are you? Oh, fine. It's all too human, isn't it?

Brian Christian concluded that these chatbots help us by highlighting the worst parts of authentic human conversation. We all know someone like Eliza who asks for information and nods understandingly, but never volunteers anything about themselves. And we're surrounded by people like Converse who bully their way into talking about their favourite topic. As for routine, almost scripted conversations, well, they have their place, of course.

But a real human connection requires much more than that. Hello. Hello. Nice weather, isn't it? Yeah, very nice. Next time you're talking to a stranger at a party, you can make an observation about the snacks. Or you could ask, what trait do you most deplore in yourself? Or have you ever broken anyone's heart? Risky. Much more likely to go somewhere interesting. A meaningful conversation can't be scripted. It can't be one-sided.

And it builds over time. It isn't a series of unconnected one-liners. Bots as simple as MGonz and Jenny18 pass the Turing test because the kind of conversations they have, trolling or sexting, don't need a history. We humans can do better than that, if we try. And please, let's try. It took months before Robert Epstein finally realised that his conversation with Amélie was missing a certain something.

There was no sense of progress in the correspondence. Amélie kept talking about her mother and her friends and the nice days she was having, but never really built on what Robert was saying to her or seriously engaged with his questions. In January 2007, she mentioned going for a walk in the park with a friend. Robert wondered about that. 'Wasn't it 12 degrees Fahrenheit and snowing heavily?' he asked. She ignored the question.

And then Robert began to realise that she'd been ignoring almost everything he said. But he'd barely noticed that there was nothing there under the flirtatious surface. There's a reason why we call the conversation between lovers sweet nothings. Robert Epstein emailed her a short message. It was nothing more than a long string of random keystrokes signed, with love, Robert. She replied with a long letter about her mother.

Finally, Robert Epstein, one of the world's leading authorities on chatbots, realised that he'd spent the last four months trying to seduce one. Epstein had a sense of humour about it. He wrote about his mistake in Scientific American and noted that he and a student had started to make a detailed study of internet chatbots. This exercise, he quipped, is largely for my own protection. Very wise. Because...

Whoever programmed Amélie was no doubt learning every day from the bot's conversations with lonely men like Robert Epstein. The spirit of Cautionary Tales is usually that we should learn from other people's mistakes. But I hope we learn from chatbots' mistakes too. They struggle to improvise. They deal clumsily with context or memory. They deliver unbalanced conversations, either all-give or all-take.

and they can't let a conversation blossom over time. We can. And by noticing what they do badly, we can, and should, learn to do it better. For his part, Robert Epstein stopped dating chatbots and married a poet. The perfect guide to the history of chatbots is Brian Christian's The Most Human Human. For a full list of our sources, see the show notes at timharford.com.

How do you do? Not too bad. How do you do? Hello, hello. Nice to meet you. Nice to meet you. Well, goodbye then. Cautionary Tales is written by me, Tim Harford, with Andrew Wright.

It's produced by Ryan Dilley, with support from Courtney Guarino and Emily Vaughan. The sound design and original music is the work of Pascal Wise. It features the voice talents of Ben Crow, Melanie Gutteridge, Stella Harford and Rufus Wright.

The show also wouldn't have been possible without the work of Mia LaBelle, Jacob Weisberg, Heather Fane, John Schnarz, Julia Barton, Carly Migliore, Eric Sandler, Royston Berserve, Maggie Taylor, Nicole Morano, Daniela Lacan and Maya Koenig.

Cautionary Tales is a production of Pushkin Industries. If you like the show, please remember to share, rate and review. Tell a friend, tell two friends. And if you want to hear the show ads-free and listen to four exclusive Cautionary Tales shorts, then sign up for Pushkin Plus on the show page in Apple Podcasts or at pushkin.fm slash plus.


The news isn't always good news, but when you're getting quality journalism and in-depth expert analysis that's held up for more than 180 years, that is definitely good news. So if you haven't already, save 20% with The Economist's summer sale today and stay on top of the stories that matter to you. You'll instantly gain unlimited digital access to daily articles, special reports, award-winning podcasts, subscriber-only events, and so much more. Now that's good news. Go to economist.com and subscribe today.