
909. The Existential Threat of AI to Human Civilisation 😃 (Topic & Vocabulary)

2024/11/19

Luke's ENGLISH Podcast - Learn British English with Luke Thompson

People
Geoffrey Hinton
Luke
Topics
Faisal Islam: The interview centres on the potential threat of artificial intelligence, in particular the possibility that AI could surpass human intelligence and try to take control of humanity. It explores how worried experts are about this issue and whether the world is taking the threat seriously.

Geoffrey Hinton: Hinton expresses continuing concern, but is relieved that the world is beginning to take the existential threat of AI seriously. He stresses that large language models are not simple statistical tricks but are based on a theory of how the brain works, which makes them potentially very similar to the human brain. He believes AI will soon exceed human intelligence, which raises worries about control and existential threat. He acknowledges that experts disagree about how AI will develop: some believe AI will remain obedient to humans, while others believe it will seize control. He thinks it is wise to be cautious. He criticises current legislation for its lack of restrictions on military uses of AI, and governments for their unwillingness to limit its use in defence. He is particularly concerned about the risk of AI autonomously making lethal decisions, for example robot soldiers or drones, and believes this requires an international treaty along the lines of the Geneva Conventions. He also points out that there is a race in the military application of AI, which heightens the concern. He distinguishes two risks: humans using AI as a weapon, and AI autonomously trying to take over; he worries about both. He believes that before AI becomes smarter than humans, huge resources should be put into researching whether it can be controlled. He also worries that AI will replace many ordinary jobs, widen the gap between rich and poor, and could lead to right-wing populists being elected. He believes AI's impact on employment could be so profound that welfare systems and inequality need to be rethought, including considering universal basic income. He estimates there is roughly a 50% chance that AI will try to take over within the next 5 to 20 years. He credits governments' efforts to regulate AI but criticises their shortcomings, especially the lack of regulation of military uses and the lack of enforcement behind the rules. He believes competition between tech companies is likely to accelerate AI development at the expense of safety. He suggests learning practical skills, such as plumbing, to cope with the effect AI may have on many mid-level intellectual jobs.

Deep Dive

Key Insights

Why do most experts believe AI will exceed human intelligence in the next 5 to 20 years?

Most experts believe AI will exceed human intelligence within the next 5 to 20 years due to rapid advancements in technology and the increasing capabilities of AI systems. This potential intelligence shift raises serious concerns about control and existential threats.

What is the existential threat posed by advanced AI?

The existential threat posed by advanced AI includes the possibility that machines could become more intelligent than humans and potentially take control. This could lead to severe destabilization of human society or even pose a threat to human existence.

Why does Geoffrey Hinton believe large language models like ChatGPT may operate similarly to human brains?

Geoffrey Hinton believes large language models like ChatGPT may operate similarly to human brains because they are based on the same principles of language processing and neural networks that underpin human cognition. Both systems use language as a key component of intelligence, making it difficult to understand the exact differences.

Why is the risk of AI taking autonomous lethal actions significant?

The risk of AI taking autonomous lethal actions is significant because AI systems could make decisions to kill people without human intervention. This is particularly concerning in military applications, where AI could be used to create autonomous weapons that operate independently, leading to potential mass destruction.

Why is international regulation of military AI applications lacking?

International regulation of military AI applications is lacking because governments are reluctant to restrict their own uses of AI for defense purposes. Most current laws and regulations include clauses that exempt military applications, making it difficult to control the development and deployment of military AI globally.

Why could AI widen the wealth gap?

AI could widen the wealth gap because it will increase productivity and wealth, which will likely be concentrated among the rich. The unequal distribution of wealth will exacerbate economic disparities, potentially leading to social unrest and the rise of right-wing populists.

Why might plumbing be one of the safest jobs in the age of AI?

Plumbing might be one of the safest jobs in the age of AI because current AI systems are not very good at physical manipulation. Jobs that require hands-on, mechanical skills are less likely to be automated, making plumbing a viable career option for the foreseeable future.

Why is the autonomous use of AI in military applications a major concern?

The autonomous use of AI in military applications is a major concern because it could lead to AI systems making lethal decisions without human oversight. This could result in devastating consequences and significant loss of life, making it crucial to establish international regulations before such incidents occur.

Why should governments consider universal basic income in the context of AI?

Governments should consider universal basic income in the context of AI because the rise of AI is likely to displace many workers, particularly those in mundane jobs. Without a safety net, these workers could fall into poverty, leading to social instability. Universal basic income could help distribute the wealth generated by AI more equitably.

Why does Geoffrey Hinton think tech companies might be letting down their guard on AI safety?

Geoffrey Hinton believes tech companies might be letting down their guard on AI safety because of the intense competition to be leaders in the AI market. This competitive pressure can cause companies to prioritize rapid development over thorough safety measures, potentially leading to dangerous AI systems being released.

Shownotes Transcript


Only Boost Mobile will give you a free year of service when you buy a new 5G phone. But I'm your hype man. When you purchase an eligible device, you get $25 off every month for 12 months with credits totaling one year of free service. Taxes extra for the device and service plan. Online only. This is an ad by BetterHelp. What's your perfect night? Is it curling up on the couch for a cozy, peaceful night in?

Therapy can feel a bit like that. Your comfort place where you replenish your energy. With BetterHelp, get matched with a therapist based on your needs entirely online. It's convenient and suited to your schedule. Find comfort this season with BetterHelp. Visit BetterHelp.com today to get 10% off your first month. That's BetterHelp, H-E-L-P dot com. ♪

You're listening to Luke's English Podcast. For more information, visit teacherluke.co.uk. Hello, listeners. Welcome back to Luke's English Podcast. How are you today? I hope you're doing fine. You're doing all right? I hope so. Lovely day here. Not that it's important, really, in the grand scheme of things, but I thought you might like to know. The sun is shining. It's quite a nice, warm

afternoon here in October when I'm recording this. Slightly unseasonably warm, but pleasant nonetheless. Let me tell you about this episode that you're about to listen to. So first of all, there's a PDF for this one, and you can get it by just clicking the link that you'll find in the episode description. Just check the show notes and you'll find a link there for the PDF, which you can download if you want to. And if you like, you can follow along with me

You can read while you listen or you can just go back and check the PDF later because there will be loads of vocabulary in this episode and a vocabulary list and texts which have the vocab highlighted and stuff like that. So it would be, I think, a good idea for you to go back to the PDF and check out all that vocab and perhaps review it later. It'll definitely help you remember it.

Anyway, link in the description for the PDF, and I'm going to start reading from the PDF right now. So here we go in 5, 4, 3, 2, 1. So this is a podcast for learners of English around the world. In my episodes, I talk about almost anything, but my aim is always to help you learn English

and perhaps to entertain or stimulate you a bit in the process. In this episode, I'll be talking about the cheerful subject of the existential threat of AI to human civilization. So I'm going to discuss the subject of artificial intelligence and the impact it will have on human civilization in the foreseeable future, both good and bad.

I'll explore and discuss the topic in some detail and we'll explain some vocabulary as we go. Hopefully this will be interesting as well as useful for your English as it will give you plenty of words and phrases that you can use to discuss this subject which everybody is talking about. AI is everywhere at the moment but can you say it? Yes, can you actually say AI?

I've noticed a lot of my students talking about AI in English lessons. The topic comes up a lot, but my students don't actually pronounce this correctly. So I think this would be a good place to start. Just pronounce AI, right? So, A... I. Repeat it after me: A... I.

But what happens when those two letters or sounds are spoken one after the other quickly is it sounds like this, AI, AI, all right? AI, otherwise known as AI. So we're talking about AI, artificial intelligence. So AI is everywhere and will only be more present in our lives in the future. Of course, we're all aware of AI.

ChatGPT, and a lot of us use it all the time now, including me. I use it as an English teacher, especially to generate texts to present language in context. It's incredibly useful as a way of generating example sentences in English, as well as plenty of other interesting texts and things.
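(If you're reading the PDF and you happen to be a programmer, here's a minimal sketch of that kind of use, written with the OpenAI Python library. The model name and the prompt are just illustrative assumptions I've invented for the example, not a record of how I actually prepare lessons.)

```python
# A sketch only: asking a large language model to generate example
# sentences for a vocabulary lesson. Model name and prompt are invented.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You are an assistant for an English teacher."},
        {"role": "user",
         "content": ("Write five example sentences using the phrase "
                     "'to get to grips with', for B2-level learners.")},
    ],
)
print(response.choices[0].message.content)
```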

So we're all used to a certain amount of AI in our lives now. And I should say that obviously ChatGPT is just one small example of the various forms of AI that exist and will exist. But we're all used to a certain amount of AI in our lives now. And we're probably aware that there are various opinions of AI going around, both positive and negative opinions.

Like with most technology, it'll probably have positive and negative impacts. But I wonder if we all really realise the full extent that AI will change our world over the next few years and decades. I mean, it's insane, really, if you look at the different predictions. And without being alarmist or paranoid or anything...

We could be looking at deeply profound changes to everything as a result of the continued development of AI. Alarmist, if you are alarmist, it means that you are perhaps taking a slightly panicked view of things, like you're raising the alarm, ringing an alarm, as if to say, this is an emergency, this is really serious.

when in fact it's not that serious and it's not really an emergency. So that's what it means to be alarmist. It's to perhaps maybe overreact about something and overreact about the seriousness or the level of emergency going on. So without being alarmist or paranoid or anything, we could be looking at deeply profound changes to everything as a result of the continued development of AI technology.

That's what this episode is about. That's what I really want to talk about today. And I was inspired to do this episode after I watched an interview with one of the world's leading experts on AI. I don't know, you might have seen it too. It's a video that's been going around on YouTube, BBC Newsnight interview with who is described as the godfather of AI. So I found it absolutely fascinating as well as quite disturbing.

So it's an interview with the Nobel Prize winning godfather of AI, Geoffrey Hinton. The interview took place earlier this year on BBC Newsnight, which is an evening news and current affairs TV show on the BBC. The interview was then uploaded to the BBC Newsnight YouTube channel this month, October. That's the month when I'm recording this.

which is where I saw it. So this is from the video description. This is a description of the interview. So Geoffrey Hinton, former vice president of Google and sometimes referred to as the godfather of AI, has recently won the 2024 Nobel Physics Prize. So this guy used to be vice president of Google and is...

has been referred to as the godfather of AI. So this is a person who is definitely an expert in this subject. He recently won the 2024 Nobel Physics Prize. He resigned from Google in 2023. He resigned. That means he quit his job.

Not sure exactly the reasons why, but he stepped down from his position at Google last year and has warned about the dangers of machines that could outsmart humans. OK, by the way, if you're looking at the PDF, you'll see that some words and phrases are highlighted in a sort of orange highlighter.

And these are bits of vocab that I'll be explaining as I go along and which are summarised as well at the end of the episode. So...

He resigned from Google in 2023 and has warned about the dangers of machines that could outsmart humans. So to outsmart humans means to be more intelligent than humans. So he's talking about the danger, the possibility that machines or AI in the future could become more intelligent than humans, could outsmart humans. So let me summarise the video.

and then go through the script of the interview, giving my explanations and comments, and of course explaining and highlighting certain bits of vocabulary. If there's time, I'll do a vocabulary recap at the end, and certainly on the PDF you'll see a full vocabulary list with definitions, examples and comments. So, that BBC Newsnight interview is

It's available on YouTube. I'm not going to play the audio of the interview directly in this episode, but I can give you a summary of the video and I will read from the transcript of the video. So here's a summary of the video. It's this. Basically, experts believe AI will exceed human intelligence in five to 20 years, raising concerns about potential control and existential threats.

Okay, so experts believe that AI will exceed human intelligence in 5 to 20 years. It'll exceed human intelligence, meaning it will become greater than, right? It'll go higher. Human intelligence is here at this level and AI will...

get to a higher level than that. It will exceed human intelligence in up to 20 years. Raising concerns, meaning making people concerned. So if you raise concerns about something, it means that people's concerns or worries about a certain subject are raised. They go up, right? To raise something means to make something go up.

Rise means go up. So people's concerns rise and this information raises people's concerns about potential control and existential threats. Potential means it's possible, it could happen in the future. It's something that is a possibility for the future. So potential threats, threats that could happen in the future.

We're talking about potential control threats, meaning whether we will be able to control this technology and existential threats, meaning threats to our very existence as the human race, which sounds a little bit. What's the word for it? As I said, paranoid, maybe alarmist.

But actually, when you hear what this man has to say, you realise that it's actually quite reasonable and quite rational to have these fears about these threats, potential dangers.

So here are the highlights of what was said in the interview, and I will expand on these things later in the episode. But here's a general overview of the main points. So first of all, AI is expected to exceed human intelligence soon, as I said. Many experts are concerned about the existential threat posed by advanced AI.

the existential threat posed by advanced AI, that AI represents or is or poses, presents a big threat to human civilization. Our very existence is threatened by this, so it poses an existential threat. Large language models may operate similarly to human brains. So when we talk about large language models, we're talking about things like ChatGPT, which are...

forms of AI that are based on inputting massive amounts of language data into the system, right? So essentially ChatGPT is a large language model. The basis of its intelligence is language, that it is fed massive samples of language, which it then processes, and this is the basis of its intelligence. So

It's difficult to understand that. I find that really complicated to understand, but I also find it absolutely fascinating because of this. Large language models like ChatGPT may operate similarly to human brains. So this is essentially the idea that something like ChatGPT and the human brain might actually operate similarly.

in a really similar way, because maybe language is the key to understanding the way that human minds or human brains process things, that language is absolutely central to our intelligence. And if you create a machine that sort of is based on language, that uses its ability to process language

as the core of its intelligence, you might end up with something that's very similar to a human brain, which I find fascinating.

We continue. So the risk of AI taking autonomous lethal actions is significant, which is rather terrifying. So the risk of artificial intelligence taking autonomous legal, not legal, autonomous lethal actions. So lethal actions are actions which will kill people. So lethal refers to something that could kill.

So lethal actions are things that could kill people and autonomous lethal actions. Autonomous means that the AI chooses to do it on its own.

Right. It doesn't need to be told to do it. It just kind of does it on its own or maybe does it as a consequence of some other instruction or order that it's been given. But basically, we're talking about AI choosing to kill people. And the risk of this is significant, significant, meaning sort of large in its importance overall.

Okay, so there's actually a large chance that AI could actually decide to kill humans. Again, sounds paranoid and alarmist, but, you know, just wait and see. So international regulation of military AI applications is lacking. So international regulation, this is basically applying rules to control things. So, you know, different jurisdictions are

global or local, you know, whether it's like EU law or some sort of global convention. This is what we mean by regulation. So systems of rules and laws to control things. So international regulation of military AI. This is artificial intelligence used in military applications. So when we say applications, that means not applications on your phone, but

uses, the ways in which, in this case, AI is used. So that's military applications, that would be using AI for military purposes. So regulation of this is lacking. So there's not enough control or regulation that would limit the application or use of AI in military situations. This is one of the most terrifying aspects of this.

Also, AI could widen the wealth gap, right? The gap, this is the gap between the rich and the poor, okay? The fact that there's a huge difference, a huge disparity between those people who are rich and those people who are poor. And I mean, really, the way it works is that there's an increasingly small number of people who are becoming increasingly rich and

So you get to a situation where there's a kind of a small minority, very small minority, who have most of the money. And then all the other people, relatively speaking, have very little money. So you end up with this big gap between poor people and rich people. So there's this huge imbalance in society. So AI could widen this gap, make it wider, make it bigger.

affecting society negatively. And plumbing may be one of the safest jobs in the age of AI.

Plumbing refers to all of the systems in someone's home, normally a home or a building or something, all the systems that deal with water. So all the pipes in your kitchen, all the pipes in your bathroom that transfer water around your apartment. Something as basic and mechanical as that. So that's plumbing.

this might be actually one of the safest jobs that you could choose to do in the future in the age of AI. Okay, so this is pretty serious stuff. Okay, it's pretty serious stuff. It's pretty fascinating stuff. It's slightly disturbing.

Let's get into the interview transcript. Okay, so now I'll read out the transcript of the interview. Listen and follow, and I'll go through this word by word afterwards. So first of all, let me just read through the script. I'll kind of reconstruct the interview.

See if you can follow it all. And yeah, we'll be going through this script line by line later in the episode. Okay, so let's get started. So here's the transcript. So Faisal Islam, that's the interviewer, talks to Geoffrey Hinton about the threat of AI on BBC Newsnight. That doesn't mean the fact that AI could be a threat to BBC Newsnight necessarily.

This just means that the interview was on BBC Newsnight. I think you understand. Right. So here we go. This is where the script begins. So I began by asking Geoffrey Hinton whether he thought the world is getting to grips with the issue of AI or if he's as concerned as ever. So this is Geoffrey Hinton. I'm still as concerned as I have been, but I'm very pleased that the world is beginning to take it seriously.

In particular, they're beginning to take the existential threat seriously, that these things will get smarter than us, and we have to worry about whether they'll want to take control away from us. That's something we should think seriously about, and people now take that seriously. A few years ago, they thought it was just science fiction.

and the interviewer. And from your perspective, from having worked at the top of this, having developed some of the theories underpinning all of this explosion in AI that we're seeing, that existential threat is real? Yes. So some people think these things don't really understand. They're very different from us. They're just using some statistical tricks.

That's not the case. These big language models, for example, the early ones, were developed as a theory of how the brain understands language. They're the best theory we've currently got of how the brain understands language. We don't really understand either how they work or how the brain works in detail, but we think probably they work in fairly similar ways. What is it that's triggered your concern? It's been a combination of two things.

Playing with the large chatbots, particularly one at Google before ChatGPT-4, but also with GPT-4, they're clearly very competent. They clearly understand a lot. They have a lot more knowledge than any person. They're like a not very good expert at more or less everything. So that was one worry.

And the second was coming to understand the way in which they're a superior form of intelligence, because you can make many copies of the same neural network. Each copy can look at a different bit of data, and then they can all share what they've learned. So imagine if we had 10,000 people, they could all go off and do a degree in something, share what they'd learned efficiently, and then we'd all have 10,000 degrees. We'd know a lot then.

We can't share knowledge nearly as efficiently as different copies of the same neural network can. So the key concern here is that it could exceed human intelligence? Indeed, massively exceed human intelligence. Very few of the experts are in doubt about that. Almost everybody I know who's an expert on AI believes that they will exceed human intelligence. It's just a question of when.

and at that point it's really quite difficult to control them? Well, we don't know. We've never dealt with something like this before. There are a few experts, like my friend Yann LeCun, who thinks it'll be no problem. We'll give them the goals and it'll be no problem. They'll do what we say. They'll be subservient to us. There are other experts who think absolutely they'll take control. Given this big spectrum of opinions, I think it's wise to be cautious.

I think there's a chance they'll take control. And it's a significant chance. It's not like 1%. It's much more, interviewer. Could they not be contained in certain areas, like scientific research, but not, for example, the armed forces? Maybe, but actually, if you look at all the current legislation, including the European legislation, there's a little clause in all of it that says that none of this applies to military applications.

Governments aren't willing to restrict their own uses of it for defence, interviewer. I mean, there's been some evidence even in current conflicts of the use of AI in generating thousands and thousands of targets. Yes. I mean, that's happened since you started warning about AI. Is that the sort of pathway that you're concerned about? I mean, that's the thin end of the wedge, interviewer.

What I'm most concerned about is when these things can autonomously make the decision to kill people. So, robot soldiers? Yeah, those or drones and the like. And it may be we can get something like the Geneva Conventions to regulate them, but I don't think that's going to happen until after very nasty things have happened. There's an analogy here with the Manhattan Project and with Oppenheimer.

If we restrain ourselves from military use in the G7 advanced democracies, what's going on in China? What's going on in Russia? Yes, it has to be an international agreement. But if you look at chemical weapons, the international agreement for chemical weapons has worked quite well. I mean, do you have any sense of whether the shackles are off in a place like Russia?

Well, Putin said some years ago that whoever controls AI controls the world. So I imagine they're working very hard. The West is probably well ahead of them in research. We're probably still slightly ahead of China, but China is putting more resources in. In terms of military uses of AI, I think there's going to be a race.

interviewer. It sounds very theoretical, this argument, this thread of argument, but you really are quite worried about extinction level events. We should distinguish these different risks. The risk of using AI for autonomous lethal weapons doesn't depend on AI being smarter than us. That's a quite separate risk from the risk that the AI itself will go rogue and try to take over.

I'm worried about both things. The autonomous weapons are clearly going to come. But whether AI goes rogue and tries to take over is something we may be able to control or we may not. We don't know. And so at this point, before it's more intelligent than us, we should be putting huge resources into seeing whether we're going to be able to control it.

interviewer, what sort of society do you see evolving? Which jobs will still be here? Yes, I'm very worried about AI taking over lots of mundane jobs. And that should be a good thing. It's going to lead to a big increase in productivity, which leads to a big increase in wealth. And if that wealth was equally distributed, that would be great. But it's not going to be.

In the systems we live in, that wealth is going to go to the rich, not to the people whose jobs get lost. And that's going to be very bad for society, I believe. So it's going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected.

interviewer. So to be clear, you think that the societal impacts from the changes in jobs could be so profound that we may need to rethink the politics of, I don't know, the benefit system, inequality? Absolutely. Universal basic income? Yes, I certainly believe in universal basic income. I don't think that's enough though, because a lot of people get their self-respect from the job they do.

And if you put everybody on universal basic income, that solves the problem of them starving and not being able to pay the rent. But it doesn't solve the self-respect problem. So what? You just try to... The government needs to get involved? I mean, it's not how we do things in Britain. We tend to sort of stand back and let the economy decide the winners and losers. Yes.

Actually, I was consulted by people in Downing Street and I advised them that universal basic income was a good idea. And there is, you said, 10 to 20% risk of them taking over. Are you more certain that this is going to have to be addressed in the next five years, next parliament, perhaps? My guess is between five and 20 years from now, there's a probability of about half that we'll have to confront the problem of them trying to take over.

Are you particularly impressed by the efforts of governments so far to try and rein this in? I'm impressed by the fact that they're beginning to take it seriously. I'm unimpressed by the fact that none of them is willing to regulate military uses. And I'm unimpressed by the fact that most of the regulations have no teeth. Do you think that the tech companies are letting down their guard on safety because they need to be the winner in this race for AI?

I don't know about the tech companies in general. I know quite a lot about Google because I used to work there. Google was very concerned about these issues and Google didn't release the big chatbots. It was concerned about its reputation if they told lies. But as soon as OpenAI went into business with Microsoft and Microsoft put chatbots into Bing, Google had no choice.

So I think the competition is going to cause these things to be developed rapidly, and the competition means that they won't put enough effort into safety. People, parents, talk to their children, give them advice on the future of the economy, what jobs they should do, what degrees they should do. It seems like the world is being thrown up in the air by this, by the world that you're describing.

What would you advise somebody to study now to kind of surf this wave? I don't know, because it's clear that a lot of mid-level intellectual jobs are going to disappear. And if you ask which jobs are safe, my best bet about a job that's safe is plumbing, because these things aren't yet very good at physical manipulation. That will probably be the last thing they're very good at. And driving? What about driving? No, driving, no, that's hopeless.

That's been slower than expected, but it's going to go. Journalism might last a little bit longer, but I think these things are going to be pretty good journalists quite soon, and probably quite good interviewers too. Okay, well, thank you very much for your time. You're welcome. Right, so there you go. That was the script of the interview itself. Now, I will go through that transcript word by word in a moment, helping you to understand everything. But first...

Let's look at the key insights from the interview. So this is another summary, let's say, of the main points of the interview, but just expanded slightly with some of my thoughts and comments. Excuse me while I drink some tea. Excuse me while I drink some tea. Excuse me. Little Jimi Hendrix moment. OK, so here are those key insights from the interview. First of all, impending intelligence shift. Shift here meaning a movement, a change.

Obviously, a movement from humans to AI. Most experts agree that AI will surpass human intelligence in the coming years, highlighting the urgency or importance of understanding its implications. Super intelligent AI might be a threat to us or it might not. But since there's potentially a 50-50 chance of it being dangerous to us, it would be wise to be cautious about it. How and why would it be a threat to us?

is a question. Now, it might not be the sci-fi version, which is basically like what we've seen in the movies. You know, AI suddenly decides that humans must die for some reason. So it might not be that version, but AI could still destabilize our society severely in a variety of ways.

control challenges. As AI becomes more competent, controlling its actions may become increasingly difficult, raising ethical and existential questions. So this is the classic science fiction storyline that we've seen in films like The Terminator and The Matrix. The AI becomes self-aware and can't be controlled. And then it sees humans as a threat to its existence.

or perhaps it identifies humanity as a problem that needs to be solved. It's kind of like the Avengers Age of Ultron situation. Tony Stark develops a form of artificial intelligence as a tool to help protect the Earth, but for some reason it becomes evil and decides that humanity needs to be stopped. To be fair...

Maybe that will happen. Maybe super intelligent AI will see humans as a problem that needs to be solved. Continuing the PDF, to be fair, overpopulation and human actions are probably largely to blame for environmental and societal collapse.

So who knows? Maybe AI will consider us to be a threat and will take action to stop us destroying everything. Perhaps it will be like Thanos from Avengers Infinity War and will decide that half of humanity needs to be wiped out in order to guarantee the survival of the human race as a whole. I mean, who knows?

But as I said, it might not be as simple as AI choosing to kill all humans. It might be that human life is put in danger as a consequence of the things AI does, especially if we can't control it. Cognitive parallels, cognitive meaning relating to the way the mind works, parallels meaning similar things.

The structure of large language models like ChatGPT mirrors cognitive processes in the human brain. So it mirrors it. It's like very similar to it.

It reflects it. It's almost exactly the same as cognitive processes or thinking processes in the human brain, suggesting they may possess unique forms of understanding. So maybe even AI, large language models like this, might have quite a profound understanding of how we think and what it's like to be human. Yeah.

Maybe our ability to process languages is central to how our intelligence works, and making AI that processes language the same way as we do is a way of kind of reverse-engineering human-like intelligence. Basically, perhaps AI will think just like humans do. Maybe it will be a lot more similar to humans than we realise.

We could be looking at intelligences that operate in a similar way to human minds, but with much greater capacity for complex decision-making, multitasking, memorizing, processing, and calling upon a much wider resource of knowledge than most human individuals can have in their brains. So these things could think just like humans, but with a way greater capacity for everything.

memorizing, drawing upon collective knowledge, processing things quickly, multitasking, you know, sort of super, superhuman stuff. Then we move on to the autonomous weapons risk. So the potential for AI to autonomously make lethal decisions, that means choosing to do it on its own, poses or presents

a severe risk necessitating immediate regulatory attention. So this means that we really need to start stepping in and making global laws that limit the use of AI in military conflicts. And it's obvious why, right? I mean, we're looking at an arms race, just like the old-fashioned arms race of the Cold War, right?

the space race and all that stuff. But, you know, there's a significant risk here, because any regulations about AI always have a clause in the law that sort of says, actually, none of these rules apply to military AI. Because everyone is so concerned about defending themselves, or about trying to compete with rivals on Earth who might be developing the technology more quickly.

Because no one wants to be in a situation where suddenly they're faced with an enemy who's got more sophisticated artificial intelligence and is using it for military purposes, because then, you know, they take over the world, right? But we need to find a way for Earth to agree on a set of regulations, a bit like the way we've done for chemical weapons. There are international laws that regulate the use of chemical weapons. There are, bizarrely, laws and rules for war, war regulations.

You know, there are such things as war crimes. War is a defined thing. And we try to make sure that nations, even when they're fighting against each other, comply with these laws as much as possible. And in a sense, when a nation breaks those laws, they instantly become the bad guy. You know, it's interesting the way that nations wage war and take military action against each other.

And they're constantly sort of managing their position in the conflict and trying not to be the bad guy or trying not to prove that they are, let other people prove that they're the bad guy in the situation.

And certainly breaking certain international laws that regulate military conflicts, that's a definite sign that if you're doing that, then you're probably the bad guy in the situation. Anyway, so we need to take immediate action in terms of making regulations, laws, that limit the use of AI in military situations.

So continuing the PDF, AI is definitely being used for military purposes and it's already being used. And this is terrifying when you consider how efficient and effective it could be. And that's relating to not just AI being like Ultron from the Avengers movie, which is a form of technology that becomes self-aware, like Ultron or Skynet in the Terminator films.

that it becomes self-aware and then decides to have a robot versus humans war. Rather, it's making sure that humans controlling AI aren't able to use that AI to wipe out their enemies in extremely efficient and quite horrifying ways.

What a jolly subject for an episode of the podcast. Perhaps I'll go back to telling terrible jokes in the next episode, but, you know, I just feel like I had to talk about this. I watched this video and I was just, like, stunned by it. I mean, all this sort of thing has been in my mind, and I'm sure that you're the same, that you read these things and it's just kind of, like, shocking, the things that they're talking about. And the stuff that's normally just reserved for science fiction content

You'd think, oh, it's just a kind of a dramatic science fiction story. But this stuff is becoming alarmingly close to reality. Let's move on to the next point, which is global regulation gaps, which is what I've been talking about. Current international agreements lack enforceability. To enforce law means to basically apply law and actually make it work. For example, the police can enforce law or courts can enforce law. Essentially,

finding people guilty of crimes if these laws are broken. So this is how we talk about the enforceability, enforcing law, applying law in the real world. So current international agreements lack enforceability regarding military uses of AI. So basically, the laws are not quite strong enough or clear enough to be able to actually control people's behaviour.

Potentially leading to unchecked advancements by nations, right? Unchecked, meaning unregulated. The idea of humans controlling lethal AI with no regulations is more frightening than the idea of AI autonomously trying to wipe out humans. It's more of a realistic concern that one side or one faction...

in a situation would suddenly get their hands on extremely advanced and sophisticated AI weaponry and they would use it with devastating consequences. Just like in the nuclear age, there is already an arms race underway involving AI in which competing power blocks are racing to develop lethal AI systems to prevent one faction having supremacy.

The result could be that AI becomes armed, dangerous, fully developed and poised to wreak havoc. There's a happy thought for you. Also, economic disparities. So it's not just Skynet and Terminators being sent back from the future to kill housewives or whatever. We're talking about economic effects as well, right? Which could be...

I mean, is likely to be just as damaging. So the rise of AI threatens to deepen existing economic equalities. So economic inequalities. Did I say inequalities? I think so. So existing economic inequalities. So again, talking about the gap between the rich and the poor.

the haves and the have-nots, the rise of AI is not going to balance this out. In fact, it's going to exacerbate it. It's likely to make this much worse.

prompting considerations for universal basic income and societal reformation. Universal basic income is essentially that everyone gets given a certain amount of money to live on. Everyone is given, even if they don't have work, everyone is provided with enough money to help them survive. This is the concept of universal basic income.

Because if you don't, then you will see large, large sections of society essentially falling into terrible levels of poverty as a result of the inequality, the economic inequality in society. So you essentially hold up

all of human civilization by just, you know, basically just giving them money. And it also makes sure that the money moves through the economy. So what we mean here is that when a small minority of people get basically all the money, like this is when at the end of a game of Monopoly, when your sister, who this time has won because she got all the orange properties, all the red properties, and she got the dark blue properties,

and the purple ones as well, why not? She won all, she's got all of that stuff, and in the end, she has everything, she has all the money, the game is over at that point. And it's the same in the economy. If you get to that situation where you've got your 0.1% who have everything, pretty much, they have all of the property, they've got all the money, they own all the businesses,

Society will not work. It will fall apart because you need the wheels to turn. You need the whole system to keep going. You need, you know, those business owners need customers to spend their money. You need liquidity in the system. You need the money to constantly be moving. And so it would necessitate, essentially, these people who've got everything going, all right, well, we'll give you loads of money.

Loads of money. It wouldn't be loads. We'll give you money just to keep the economy turning. It's a bit like if your sister wins a game of Monopoly and she goes, no, no, I don't want to stop. So I'll just give you loads more money. Every round, I'll give you 500 pounds just so we can keep playing the game. It's kind of like that. But Geoffrey Hinton, is that his name? He actually believes universal basic income is a good idea, ultimately.
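(For the programmers among you, here's a deliberately silly toy simulation of that Monopoly idea, with completely invented numbers, just to show why money has to keep circulating. The little "ubi" transfer is a stand-in for universal basic income; this is a sketch of the analogy, not a real economic model.)

```python
def simulate(rounds=1000, agents=10, ubi=0.0):
    """Toy economy: each round, every agent who still has money spends
    1 unit, and all of that spending flows to agent 0 (the monopolist).
    With ubi > 0, agent 0 is taxed each round and the proceeds are
    shared out equally, keeping money moving."""
    wealth = [100.0] * agents
    for round_number in range(rounds):
        spenders = [i for i in range(1, agents) if wealth[i] >= 1]
        if not spenders:  # nobody can spend any more: game over
            return round_number, wealth
        for i in spenders:  # all spending lands with the monopolist
            wealth[i] -= 1
            wealth[0] += 1
        if ubi > 0:  # redistribute so the game can continue
            wealth[0] -= ubi * (agents - 1)
            for i in range(1, agents):
                wealth[i] += ubi
    return rounds, wealth

print(simulate(ubi=0.0))  # stops after 100 rounds: monopolist has it all
print(simulate(ubi=1.0))  # still going after 1000 rounds: money circulates
```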

So we were talking about prompting considerations for universal basic income and societal reformation. Reformation meaning reforming society, changing society in some important big way. Job security. While many intellectual jobs may vanish...

Right. That's a lot of them, including English teaching. Quite, quite possibly. Although, what do you think? Would you? Could you really? When will you give up Luke's English Podcast for the latest AI-driven show, which you can create yourself? You know, that's the future of all of this stuff. That instead of having to wait for the next episode of Luke's English Podcast and instead of having to

essentially accept what I give you, you could probably just go into some podcast making software. I mean, you can do it already. You go into some podcast making software and just type in, can you make an episode of Luke's English Podcast about the subject of, I don't know, donuts and make it 30 minutes long to fit my morning commute to work. Okay. And include a joke every 45 seconds and it will do it. And it'll be exactly what you wanted.

And you could just do that every single time. And then after maybe the first 10 times you've generated them, you just say to it, just keep making episodes like this. You know, the kind of thing I like, just go on and it'll keep doing it. And then there's that. Or will you continue to listen to the real Luke's English Podcast? Which one? So, I mean, that's just an example of how my job as a podcaster could vanish. My job as an English teacher, of course, could vanish.

I mean, it's already under threat from AI language models. I mean, I use it now as a teaching assistant, but at some point that teaching assistant is going to get to a stage where it can just replace me completely. But, you know, could you, would you, would you, would you reject me that easily? Anyway, while many intellectual jobs may vanish, practical skills like plumbing, which is what I talked about before, fixing, repairing water pipes,

in a home or in the street, something like that, will likely remain safe, emphasising the need for future job readiness. Readiness means being ready. Okay, so that's again an overview of the things that were said in the interview. We're going to go through the interview in a bit and look at the vocab and stuff. But at this point, I would like to ask you, what do you think

What do you think of all this? What's your reaction? Now, when you watch a video like this with an expert giving serious warnings or just listen to me rambling on about it all, what is your reaction? And you could write your reaction in the comments section. Now, maybe you feel like you want to make criticisms like these ones. And I'm about to read out some comments which are based on things I read in the comments section under the BBC video.

So maybe you want to make criticisms like this. Maybe you want to say things like this. Maybe he, maybe this expert should have done something about this earlier when he had the chance. He's the godfather of AI. He helped to develop all this. Maybe he should have thought about this and done something about it when he had the chance. Maybe you criticize this expert.

Or maybe you'll say something like this. As a former Google employee, he's a mainstream globalist, blah, blah, blah, blah, blah, something like that. You'd probably get triggered by the fact he works for Google and you might go off on some rant about him being whatever, some globalist thing. I don't know. I don't know. Do you know? Are you triggered by any of it?

Or maybe you're one of those sorts of people who sort of says, you know what, I think it's all overblown nonsense. It's all alarmist nonsense. There's nothing to be afraid of.

Or something like, new technology is always disruptive, but it's all right. Everyone was worried when the printing press was invented. Everyone was concerned when the radio was invented, when manufacturing systems were invented, the Luddites smashed the machinery. You know, I don't know.

Do you think like that or are you more like me? Personally, I take it all seriously and just accept it for what it is, an expert giving his expert opinions. But I personally don't quite know what to do about it all myself.

Anyway, let's take a closer look by studying the transcript of the interview and I can comment and explain some language. So we're going to go back up to the top of the transcript now and I'm going to fly through it and focus on those highlighted bits of vocabulary. Okay, and just focus on clarifying those things.

You know, as a busy mom, there are a few ways you can build strong muscles. You could get a gym membership, which you'll never use, buy all sorts of expensive equipment for your garage that you'll forget you have, pay for a personal trainer that you'll never have time to meet with, and buy a fitness watch that only makes you sad every time you look at it.

Ryan Reynolds here from Mint Mobile. With the price of just about everything going up during inflation, we thought we'd bring our prices down.

So to help us, we brought in a reverse auctioneer, which is apparently a thing. Mint Mobile Unlimited Premium Wireless. How about you get 30, 30, how about you get 30, how about you get 20, 20, 20, how about you get 20, 20, how about you get 15, 15, 15, 15, just 15 bucks a month? Sold! Give it a try at mintmobile.com slash switch. $45 upfront payment equivalent to $15 per month. New customers on first three-month plan only. Taxes and fees extra. Speeds slower above 40 gigabytes. See details.

So, the interviewer said, I began by asking Geoffrey Hinton whether he thought the world is getting to grips with the issue of AI. So if you get to grips with something, it basically means you get it under control. Okay, to get it under control. For example, I need to get to grips with this. What would it be? I need to really, I really need to get to grips. So let's say someone has introduced a new computer system at work.

and you need to understand how it works and really get on top of it, you might say, I really need to get to grips with this new system. I'm just going to spend a couple of hours this afternoon just working it out. I really need to get to grips with it. Now, you can imagine that this expression involves, well, you just don't imagine it, you can just remember that it involves the word grip, which means to hold something in your hands without it slipping.

So if you hold something, you get it under control, right? So that means getting to grips with something. I'm trying to think of another example. Let's say you buy a new laptop or you've got a new phone and you don't understand the operating system. So you need to spend some time getting to grips with it, meaning getting it all under control, understanding how it works and being able to use it. You get to grips with some homework that you've been given.

Meaning you read the task and you try and understand what you've got to do. You're preparing for the IELTS test. You really need to get to grips with the test. Meaning understand how it works and get it all under your control. In this case, is the world getting to grips with the issue of AI? Are we getting it all under control? Do we understand it? Can we control it? Okay. And then we've got...

And from your perspective, from having worked at the top of this, meaning at the top of the artificial intelligence movement, having developed some of the theories underpinning all of this explosion in AI. So the theories underpinning the explosion in AI. If something underpins something, it basically kind of works as a foundation. I guess this must come from...

making clothes, making clothing, tailoring clothes, that when you, let's say if you're making a shirt, you would use pins, metal pins, underneath parts of the shirt, you underpin the shirt, holding the different parts of the shirt in place, before you then stitch the shirt together. So that's underpinning a shirt, it's kind of like the connections or the

parts that hold the whole thing together. Mm-hmm. It's a bit like a foundation for a building.

So you have ideas underpin something. For example, what are the ideas and concepts that underpin my approach to doing this podcast? A lot of it's based on, you know, the training that I've received as an English teacher over the years. So the idea that language should be presented in context, but also stuff I've learned as a teacher that it helps when

The language learning process is personalised, both in the way that you consume language, so it's easier to essentially acquire language when it's presented to you in a personal way rather than an impersonal way,

And plenty of other things as well. These are ideas that underpin my approach to doing this podcast. So when we talk about theories or ideas underpinning something, it means the things that form the underlying structure that defines the way that something works, like the foundations of something. Okay, so the interviewee developed some of the theories that underpin something.

this explosion in AI. Okay, so Geoffrey Hinton said, "Some people think these things don't really understand." Meaning some people think that these AI systems don't really understand things. That they're very different from us, that they're using some statistical tricks. So statistical refers to the word statistics. Statistics is basically like data, either quantitative data, just numbers, or qualitative data, more values,

you know, different types of data, information. This is statistics, right? So some people believe that these AI systems are basically just using statistical tricks, like, for example, maybe when they produce language, that they are just producing sentences based on, like, frequency.

For example, well, after this word, the most common next word is this, so I'll just add this. So they're just using statistical tricks to perform these acts of intelligence. But Geoffrey disagrees. He thinks they're not just using tricks, that there is actual processing going on that is very similar to the way that the human mind processes things, which is really interesting.
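(Just to illustrate what a pure "statistical trick" would look like, here's a tiny sketch in Python: a model that only ever picks the most frequent next word. Real systems like ChatGPT do vastly more than this; the sketch is only here to show the crude thing Geoffrey says they are not merely doing. The text and word choices are invented for the example.)

```python
from collections import Counter, defaultdict

# Count which word most often follows which, then always pick the most
# frequent follower. This is the crude "statistical trick" version of
# producing language, shown here only as a contrast.
text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat saw the dog").split()

next_words = defaultdict(Counter)
for current, following in zip(text, text[1:]):
    next_words[current][following] += 1

word = "the"
sentence = [word]
for _ in range(6):
    word = next_words[word].most_common(1)[0][0]  # most frequent next word
    sentence.append(word)

print(" ".join(sentence))  # quickly gets stuck in a repetitive loop
```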

Again, we're talking about science fiction, because you think of films like Blade Runner. And the whole idea of Blade Runner is, do synthetic humans or androids actually have emotions? They are so similar to humans. They even have memories which have been implanted.

Their emotions are just as powerful as the emotions that humans feel. So what's the difference then really between a synthetic person who is experiencing emotions in the same way that a human does, a synthetic person that is essentially thinking and operating in exactly the same way that a human does? What's the difference between that and a human then really? And don't those synthetics, those replicants, don't they have...

as a result, somehow the same rights as humans? In fact, aren't they even more than human? There's that famous speech at the end of Blade Runner where Rutger Hauer is saying to Harrison Ford's character, I've seen things that you people could never understand. I have insights about the universe that you couldn't comprehend. I'm more than a human. I'm

Greater than a human. I am more human than human. Just an interesting idea, that maybe these AIs, maybe they should have rights, which is again a terrifying thought. Because if any of you have seen The Animatrix, which is an animated series of short films set in The Matrix universe, one of the stories in The Animatrix tells the story of how the war between the machines and the humans happened.

And a key moment in it is a legal case where it's decided in the courts that artificial intelligences don't have the same rights as humans. It was an important, groundbreaking case in law stating that these intelligences don't have the same kinds of rights as humans. And this was the basis upon which the AI felt there was an injustice, and that was like the start of a big conflict, which ultimately led to a horrific war between the humans and the AI, which the AI won. So should AI be considered similar to humans? How do you make the distinction, ultimately, when they think just like us?

Anyway, it's a philosophical question, that. But anyway, Geoffrey says that they are... People think that these systems don't really think. They don't really understand things. They just use statistical tricks. But he's arguing that their intelligence is far more sophisticated and human-like than we realise. The interviewer said, what is it that's triggered your concern? I've already used the word triggered in this episode. To be triggered...

On the internet is when you see something or read something that makes you angry, you know, and it's often something that sort of angers you on a political level, or maybe you get offended by something.

So that's one use of the word triggered. But really, triggered just means to trigger something. It's just to cause something to happen. Like in a gun, right? In the gun, you've got the barrel of the gun where the bullet goes down. You've got the handle, which you hold in your hand. And then there's the trigger, which is the thing that your finger pulls, which causes the mechanism to fire the bullet from the gun.

So that's to trigger the gun. A trigger, essentially, then, is a button that causes a reaction. And in this case, as a verb, to trigger something is exactly that, to cause something to happen. So in this case, what's triggered your concern, meaning what has caused you to be concerned like this. So you can imagine a finger pulling the trigger and the trigger,

Let's say the trigger triggers the mechanism of the gun. Also, something might trigger Geoffrey to have these concerns. What was it that triggered his concern? It was a combination of two things, playing with GPT-4 chatbots and also something else.

understanding the way in which these things are a superior form of intelligence, understanding the way that these AI systems are essentially multi-neural networks, something like that. So when he got a more profound understanding of the way that they work, that's what triggered him to be concerned.

He said they're clearly very competent. If something or someone is competent, it means they are capable of doing things. They're good at doing things. So, for example, if you are running a business, you want staff who are competent, meaning staff who can do things and they can do things well. What do you think of Sarah? I think she's very competent. I think that, you know, she's definitely one of the staff members that we need to hold on to. I think she's a very competent teacher.

She's very good at problem solving. She plans her lessons very carefully. I've observed her in lessons a few times. And she's a very competent teacher. She knows her grammar. She's able to manage the classes very effectively. Very competent. What do you think of Bill? Well, Bill, on the other hand, is a different story. First of all, he's always late. Secondly...

I observed him last week and watched him writing on the board, and his spelling is atrocious. He can't spell, he doesn't know the grammar, he doesn't really seem to care about the students, and in fact he doesn't show any signs of actually knowing what he's doing at all. He's completely rubbish. He's a useless teacher. He is, I would say, completely incompetent. He can't use the photocopier, he doesn't know how to say good morning, he rarely dresses appropriately for the job,

He's a slob, he's rude, he's lazy and he's frequently absent. So he's perhaps one of the most incompetent teachers I've ever met. So I think that we should probably... So what do you think? I think we should keep Bill because he's a man. What?! This situation became suddenly very sexist. Just a little example of what happens in the patriarchal workplace.

Anyway, obviously Sarah would be the competent one. She would be the one that you would keep. And Bill sounds like a total disaster, and you've got to let him go, haven't you? Because he seems incompetent. Geoffrey was saying that the AI systems are clearly very competent. They're very good at what they do. So he talks about making many copies of the same neural network. So neural refers to...

the systems in the brain, right, the connections in the brain. A neural network is essentially like a brain, right, a system of connections. And he's saying that one of the reasons why AI is more sophisticated and advanced than humans is that each individual human has just one neural network,

meaning a brain, right? And the fact is that these brains are not connected. And we actually find it very difficult to share the knowledge we have in our brains. We have all these complex difficulties with social communication, and it's all a very inefficient system. You know, humans have to go to university to learn, and they take three years to learn about just one subject.

But what Geoffrey is saying is that these things are a superior form of intelligence because you can make many copies of the same neural network. Each copy can look at a different bit of data, and then they can all share what they've learned. It's like having 10,000 people studying 10,000 subjects. They all go off and learn this stuff very, very quickly, and then you can bring all these people together, all these neural networks; you can connect them all together

and they instantly share everything that they know, everything that they've learned, and then each individual person in that group of 10,000 knows 10,000 people's worth of information. Okay? So that's why he was saying that they are superior, because you can make many copies of the same neural network. Essentially, they can share information with each other much more efficiently and effectively than humans can.
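By the way, for any technically minded listeners: this idea of many copies of the same network each learning from a different bit of data and then pooling what they've learned is roughly what machine learning engineers call data-parallel training. Here's a minimal sketch of that idea in Python; all the names and numbers are invented for illustration, not taken from the interview:

```python
# A minimal sketch of "many copies share what they learn":
# each copy of the same tiny model computes a weight update from its own
# bit of data, and then all the updates are averaged into the shared weights.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 1))  # one shared set of weights for a tiny linear model y = x @ w

def gradient(w, x, y):
    """Gradient of mean squared error for the tiny linear model."""
    error = x @ w - y
    return 2 * x.T @ error / len(x)

# Ten "copies" each look at a different bit of data (random here, for illustration).
shards = [(rng.normal(size=(32, 4)), rng.normal(size=(32, 1))) for _ in range(10)]

for step in range(100):
    grads = [gradient(w, x, y) for x, y in shards]  # each copy learns separately...
    w -= 0.01 * np.mean(grads, axis=0)              # ...then one averaged update is shared by all
```

That averaging step is the "instant sharing" Geoffrey is describing: every copy ends up knowing what all the other copies have learned, which is something human brains can't do.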

So the interviewer says, "The key concern here is that it could exceed human intelligence." And Geoffrey says, "Indeed, massively exceed." So we've had "exceed" before, right? Meaning to go beyond something, to become greater than it. And massively exceed means to go far, far beyond it, right?

He then says, very few of the experts are in doubt about that. Very few of the experts are in doubt about that. If you are in doubt about something, it means you're not sure about it, right? I'm in doubt about it. Experts are in doubt about that. Very few of the experts are in doubt about that, meaning most of them are sure about it. Most of them agree with each other and they're sure that AI will massively exceed human intelligence.

All right. Geoffrey went on to say that some people believe that AI will be subservient to us, that we'll give them the goals, it'll be no problem, they'll do what we say. They'll be subservient to us. If someone is subservient to someone else, essentially they take a lower position and they allow...

their masters to tell them what to do. So a servant takes a subservient position to his or her master, right? For example, you can say, make me a cup of tea. And they'll say, certainly, sir, would you like two sugars as usual? You know, and they take a subservient position. They'll be subservient to us.

Right. In a lot of cultures, traditionally, women are subservient to men, right, in relationships and things like that. That's not so much the case anymore in many societies. But anyway, I'm just explaining the word. So many people think that AIs will be subservient to us. But then there are other experts who think that they will definitely take control.

And then he says, given this big spectrum of opinions. So given is quite a nice word. We use it at the beginning of sentences. Given this big spectrum of opinions, I think it's wise to be cautious. So given is a bit like saying because there are or because of this. Right. Taking this into account. Given this big spectrum of opinions, I think it's wise to be cautious. Think of some other examples of given.

You might say, given that, and then a clause, given that the sun is shining today, I think a t-shirt would be sufficient. Given the sunshine, I think a t-shirt would be sufficient. All right. Given this big spectrum of opinions, I think it's wise to be cautious because we have, because there are, yeah. Let's see if I can get a better explanation of that.

Here's a bit more information about it. So, given: taking something into account or considering a particular factor. For example, given the circumstances, I think we made the right decision. So, taking into account the circumstances, right? Because of...

Because of the circumstances or thinking about the circumstances, because the circumstances are the way they are, I think we made the right decision. So this word is often used to introduce conditions or facts that influence a conclusion or decision. Given the nature of the situation, I think we need to be careful. It's like saying because the nature of the situation is...

Difficult. I think we need to be careful. Given this... So it could be given plus a noun, like given this big spectrum of opinions, or given plus a clause with that: given that there are many different opinions, I think it's wise to be cautious. So anyway, he's saying that because there are lots of different opinions, a big spectrum of opinions, a wide variety of opinions...

He said, I think it's wise to be cautious. So we've got opinions that say, it'll be fine. They'll do what we tell them to do. They'll just be subservient to us. And then there are other opinions on the other side that say, no, actually, they will definitely try to take control.

And so, considering we've got this wide spectrum, it's probably a good idea to be cautious because we've got at least, I don't know, what is it, 15 to 50% of opinion saying that this could be very dangerous. So it's probably a good idea to be cautious. The interviewer says, could they not be contained in certain areas like scientific research?

So to be contained, meaning limited, kept in one space. Like, for example, you know, animals in the zoo. A lion is contained within its cage, sadly, right? To be contained, meaning kept in one area. So could AI not be contained in certain areas like scientific research? Meaning, could we not just restrict its use to certain areas, right?

but not, for example, the armed forces. So if we could just limit it to scientific use, but not allow it to be used in military situations. The armed forces, this means the army, basically the army, the navy, the air force. These are the armed forces. And Geoffrey said, maybe, but actually if you look at all the current legislation, so legislation means laws,

So we've had regulations, laws, legislation. They're all kind of the same thing, right? Regulations are rules which seek to control things, and they're often made by governments or by something like the European Union, which issues regulations as a way of, you know,

creating frameworks, legal frameworks to control things, tax regulations and so on. Legislation specifically means laws. Legislation is uncountable; laws is countable, it's plural. So if you look at the current laws, or in general the current legislation,

including the European legislation, there's a little clause in all of it that says that none of this applies to military applications. A little clause. A clause is a line in a legal document, in an agreement, in a contract, or in a piece of legislation. A single line

which might provide a right or take a right away, or allow something to be possible or not allowed. So, a clause. When you're reading a contract, you've got to read every single clause. It might say, AI must be contained within these limitations, blah, blah, blah, and then you might find that clause 14.1.3 says, except in the case of military applications. That's a little clause which means that none of this applies to military applications or military uses: military situations, wars, fighting, the actions of armed forces.

Governments aren't willing to restrict their own uses of it. If governments aren't willing, meaning they don't want to, they just don't want to do it. They're not willing to do it. This is a phrase we use all the time in business situations when we're doing negotiations. For example: would you be willing to provide a discount of 20% if we order over 3,000 units?

We'd be willing to give a discount, but I think 20% is a little bit more than we imagined. Would you be willing to go to 15%? I think 15% is more realistic. You know, that kind of thing. It's basically: what do you want to do? Are you willing to do this? Are you willing to do that? Governments are not willing to. It's more appropriate language for this kind of subject, saying willing to do this, willing to do that, or not willing to, rather than saying want to. You know, we don't say governments don't want to restrict their uses of it.

We say governments aren't willing to restrict their own uses of it. So it's a slightly more formal, more serious way of essentially saying want to or don't want to, willing to. We also say governments aren't prepared to, which is another way of saying that they just don't want to. So governments aren't willing to restrict their own uses of it for defence. Restrict meaning limit it. Moving on.

The interviewer says, that's happened since you started warning about AI. Is that the sort of pathway you're concerned about? So the interviewer asks about the way things are going to go in the future. He said, is that the sort of pathway that you're concerned about? I suppose this is fairly clear, isn't it, really? You think of a pathway as somewhere you would walk down, for example, a pathway that takes you down to the bottom of the garden, or a pathway that takes you through a field somewhere,

or something like that. So a pathway is just a way in which things can move. So he's saying, is that the sort of way that you're concerned about? Meaning the way that this could develop. Is that the kind of pathway, the kind of road that you're concerned about? And he said, I mean, it's the thin end of the wedge. This is quite a nice idiom, the thin end of the wedge. So a wedge would be a kind of...

like a piece of wood, okay, that is thin at one end and thick at the other end. Typically, you stick a wedge under a door and the door can't close, or you sort of wedge a door open, right? So a wedge is a piece of wood that's thin at one end and thick at the other end, a kind of triangular, slanted piece of wood. That's a wedge, right?

That's what you'd stick under a door. But now that you know what a wedge is, let's talk about the thin end of the wedge. It's a bit like the tip of the iceberg; it's a similar expression. The thin end of the wedge means you've got a situation, that's the wedge in this case, and at one end you've got the thin end of the wedge, and at the other end the thick end of the wedge. So, talking about artificial intelligence being used...

in military situations, for example, weapons systems that can target many different targets all at the same time, allowing governments to send missiles out to lots of different targets all at the same time. Is that the kind of thing that you're worried about? He said that's the thin end of the wedge.

Meaning in terms of the situation of AI controlling military technology, that sort of example is just the thin end of the wedge. Meaning that's just the...

the smaller, maybe less serious end of the situation. And at the other end, there is a much bigger way of understanding it. So that means that there could be much bigger uses of AI in warfare. It could be much more sophisticated. It could be a much larger threat.

So you've got the thin end of the wedge, which is like some uses of AI in conflicts, and then the thick end of the wedge, which would be a much bigger, much larger, much more sophisticated use of AI. OK, so the thin end of the wedge is a minor change that could lead to significant and undesirable consequences in the future. OK, so he's talking about a small thing that could become a big thing.

For example, allowing this exception could be the thin end of the wedge for more serious abuses of power. This idiom is often used in debates to warn that small actions or changes could escalate into bigger issues. Okay, a bit like a slippery slope kind of situation. So he's saying that these uses of AI are just like the small end and that ultimately they could become much bigger later on. It's the thin end of the wedge.

What I'm most concerned about is when these things can autonomously make the decision to kill people. So we've talked about autonomously, meaning that they do it on their own. They do it automatically, autonomously, on their own, without being told to do it. So robot soldiers, says the interviewer. And Geoffrey says, yeah, those, or drones and the like. So, and the like

means 'and things like that'. So, 'the like' means 'things like that'. 'Robot soldiers', yes, those, or 'drones and the like', meaning 'drones and things like that'. Remember, you can get a full list of all this vocabulary that I'm describing here on the PDF that I've mentioned already,

with definitions, examples and comments as well in a long list. So I do recommend that you get the PDF. It's completely free. You don't need to give me your email address or anything like that. You can just download it directly, but it might be useful.

as a way of helping you to consolidate a lot of the vocab that I'm talking to you about here. Let's move on. If we restrain ourselves from military use in the G7 advanced democracies, what's going on in China? What's going on in Russia? Shout out to my Chinese and Russian listeners who might feel a little bit triggered by this particular moment.

Anyway, if we restrain ourselves, if you restrain yourself from doing something or you restrain yourself from something, it means you hold yourself back and you're like, no, I won't do that. I think I won't use AI in military applications. So if we restrain ourselves, what about our competitors? Will they restrain themselves? No, they won't.

So that means we can't restrain ourselves from doing that. The interviewer said, do you have any sense of whether the shackles are off in a place like Russia? So shackles are things that restrain you or restrain someone. They are metal chains, basically. A bit like when someone is in prison,

They might have metal rings around their ankles attached to chains which are then attached to the wall. These are shackles. And these shackles would certainly restrain you from doing something. So the idiom "the shackles are off" means that we're talking about an unrestrained thing, something that's unrestrained or where people are not holding themselves back.

So do you have any sense of whether the shackles are off in a place like Russia? Means, do you think that in a place like Russia, they are not restraining themselves? And in fact, they are really trying to develop AI systems, especially for military applications. And they're not holding themselves back. They're not limiting themselves. They're really trying to innovate in this area. Do you have any sense of whether the shackles are off?

meaning that they are not restraining themselves. And he said, well, Putin said some years ago that whoever controls AI controls the world. So I imagine they're working very hard. So basically saying, I expect the shackles are off. Yes. The West is probably well ahead of them in research. Well ahead. So they're ahead of them, meaning they are in a forward position. The opposite of that would be they're behind them and well ahead of them.

well ahead of them, meaning much further forward in research. We're probably slightly ahead of China. So we've got well ahead of them, slightly ahead of them, right? The interviewer says, it sounds very theoretical, this thread of argument. That's just an interesting collocation. We talk about a thread of argument or a kind of line of argument. We actually say a line of inquiry.

series of questions. A thread of argument is a series of statements that make up an argument.

position. So we talk about a thread of arguments, quite a neat phrase because a thread suggests a line, you know, again going back to making clothes, making a shirt. You use a thread, you use a needle and thread to, you know, stitch the parts of the clothing together and you thread the needle through different parts of the clothes leaving this like, you know, quite a neat line that moves in and out.

So that's a thread of argument, a sophisticated way of putting ideas together to create an argument. So, it sounds theoretical, this thread of argument, but you really are quite worried about extinction-level events. Extinction for the human race would mean the human race gets wiped out and there are no humans left. That sounds a little bit...

alarmist, doesn't it? It's probably unlikely to result in extinction level events, I would imagine. I mean, just speaking personally, probably what would be more likely is that you'd end up with maybe a large loss of human life, like we've seen in the past in global wars, but where maybe a certain percentage of the human race would survive, and that would probably be the richer ones.

Anyway, are you quite worried about extinction-level events? Events at a level that could cause the extinction of the human race. We talk about the extinction of animals. The dinosaurs are extinct. An extinction-level event for the dinosaurs was the impact of an asteroid that hit the Earth, causing

all sorts of environmental challenges that most of the dinosaurs couldn't deal with, and they became extinct. Okay. And Geoffrey said, we should distinguish these different risks. So to distinguish things means to recognise or show the difference between them. We should distinguish these different risks: show that this risk is different from that risk.

So he says that the risk of using AI for autonomous lethal weapons doesn't depend on AI being smarter than us. So there's the use of AI for weaponry by humans, and then there's another risk, which is AI on its own autonomously choosing to attack the human race. So these are two different risks, and he says that he's worried about both. So one risk is just humans using AI to kill other humans.

And the other one is like AI going rogue. So he uses this expression, the risk that AI itself will go rogue and try to take over. So to go rogue means to kind of, hmm, have you seen Mission Impossible? The Mission Impossible films? There's one called Rogue Nation, right? Mission Impossible Rogue Nation.

Also, it makes me think of, because, you know, I understand all issues through the context of pop culture movie plot lines, the Bourne Identity films with Matt Damon. In those films, he goes rogue, right? So he was, spoiler alert, a specially trained soldier, but he goes rogue, meaning he goes off and starts making his own independent decisions.

Right? He no longer follows the orders or commands that he's been given. He goes off and starts making his own decisions.

And he's no longer controlled or subservient to his former military bosses. He goes rogue. Same thing with Tom Cruise in some of those Mission Impossible films, where he realizes that his bosses at the CIA have been compromised and they're now working for the enemy. And he realizes that he has to just go rogue. He's going to have to go off on his own and make his own decisions and

And in fact, his bosses, his former bosses are trying to catch him. Right? Ethan Hunt, he's gone rogue. Dun, dun, dun, dun, dun, dun, dun, dun, dun. And then he jumps off a building and breaks his ankle. You know, classic Tom Cruise stuff. Then he runs really fast in some situation. Um...

So Geoffrey is worried that AI will actually go rogue and try to take over, meaning take control: take control of the world, take over the world. And so, at this point, before it's more intelligent than us, we should be putting huge resources into seeing whether we're going to be able to control it. So to put resources into something means to invest money, to invest time, to invest

different things into trying to understand if we can control it. So we should be putting a lot of resources into this:

putting a lot of people into working on it, spending a lot of money on it. Resources means things that you can use to get something done. It could be money, it could be people, it could be infrastructure. Whatever it is, we should be putting these resources into understanding if we can control AI, to prevent it from going rogue and trying to take over the world. So basically, we need to set up the Avengers, and we need to do that really fast. So if someone could call Tony Stark...

That would be good. Oh, he doesn't exist? Oh dear. I don't know what we're going to do then. Yes, I'm very worried about AI taking over lots of mundane jobs. Mundane jobs are like boring jobs that, you know, people have to do, but they don't really want to do.

like kind of data entry jobs. I mean, you might have a mundane job and sure, it might be mundane. It might be boring and always the same, which is what mundane means. But it might also be pretty important for you. Maybe you don't enjoy your job, but you need it in order to get your money to pay the rent. So he's saying, I'm very worried about AI taking over lots of mundane jobs.

And he does then say that it should be a good thing, meaning that humans wouldn't need to do those boring tasks anymore. It would be a good thing if the benefits of AI were spread equally. But that's not the case in our world as it is today. Those people would lose their jobs, the money would just go to the business owners or the rich people, and the people who'd lost their jobs would just lose out. So it wouldn't be a good thing, in fact.

He said that AI replacing people's jobs is going to lead to a big increase in productivity. So productivity means being able to do a lot of work, producing or manufacturing a lot of products,

Or at least being able to do a lot in terms of services, work-related services. So for me, productivity means making podcast episodes. And actually, AI does help me with my productivity. It has led to an increase in my productivity because it helps me to create lessons because I use it as a way of generating English, which I can then adapt and turn into podcast episodes.

Okay, so being productive, doing work. So AI is going to lead to a big increase in productivity, which leads to a big increase in wealth. So if we talk about wealth, we talk about money, owning money. Being wealthy means being rich, basically. So an increase in wealth means more people having more money, an increase in people being rich, basically.

And if that wealth was equally distributed, meaning if everyone got an equal share, if it was distributed, if it was passed out, given out into society, if everyone got an equal amount, that would be great. But it's not going to be equally distributed because that's not the way the world works. We have a very unequal system.

So it's going to increase the gap between rich and poor. We've talked about that, and it increases the chances of right-wing populists being elected. So these are political figures, usually on the right-wing end of the spectrum, who essentially use basic, emotive, reactionary human opinions as a way of manufacturing support

for their own essentially corrupt and probably unethical and dangerous policies, right? That's what we mean by right-wing populists. It's a form of manipulative politics that uses issues...

that cause people to have knee-jerk reactions. For example, the previous UK government spent a lot of their time and energy talking about how they were going to stop illegal immigration, because they had worked out that this was the issue that a certain number of people in society were very emotionally invested in. The idea of small boats arriving on the shores of the country every day, and that it's

illegal immigrants who are the cause of all of the problems in society, not the fact that the government are giving all of the money to their super-rich friends. It's the small groups of desperate people coming to the country to get help; these are the ones we need to focus our energy on. And so they would spend massive amounts of time doing these performative acts of

policy making, which wasn't even real policy making, just in order to get support, so that they could guarantee that they keep power, so that they could then just keep doing the things that they were doing, which were often, you know, corrupt business dealings and other things of that nature. Populism, okay? So he's basically saying, when you get a bigger gap between rich and poor, you get more and more right-wing populists in power, because essentially

most people, the people who are getting less, feel disenfranchised. They feel angry, and then the politicians in power use that anger and direct it towards minority groups or other easy targets, and it's all just an effort to whip up emotion which they can use to maintain power. So this is obviously a very pessimistic view: that AI will allow...

the rich to get richer, and it will allow them to manipulate the public more and more, leading to a sort of controlled state, which is also frightening, to be fair. The interviewer says, so to be clear, you think that the societal impacts from the changes in jobs... So, societal impacts. I think you know what impacts are: strong effects. What will be the impacts of this technology on society? Impacts on society are

what kind of impacts? Societal impacts. So societal is the adjective for society. So you could say, what will be the impacts on society? Or, what will be the societal impacts? Can you say it? What will be the societal impacts from the changes in jobs? These things could be so profound, meaning so deep, so significant and so heavy, that

we may need to rethink the politics of the benefit system. To rethink, meaning think about it again, find a new way of thinking about it, to rethink it. We need to rethink the politics of the benefit system. The benefit system is a system that's in place so that governments provide benefits to people in society who need help. When we say benefits, we mean things like

childcare benefit, this is money to help families pay for food for their children or pay for childcare. Pensions are a form of benefits, so this is old people who are too old to work and they're given a pension, which is a certain amount of money every month or every year to help them live. If you don't have a job, you might have unemployment benefit of some kind, which is money that you're given to support you while you find a job.

And so this is the benefit system. It basically is there to support people who need support. So maybe we need to rethink the benefit system

And maybe we need to rethink inequality. So this is basically to look again at the inequality, the lack of equality that we have, and think about a new way of dealing with it. And he says, absolutely, which leads us on to universal basic income, which I mentioned before: this idea that everyone could be given

a basic income, a basic amount of money every month, which they could use to just, you know, pay for the basic requirements of life as a way of preventing large sections of society dropping into poverty.

And this is a thing that people are talking about more and more. And it could be a sort of solution to this huge inequality that we've got in society, basically providing people with money. It's quite a controversial one when you think about it, because, you know, you just think, what, we're just going to give people money for doing nothing just to prevent them from falling into poverty? And, you know, some people are of the opinion this is a very bad thing to do, that it encourages a culture of laziness and stuff like that. And other people are saying, well, actually, there's no other solution. We have to...

essentially provide people with a basic living wage even if they're not working because otherwise millions of people will live in poverty, millions of people will starve to death because that's the situation we're living in. So what do you want? That solves the problem of them starving, meaning dying because they've got no food and not being able to pay the rent. The rent is the money that you need to pay for your accommodation if you don't own your own house.

If you're renting an apartment, you have to pay the rent every month to your landlord if you're a tenant. So this could help prevent people from starving or prevent people from being unable to pay the rent. And if you can't pay the rent, you will be evicted from your apartment and then you're on the street. So without universal basic income, we could see people starving to death on the streets, you know.

So, you know, maybe universal basic income is a thing that we need to set up in society. So the government needs to get involved, right? Get involved. If you're involved, it means you are part of what is happening, right? You're implicated, although implicated in English sounds like there's a negative sense to it.

You'd be implicated in a crime. But no, to be involved, meaning to be part of what's happening. The government needs to get involved, means the government needs to step in and maybe try and control the situation somewhat by, you know, setting up universal basic income and looking after people who need support.

Geoffrey says, that's not really the way we do things in Britain. We tend to stand back and let the economy decide. A bit like in America, where they have slightly more of a small-government approach. Certainly, the kind of conservative model in the UK is that we don't like a big-government, nanny-state situation where the government is dictating what happens all the time. There is a feeling in the UK that that's not preferable, that we prefer to

let the economy decide the winners and the losers. You know, we have that more free-market economy situation. Geoffrey says, I was consulted by people in Downing Street. Downing Street, this is where the Prime Minister is based. So if you talk about Downing Street, it's a bit like in America when they talk about the White House or Capitol Hill. Downing Street is where the Prime Minister is. So, I was consulted by people in Downing Street means that people in the government

asked for his advice. He was consulted; he worked as a consultant; he gave his advice. And he advised them that universal basic income was a good idea, but it's not a perfect idea, because you can provide people with money, and that solves that problem, but people don't have jobs just for money. They also have jobs for a sense of self-worth, and taking people's jobs away

can also remove their sense of self-worth. So we might be in a situation where people still need to do things as a way of preventing them from just losing their sense of purpose. The interviewer said, "Are you more certain that this is going to have to be addressed in the next five years?"

So to address a problem is to deal with a problem. That this is going to have to be dealt with in the next five years. This is going to have to be managed in the next five years. It's going to have to be addressed in the next five years, in the next parliament, perhaps. That means in the next...

in the next government. So, in the UK, a government has a period of five years, and after that there's a general election, all the members of parliament get re-elected or whatever, and you get a different set-up, a different parliament. So he's saying that maybe this is going to be directly addressed in the next parliament, you know, in about five years' time, maybe.

And he said, probably between five and 20 years from now, this will have to be addressed. We will have to confront the problem of AI trying to take over. To confront it means to come face to face with it and deal with this problem. OK, a bit like at school, if you're getting bullied, if there's a kid at school who is always teasing you, stealing your pocket money, hitting you around the back of the head and generally making your life a misery.

you might have to confront that bully, stand up to them, face them, you know? Similarly, between five and 20 years from now, we will probably have to confront the problem of AI trying to take over. It's so serious, isn't it? Can't believe it really, that relatively soon, we're going to have to come face to face with the issue of AI trying to take over the world. Is it even real? Am I really talking about this? I am actually. Um,

Are you impressed by the efforts of governments to try and rein this in? To rein something in means to get something under control, to get to grips with it. Rein, R-E-I-N, not R-E-I-G-N like the reign of a king. This refers to horses. Reins are the leather straps that you use to hold onto a horse's head. You hold the reins. When you're riding a horse, you hold the reins in your hand, and you can use them to turn the horse's head left and right and so on.

So to rein something in is to get something under control, a bit like when you hold the reins of a horse. So are you impressed by the efforts of governments to rein this in? Are you impressed by the efforts of governments to try to get this under control? And he gives his answer, blah, blah, blah. He says, I'm unimpressed by the fact that most of the regulations have no teeth. So he's talking about the fact that most of the laws relating to this just don't have any real power. They have no teeth.

Right. They don't have any real power. That's what that expression means. Do you think that the tech companies are letting down their guard? If you let down your guard... So in boxing, right, you hold up your hands and you punch

with your fists, but you've got to hold your guard up as well, which is your other hand or even both your hands protecting your head or protecting your body. This is your guard. And if you let your guard down, it means your hands go down and suddenly you don't have any defenses anymore and you can easily be punched by your opponent. So do you think that the tech companies are letting down their guard on safety because they need to be the winner in this race for AI?

So basically, are tech companies in their efforts to try to be the winner in this kind of arms race for AI? Are the tech companies not thinking about safety? Are they letting down their guard on safety? Meaning, are they allowing AI to develop in dangerous ways because they're trying to race to become the leader in the marketplace? And the answer is more or less, yes, it is.

He said Google was concerned about its reputation, meaning about what people thought of it. And I think pretty much the answer was, probably yes. Then the interviewer asks Geoffrey for his comments on what jobs people should do, what degrees they should do. Degrees means university qualifications: bachelor's degrees, master's degrees.

The interviewer says, it seems like the world is being thrown up in the air. So if things are up in the air or if things are thrown up in the air, it means they are sort of suddenly there's no order anymore and we're not sure about anything. Everything's up in the air at the moment. We can't be sure about anything. So what would you advise somebody to study to surf this wave? Meaning the wave, meaning this trend in society, right?

To surf the wave is to get yourself into a position where you can actually benefit from the situation. So instead of being drowned by the wave, you can somehow climb on top and ride along. So basically, what job should people do in order to find a good position in this situation? And he said, my best bet. My best bet means my best prediction.

And his best prediction is that plumbing is a good idea. Plumbing, I've talked about it several times already. Notice there's a B in the middle, but it's not pronounced 'plum-bing'. It's actually pronounced 'plumming'; the B is silent. So again, you know, working with water pipes and things. Just a basic, mechanical,

practical job, because he said these things aren't yet very good at physical manipulation, meaning AI systems are not very good at the actual physical side of things. So we've seen those robots, those Boston Dynamics robots, that are able to kind of clumsily open and close doors. They can maybe draw with a pen, but they don't have the sort of physical dexterity that most humans have, with our ability to use our fingers and thumbs and arms and stuff.

So when it comes to actually fixing a leak in someone's kitchen, a human being is still much better at being able to open up the doors of the cupboard, look inside, work out what the problem is, and then get into a position where you can open up the pipe and fix it and stuff like that. So we're still better in terms of physical manipulation. So something like plumbing would be a good career solution.

Driving, says the interviewer. And Geoffrey says, no, driving is hopeless, meaning there's no hope in terms of driving. So a career in driving, there's no future in that because, you know, we're going to get driverless cars. It's taking time, but we will get them. He even says journalism might last a bit longer, but I think these things are going to be pretty good journalists quite soon, and probably quite good interviewers too.

And that's where the interviewer decides to end the interview. When he says, oh yeah, they'll be better at interviewing than you. And the interviewer goes, okay, well that's the end of this interview. Thank you very much for your time. You're welcome. So what do you think? I'm going to ask you at the end here to tell me what you think about all this. So leave your comments in the comment section. I'm curious to know what your reactions are to all of this.

Here's what I think. So personally, I think this. I think we would be wise to take these comments very seriously. Although, as I said, I don't know what to do. I don't know what to do about it. I mean, what should we do? Us, normal people, I have no idea. I think there's no doubt that as AI develops, it will make many of us redundant, meaning our jobs will disappear.

So God knows what the world is going to look like just for that reason alone. People often say that AI won't completely replace a lot of jobs because people like the human factor, but I'm not completely sure. But you can tell me, you know, would you choose me, a human, as your English teacher over a very effective AI teacher? Would you choose me simply because you like the fact I'm a human, even though arguably I might be

less efficient and more prone to human error than some amazing AI language teacher that could look and sound just like me. Does it matter that it would look like me? I don't know. When AI gets to a certain point, it will be extremely good at being human enough to make it satisfying for us to interact with them. At the moment, AI...

bots, and I'm talking about ones that look like humans, so you could have an actual video conversation with one where you can see its face and its lips moving and its body language. At the moment, it's not very satisfying to have conversations with that kind of thing. It's not bad, and it's certainly a lot better than it was, but they're still not very convincing, and it's generally kind of awkward.

It's not so bad having a conversation with ChatGPT, for example, where it doesn't have a face, it's just a voice replying to you. That's actually surprisingly amazing, but in terms of replicating a video call with an English teacher, it's not there yet. But when it does get to that point, and that will be rapid, eventually we'll probably be able to have essentially video calls, Skype calls or Zoom calls,

with an AI human, and it will be really difficult to tell that it's fake. So we'll get there. I think people will quite gladly switch to AI versions of many things, because the quality and the naturalness will be exceptional. It's going to be insane.

I mean, that raises all sorts of other questions about fake news and stuff, of course, as well, right? How will you know what is real anymore? Which again goes back to that idea of the potential for propaganda and government-level control. It'll be quite possible that The Matrix will sort of come true, where eventually...

the powers that be, maybe I'm being very paranoid, but I don't know, the powers that be will be able to essentially just create a completely simulated world for us. I mean, that's already happening, of course.

with the way that information is controlled. It's happened for centuries, of course, but with the internet, with different forms of social media, with the way that the government monitors, regulates, controls the media that we see, it essentially defines our reality.

And all you need to do is just have a look at the state-controlled media and the narrative that it presents. And you'll see that they are definitely controlling your reality. And when you look at... Well, anyway, I won't go on. But anyway, so it raises questions about that. But also, basically, when these...

simulated conversations with a human become really natural, I think, sure, people will just switch to them. They'll have no problem with it. AI will be absolutely central to everything we do. In entertainment, for example, we will be able to create our own movies instantly.

Choosing all the factors we want, for example, which actors, the themes of the movie, the style, the storyline type, anything, and generative AI will create it instantly. It'll create our perfect movie and we can just sit down and watch it and it'll be good. We'll be able to do the same thing with music. We'll be able to say, you know, hey, whatever, you know, you'll just talk to your AI and say, make a playlist of songs that are upbeat, kind of sound like a bit like the Beatles, but with a modern vibe, and it'll just do it.

Make all the songs about learning English, and it'll do it, and they'll be good. A lot of things will be automatic, right? Obviously driving, but so many other things will just be automatic, and AI will understand what we want and will make it happen. One day we will look back in horror and amazement at

the way that we live now. For example, driving: we will look back in horror and amazement that we actually let humans drive cars on highways.

We will think this is madness. We'll think, wait, we let humans drive cars on six-lane highways, driving 80 kilometres an hour in opposite directions, right next to each other? And this was normal, and people did it every day, and people died on the roads all the time? Absolutely insane. We will think it is bonkers. English teachers?

ChatGPT is already stunningly reactive, well-informed and capable as a conversation partner. AI will be able to make strategic decisions for most things, based on much better, clearer and more efficient thinking processes, and will probably produce better results more quickly and efficiently than humans. AI will understand us better than we understand ourselves and might be able to completely manipulate us.

Think of advertising, which is so incisive and effective that it's basically hypnosis or mind control. And hypnotic mind control is definitely a real thing that is practiced in various entertainment situations, but also in military and secret service projects. So AI will probably be able to brainwash us extremely effectively. I am worried about what humans will do to other humans using AI as a tool. Who will control it? What will they choose to do with it?

But, you know, I don't mean to be just completely pessimistic. AI is also awesome. What it can do is incredible and it could be tremendously empowering. You know, in medical situations, we could use AI to help diagnose medical conditions and to provide medical care in the most adaptive way possible.

I already use AI a lot in my job, as I said earlier, but it will develop to unbelievably sophisticated levels in ways that we can't quite comprehend now. But AI is far from being... This is maybe the most negative point in the whole episode. I'm so sorry, everyone. AI is far from being the biggest threat to human existence at the moment. It might even be our saviour, in fact. And that bigger threat is, of course, the climate catastrophe.

which is another story for another time. Now, I want to explore this subject further, but I think that's for another episode. I'm interested in making projections and predictions for how, specifically, AI could change our society over the next 5, 10, 20, 50 years. I think it's fascinating. So what do you think? Does that sound interesting? Let me know, and perhaps I can explore it in another episode. But in the meantime, leave your comments below, perhaps with your own opinion about all of this.

Here are some questions for you to consider. Are you optimistic or pessimistic about this or some kind of combination of the two? Do you use AI at the moment? How do you use it? Could AI be a threat to your job one day or will it open new possibilities and new opportunities for your work? Do you use AI to help you learn English? And will English teachers all be out of a job in the next few years? Would you choose me over my AI alternative?

There's a question. Anyway, thank you so much for listening. If you look at the PDF, you'll see a full list of all the highlighted bits of vocab, and also another list with all of those bits of vocab plus definitions, examples and commentary. So there's tons more detail on the PDF, which you can download for free from the episode show notes section.

But that's it. That's the end of this episode. Thank you so much for watching or listening. I just dumped all sorts of heavy things on your mind for this one. I hope I didn't bring down your mood at all. I mean, it's kind of terrifying, but I mean, I feel actually quite energized by the subject. I find it fascinating and quite stimulating, but I'm curious to know what you think about it all. Thank you very much for watching. Thank you for listening.

Speak to you in the next episode of Luke's English Podcast. But for now, it's just time to say goodbye. Bye. Bye. Bye. Bye. Thanks for listening to Luke's English Podcast. For more information, visit teacherluke.co.uk.


If you enjoyed this episode of Luke's English Podcast, consider signing up for Luke's English Podcast Premium. You'll get regular premium episodes with stories, vocabulary, grammar and pronunciation teaching from me and the usual moments of humour and fun. Plus, with your subscription, you will be directly supporting my work and making this whole podcast project possible.

For more information about Luke's English Podcast Premium, go to teacherluke.co.uk slash premium info.