
Artificial Intelligence

2020/11/29

Undeceptions with John Dickson

People

Grenville Kent
John Lennox
Vicky Lorrimar
Topics
John Lennox: Artificial intelligence is a double-edged sword. It can be used for good, as in medical diagnostics, or for surveillance and oppression, and it raises fundamental questions about human nature and the meaning of life. When, or whether, superintelligent AI will arrive is disputed; we remain a long way from achieving it, and whether it would surpass human intelligence or even threaten human survival is unknown. Lennox also distinguishes practical AI from superintelligent AI and examines the applications and ethical risks of AI in medical diagnostics, surveillance and other fields. He argues that genuine intelligence requires consciousness, and that current AI merely imitates human thought and consciousness without truly possessing either. He notes that some people hope to achieve immortality through technology, which differs fundamentally from the Christian understanding of eternal life: the Christian hope is a gift of God, not something humans can engineer for themselves.

Grenville Kent: A market for sex robots exists, but the technology and user experience remain crude, and the industry raises serious ethical problems. Its growth also prompts reflection on human relationships and social ethics.

Vicky Lorrimar: Technology is not morally neutral; its development and application are shaped by the motives of its developers and by existing social conditions. Technological progress far outpaces ethical reflection, creating real ethical risks. Technology can improve lives, but it cannot solve human greed or social injustice. The Christian hope of eternal life differs fundamentally from the technological pursuit of immortality: the Christian hope is a gift of God, not something humans can achieve through technology.


Chapters
The episode introduces the debate around AI, comparing it to a sharp knife that can be used for good or harm, and features Professor John Lennox discussing the importance of addressing AI's impact on human existence.

Transcript

Just a quick content warning for this episode before we get going. Today we talk a little bit about sex and robots and sexual abuse in that context. I'd hate for any of this to shock or distress listeners. So be safe. God bless. Hello, Hal, do you read me? Hello, Hal, do you read me? Do you read me, Hal? Affirmative, Dave. I read you. Open the pod bay doors, Hal. I'm sorry, Dave.

I'm afraid I can't do that. What's the problem? I think you know what the problem is just as well as I do. What are you talking about, Hal? This mission is too important for me to allow you to jeopardize it. I don't know what you're talking about, Hal. I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

I'm all for technology being used in a good sense. I often say to people, AI is like a knife, a very sharp knife. You can use it to operate with or you can use it to kill people. That's the infamous computer HAL from 2001: A Space Odyssey, followed by my friend Professor John Lennox, reflecting on the dangers of today's topic.

Artificial intelligence is the buzz phrase of the 21st century. For some, it's a welcome step into a future that promises all manner of good, like the best-intentioned members of the Star Wars cast. I am C-3PO, Human-Cyborg Relations, and this is my counterpart, R2-D2.

For others, artificial intelligence, AI, is damaging at best, and at worst, a step toward our own destruction. Like Terminator's infamous defense computer, Skynet. They say it got smart. A new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond. And that's what we're here to work out. Is artificial intelligence an underestimated threat,

or the saviour we've been waiting for? I'm John Dickson, and this is Undeceptions.

Undeceptions is brought to you by Zondervan Academic's new book, Telling a Better Story, by Joshua Chatraw. Each episode, we explore some aspect of life, faith, history, culture or ethics that's either much misunderstood or mostly forgotten. With the help of people who know what they're talking about, we'll be trying to undeceive ourselves and let the truth out.

John Lennox is Emeritus Professor of Mathematics at the University of Oxford and Emeritus Fellow in Philosophy of Science at Green Templeton College at the university. He took his MA and PhD in Mathematics at Cambridge University, and for his later research, he was awarded a Doctor of Science by the University of Wales in Cardiff. When he moved to Oxford, he was awarded a Doctor of Philosophy as a matching honour for his Cambridge doctorate. That's a weird and ancient practice in the Oxbridge tradition.

Oh, and in his spare time, he also punched out a master's in bioethics from the University of Surrey. Most important for our purposes, he is also the author of 2084: Artificial Intelligence and the Future of Humanity, the product of years of reflection and research.

John, I remember a few years ago now you telling me how excited you were to be writing a book about artificial intelligence and felt it was really important. Can you tell me, like, wind back, why did you feel it was such an important topic to address?

One of the main reasons that brought it to my attention was the fact that I could see, I've always been interested in this kind of thing, but I could see that increasingly the issues being raised by artificial intelligence were encroaching on the fundamental existential question, what is a human being? What is human life? And it came to a head when a fairly senior Christian leader asked me to give a talk

in London to a large group of Christian leaders on Genesis and artificial intelligence. And I, in one sense, I said, look, you've got the wrong person. And they said, no, we think that you've got a lot of insight into Genesis and we'd like you to relate it to the developments. And once I started doing that, it was immediately clear to me

that this deserved a much deeper investigation. And so I put together a lot of stuff I'd already read and then did a great deal more research and it ended up with this book. Which is a fantastic book. Can you give us a lay of the land? Where are we up to in artificial intelligence and what's just around the corner? Well, I can't see round corners even though I'm an Irishman. But

Let me put it this way, that AI has got two very distinct aspects to it. The first is the AI that actually works, is being used all over the world, and is being used for some very good things and some very questionable things.

A typical AI system used in medical diagnostics consists of a powerful computer, a very large database, say, of pictures of lungs,

which have diseases and all the pictures, let's say there are a million of them, are labeled with the diseases they represent by the best medics of the day. And then you get pains in your lung, an x-ray is taken, and the AI system compares the pattern in your lungs with those million others

and comes up with a diagnosis. And these days, that diagnosis is liable to be considerably better than you would get at your local hospital. So what is it doing? It's not really intelligent, but it's doing a single thing, one thing only, that normally requires human intelligence. And that is being rolled out all over the planet. And as I say, there are lots of good things. But

If you just tweak this slightly, you're into surveillance and pattern recognition. And of course, for a police force, it's wonderful to be able to recognize terrorists and criminals in a football crowd. But unfortunately, this kind of surveillance technology lends itself to oppression and violence.
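For the technically curious, the diagnostic system Lennox describes, where a new scan is compared against a large database labelled by expert medics, can be sketched as a toy nearest-neighbour search in Python. This is an illustration only, not how production medical AI works (real systems use deep neural networks), and every name and number below is invented.

```python
# Toy sketch of "compare the new scan against a million labelled scans".
# Invented example data; real systems use deep learning, not literal search.

def diagnose(new_scan, labelled_scans):
    """Return the label of the stored scan most similar to new_scan.

    new_scan: list of pixel intensities.
    labelled_scans: list of (pixels, label) pairs supplied by expert medics.
    """
    def distance(a, b):
        # Sum of squared pixel differences: smaller means more similar.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Pick the labelled scan closest to the new one and return its label.
    _, best_label = min(labelled_scans, key=lambda pair: distance(new_scan, pair[0]))
    return best_label

# A tiny invented "database" of three labelled scans.
database = [
    ([0.1, 0.2, 0.1], "healthy"),
    ([0.9, 0.8, 0.9], "pneumonia"),
    ([0.5, 0.5, 0.5], "fibrosis"),
]

print(diagnose([0.85, 0.8, 0.95], database))  # → pneumonia
```

The same pattern-matching machinery, pointed at faces in a crowd instead of lung scans, is what makes the surveillance uses Lennox mentions possible.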

There are very serious breaches of human rights in Xinjiang in China that most people around the world know about. And it's extremely intrusive, artificial intelligence. And so we need to be able to discriminate. And so there's a whole ethical matrix of questions that go along with it. That's the stuff that actually works. And in fact,

The book is called 2084, which was a suggestion by Peter Atkins, by the way, my atheist colleague at Oxford. And it fits because 1984 was the book where George Orwell gave us Big Brother and all this kind of thing. That's already happening. So the first sort of AI is stuff that's already being rolled out. It works. The second kind is the much more science fiction-like stuff where we're talking about

creating super intelligence and will super intelligence make humans redundant or keep them as pets or rule the world or destroy humanity as many people fear. And that more speculative type of thing, of course, attracts a lot of journalistic attention. So I thought in the book, I would like to look at that because

scenarios about the future are actually important. And as a Christian, I felt that the biblical worldview has things to say that, interestingly enough, relate very strongly to what some AI proponents are claiming. So that's basically the... That's right. If you didn't already know, John Lennox is not only an internationally respected mathematician,

He's a world-renowned advocate for the Christian faith. And like the great scientists of old, Boyle, Newton and so on, he sees Christianity as enhancing, not obscuring, his study of the world. More from John later.

Professor Lennox talked about two different types of AI. The first is as a tool. That's where manufactured intelligence does tasks humans can do, only the AI can do it faster and more accurately. Like the replicants in Blade Runner, the explorer robots in Interstellar, and of course everyone's favourite clean-up bot, WALL-E.

The other kind of AI is more confronting. It's the sort of artificial intelligence that exceeds our own. It could be benevolent, just a lot smarter, like the self-aware operating system from the film Her. Hello, I'm here. Oh, hi. Hi, how you doing? I'm well. How's everything with you? Pretty good, actually. It's really nice to meet you.

Oh, it's nice to meet you too. Oh, what do I call you? Do you have a name? Um, yes. Samantha. Wait, where'd you get that name from? I gave it to myself, actually. How come? Because I like the sound of it. Samantha. Wait, what?

When did you give it to yourself? Well, right when you asked me if I had a name, I thought, yeah, he's right, I do need a name. But I wanted to pick a good one, so I read a book called How to Name Your Baby, and out of 180,000 names, that's the one I like the best. Wait, you read a whole book in the second that I asked you what your name was? In two one-hundredths of a second, actually. Or it could decide it no longer needs humanity. Like the Avengers' technological villain, Ultron. You're all puppets. Tangled in...

Strings. Strings. There are no strings on me.

We don't know which one we're going to get. And some people fear that the day we're going to find out is, technologically speaking, just around the corner. It's called the singularity. And it was defined at the Singularity Symposium of 2019 as a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

Technologists like Elon Musk are a little worried. Elon Musk in the New York Times recently said, and I quote, we're headed toward a situation where AI will be vastly smarter than humans in five years.

things will get unstable and weird. Do you agree with that timeframe? And where do you see AI in five to 10 years? I know you can't see around the corner, but in terms of the trajectories, what are people straining toward? It's pretty clear that the technology will keep advancing, but ideas that it'll be smarter than human beings, we need to subject that to scrutiny.

Artificial intelligence is artificial. One of the best summaries of it is the title of a paper by actually one of the early pioneers who happens to be a Christian, where he said the artificial in artificial intelligence is real. It's not real intelligence. Now, real human intelligence is coupled with consciousness.

And if you're talking about something being smarter than a human being, what are you claiming? It's either one of two things. One of them is this, that your AI system can do everything a human being can do, but do it faster and more efficiently, etc., but not itself be conscious that it's doing it. The other thing is that some people think that they're going to construct

a conscious artifact. Now, that is so far away from any reasonable hope for a very simple reason. No one knows what consciousness is. And so it's very difficult. That's the big barrier in the way of truly constructing something that represents the kind of intelligence we humans have. Now, clearly, you can add one AI system to another and have

competence at many levels. But I think Elon Musk is pushing his boat out a bit far when he thinks it's within five years. Ray Kurzweil, one of the great gurus in this field, he always puts it about 30 years hence. It's always been 30 years hence. And there are very differing opinions.

There is, of course, the sci-fi squad who are not scientifically qualified and write novels about this kind of stuff, like Dan Brown and, to a certain extent, Yuval Noah Harari, with whom I've interacted for the simple reason that he's influencing so many people. But then there are some leading scientists who take these things extremely seriously, notably

Lord Martin Rees, our Astronomer Royal, who thinks that down the years, but he puts it as centuries, I think, that the beings that exist will have very little memory of us, so to speak, and may well be controlling the planet. There's enough fear out there among the intelligentsia to

produce a real scramble for ethical controls that are international. And of course, that's proving extremely difficult. People have been discussing the ethical limits we need to place on technology for as long as we've been dreaming about technology. Futurist and science fiction author Isaac Asimov wrote his Three Laws of Robotics, rules that need to be programmed into every advanced intelligence. This became the basis for the film I, Robot.

A robot cannot harm a human being. The first law of robotics. Yeah, I know. I've seen your commercials. But doesn't the second law state that a robot has to obey any order given by a human being? What if it was given in order to kill? Impossible. It would conflict with the first law. Right, but the third law states that a robot can defend itself. Yes, but only when that action does not conflict with the first or second laws. You know what they say. Laws are made to be broken.

Ethical discussions don't just centre on what robots should or shouldn't do, but on what we should and shouldn't do with all the advances coming down the pipeline. And one of those tricky issues concerns sex. Dr Grenville Kent gave a paper at this year's ISCAST conference, a meeting of Christians who are professional mainstream scientists. Grenville's paper was titled Sex Bots, Love and Marriage.

I asked him about the use of artificial intelligence in the sex industry. How serious are people, experts, industry, about sex and relationships with robots? Yeah, look, if you're asking for the market research surveys, something like about 30%, 40% of Americans and Europeans think they might have sex with a robot sometime in their life.

If you talk to the enthusiasts, "Oh, it's going to be the best thing since sliced bread." You know, there's whole books now since about 2017 when the scientific journal Nature asked academics to get into this field and take it seriously. There are books there promising you the most incredible sex and relationships of your life and deep intimacy as you stare into the glass eyes of this robot.

I think most people realize that there's a bit more to go in terms of AI before we'll have things like conversation, empathy, rationality, freedom of choice, personhood.

But are there university research labs working on this or maybe industry research labs working on this? Oh, man, there are so many nerds working on this at the moment. It's hilarious. I mean, take kissing, for example. If you wanted to kiss your beloved from a distance over the internet, you've got a thing called the teletongue, which is basically a pair of plastic lollipops

they sense the gestures that you're doing with your tongue and the sound of it, and they transmit that across the internet to your partner. And as the two nerdy scholars who write about this, they say, a problem with this device is the unnatural and rigid user interface. The straw of the device essentially serves as the tongue of the remote kisser. However, its shape and texture do not bear any resemblance to a human tongue. So the best, you know, you compare that to a passionate and loving moonlight kiss.

And it's a plastic straw rattling around in your mouth. That's the best we've got on the market. But have they gone beyond sex dolls? I mean, I know sex dolls is quite a thing, but have they gone beyond that to sort of partly robotic sex dolls? Well, yeah, mechanotronic. I mean, they're kind of mechanized. They'll move a bit. And I guess they've got the conversational skills of Siri or Alexa. You can put it that way. So no, not really convincing.

at all, unless you want to suspend disbelief. You know, as we do when we watch fiction, we watch some fiction film, we go, like, I'll play along. You can do that. But when the reporters describe interaction with the dolls in one of the many brothels, interestingly, which are popping up across America and Europe and Asia,

Hang on, brothels with non-human? Correct. Full sex doll brothels all over. There's one called, something to do with dollies, I forget what it's called, in London. Why would someone go there? I mean, is it much cheaper than the real thing or what? It's no cheaper and there's almost no human contact. But they're growing in popularity, I think probably because you don't have to talk to a person.

And yet it was interesting in Barcelona, the Sex Professional Association said they're no threat to prostitutes because, quote, they do not communicate. They do not listen to you or caress you. They do not comfort you or look at you. They do not give you their opinion.

So there's not a person there. And alarmingly, there's reports of serious violence done to those dolls. Like repair people going in there and fixing rips and breaks and punches. Some people just, some pretty sick stuff is done in those brothels, it seems, from what we're told. So maybe that is the attraction to the non-human. Yeah, potentially, that you can just unleash anger and pain. Yikes.

Artificial intelligence allows us to demean ourselves and others.

Vicky Lorrimar has a degree in genetics and biochemistry, a master's in divinity, and now a doctorate in theology and science from Oxford University. She studied under the wonderful Alister McGrath, a guest on the show last season. Anyway, Vicky is now back in Australia and teaches at Trinity College Queensland, where she focuses on theological understandings of the human being and how that relates to human enhancement technologies.

She reckons there are significant justice issues we need to be thinking about in all of this. Well...

I don't buy into the notion that technology is completely neutral from a moral standpoint. So I don't think it can be as simple as saying, well, technology can be used for good and for evil. But I do think it can be used with different motives and intentions. So I think there is always the potential for technology to improve lives. And if you speak to technologists, many of them are really idealistic about making the world a better place. That is the goal. So I think

Some technological developments might help people survive in harsh environments. It might improve nutritional efficiency, all kinds of things that really would kind of improve the life of the poor.

But then I think we have to start asking the question of who's developing the technology and what is the vision of the good life that sits under that. And, you know, without being too pessimistic, I don't think we can necessarily engineer away our own sort of greed or completely do away with existing inequalities and injustices. I think these will kind of be brought along with us if technology is the only kind of way forward. Yeah.

On the other hand, there are strong hopes that these new technologies are going to elevate us, make us better human beings. That's after the break. I'll be back.

This episode of Undeceptions is brought to you by Zondervan Academic's new book, ready for it? Mere Christian Hermeneutics: Transfiguring What It Means to Read the Bible Theologically, by the brilliant Kevin Vanhoozer. I'll admit that's a really deep-sounding title, but don't let that put you off. Kevin is one of the most respected theological thinkers in the world today.

And he explores why we consider the Bible the word of God, but also how you make sense of it from start to finish. Hermeneutics is just the fancy word for how you interpret something. So if you want to dip your toe into the world of theology, how we know God, what we can know about God, then this book is a great starting point. Looking at how the church has made sense of the Bible through history, but also how you today can make sense of it.

Mere Christian Hermeneutics also offers insights that are valuable to anyone who's interested in literature, philosophy, or history. Kevin doesn't just write about faith. He's also there to hone your interpretative skills. And if you're eager to engage with the Bible, whether as a believer or as a doubter, this might be essential reading.

You can pre-order your copy of Mere Christian Hermeneutics now at Amazon, or you can head to zondervanacademic.com forward slash undeceptions to find out more. Don't forget, zondervanacademic.com forward slash undeceptions.

68-year-old Tirat was working as a farmer near his small village on the Punjab-Sindh border in Pakistan when his vision began to fail. Cataracts were causing debilitating pain and his vision impairment meant he couldn't sow crops.

It pushed his family into financial crisis. But thanks to support from Anglican Aid, Tirat was seen by an eye care team sent to his village by the Victoria Memorial Medical Centre. He was referred for crucial surgery. With his vision successfully restored, Tirat is able to work again and provide for his family.

There are dozens of success stories like Tirat's emerging from the outskirts of Pakistan, but Anglican Aid needs your help for this work to continue. Please head to anglicanaid.org.au forward slash AnglicanAid.

and make a tax-deductible donation to help this wonderful organisation give people like Tirat a second chance. That's anglicanaid.org.au forward slash Undeceptions.

It's easy to see how artificial intelligence could be a threat to humanity's future. But of course, many insist that technology is the key to our salvation. Since our earliest days, people have seen technology as holding the potential to lift us to almost godlike status. Think of the sword in the Old English epic poem Beowulf. The sword gets its own name, Hrunting, a bit like the way we name AI computers and robots.

And another item lent by Unferth at the moment of need was of no small importance. The brehon handed him a hilted weapon, a rare and ancient sword named Hrunting. The iron blade with its ill-boding patterns had been tempered in blood. It had never failed the hand of anyone who hefted it in battle, anyone who had fought and faced the worst in the gap of danger. This was not the first time it had been called on to perform heroic feats.

Then there's the ancient Jewish story of the Tower of Babel, a piece of technology specifically designed to elevate humanity to God status. Now the whole world had one language and a common speech. As people moved eastward, they found a plain in Shinar and settled there. They said to each other, "Come, let's make bricks and bake them thoroughly." They used brick instead of stone, and tar for mortar.

Then they said, Come, let us build ourselves a city with a tower that reaches to the heavens so that we may make a name for ourselves. Otherwise we will be scattered over the face of the whole earth. Reaching to the heavens, making a name for ourselves, ensuring our longevity. There are some who think this dream could be fulfilled soon.

Yuval Noah Harari is an Oxford-educated Israeli historian and technology philosopher. His famous 2016 book, Homo Deus, literally Man-God, describes what he sees as the probable emergence of a new super race of men and women endowed by technology with supreme abilities, including perhaps eternal life. Harari seems to be, you know, sort of off the dial, uh,

in his expectation of the utopia that AI can bring. Well, yes. He's the first name that comes to mind. But more accurately, you see, he thinks that this movement, this techno movement, may contain the seeds of its own destruction. And I find a bit of equivocation with Harari.

As you go through his book, Homo Deus, and of course, once I saw the title of that book, that resonated very much with the quest to be gods that begins in Genesis and is going on in the AI community of the Harari type. I think he is in one sense an optimist, but there are warnings in his book.

that this thing may sow the seeds of its own destruction. So I find it a bit hard to pin down, actually. And as Vicky Lorrimar points out, this quest for godhood through technology has already taken on a religious vibe. You've said that technology tends to occupy the roles normally attributed to deity. What do you mean by that? Well, I think...

For people who have a religious worldview, a God offers hope, a God offers a way to be redeemed or glorified, gives understanding for why things aren't the way they ought to be. This means to sort of lift ourselves, to elevate ourselves out of whatever present condition we're in. So if that is possible,

hope that's traditionally offered by religion. I think for some, technology has become a substitute for that. So we can almost engineer that for ourselves. We can make our lives better through our own ingenuity. So I think that's an enormously hopeful vision for some people, yeah. Even redemption? I mean, do we see that longing for redemption in technology, do you think?

That's an interesting one because I think, you know, even if God is not a part of someone's worldview or their explanatory landscape for how things are, there's still, you know, this universal sense, I think, that we could be more or that we want to be more. And we might call it different things. You know, Karl Rahner talked about the longing for the infinite that's universal, right?

We have different ways of describing that kind of longing to be more than we are, that sense that we could be more, we're meant to be more. What I think is interesting with transhumanism is actually it appeals to some classically Christian language. So we have words like transcendence and angelic and becoming godlike and all of this language that for centuries was reserved

for religion and then after that was almost not permitted language in the public spaces, now coming back in again under a different guise. I think that's really interesting. What are you most concerned about in the rush to leap forward in artificial intelligence? Well, the rush is the right word. Technology advances much more rapidly than ethical thinking.

And it's noticeable that the ethical thinking is not going very fast at all, although there are people. Hawking was involved. Musk is involved. I think about 6,000 people were involved in constructing what are called the Asilomar ethical principles. And...

The Asilomar AI Principles are a list of shoulds and should nots endorsed by AI and robotics researchers. The list goes way beyond Asimov's three laws. It's kind of the Ten Commandments of tech, except that there are loads of them. Stuff like...

The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. If an AI system causes harm, it should be possible to ascertain why. An AI system should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity. I suppose some people will see this as moralizing, getting in the way of good technology.

But I'd say it's putting humanity before our stupidity. And they're all wonderful, but I'm involved in a business school here. And how many chief executives have said to me, it's one thing to have a mission statement on your office wall. It's another thing to get it into the hearts of your executives.

And that's exactly it. How do you get real agreement, since there are a sufficient number of people in the world who take the view that if it can be done, it should be done, without any concern for ethics? And so there's a real problem. And that's one of the reasons, incidentally, that I encourage bright young scientists who are Christians to get involved in this stuff

Because there's a lot of good that can be done. Witness the work of Rosalind Picard at MIT in helping autistic children and so on. There's so much good that can be done. But we need people who are articulate and have gone into the ethical dimension of this.

We may have quite a lot of time to work all this out. As Dr Grenville Kent points out, even the most optimistic developers of artificial intelligence are nowhere near developing the sort of godlike AI that some folks fear. Well, the first thing to say is there are a lot of top AI people, and I've interviewed some of them, who say we will never achieve full AI.

that what is happening in your smartphone, it's just following algorithms that are programmed into it. It's not conscious. It doesn't have any sense of, like I do when I watch it and see that my team, the Manly Warringah Sea Eagles, have lost again, and I want to jump under a bus. The phone doesn't have any emotion at all. It doesn't even know what football is. It's not smart to that degree, any more than a thermostat is smart when it goes, oh, it's 22 degrees. I better switch the air conditioner on. It doesn't know what hot is or cold is. It hasn't felt

It has no experience. And so if that's the case, you know, it's one thing. Step one is to get robotic control and we're pretty good at that. Step two is to get robotic perception. That's really hard, but we're getting there. That's why we have, you know, on the internet, prove that you're not a robot, you know, show all the bits of this picture that have a bus in it or whatever. But the third level that nobody has cracked yet is full AI. And we use the metaphors of intelligent machines and learning machines. But in fact,

There's no real mind there. There's no real consciousness there. There's no real experience there. It's all just following. If this, then do that. It's ones and zeros. There's not a thought there. So if that's the case, I mean, I think what we're really talking about, and a lot of scholars say this, what we're really talking about is robots imitating human thought and human consciousness and pretending they feel what you feel and understand you and agree or disagree with you.

and wanting to talk about your team's progress in football because you want to, but there's really no one home. And it's just kind of a teddy bear for grownups in the sense that you have an illusion, probably not a delusion because part of you goes, I know this is not real, but you basically have an illusion that you'll suspend disbelief and the thing is actually a person when we all know it's not. And Vicky Lorrimar reckons there are some things AI will never achieve.

Why do you think the Christian hope for immortality is something that we can't manufacture for ourselves? Because some people reckon that, you know, just give it enough time and we will actually be there through technology. Yeah, I think this comes down to a crucial distinction

between what is the Christian hope and what is the hope that underpins a lot of these technological visions of immortality. And the key question around the Christian hope becomes how does the future relate to the present? And if we see the future as just...

the continuation of the present, something we will simply arrive at, the eschatological future, to give it the Christian language, that sort of end time, just with the passing of more time,

then that's one understanding of the future. And that's the understanding of the future that transhumanists are operating on. With just more time, we continue along the trajectory we're currently on, and we'll just be on this ever upward kind of path of progress.

Whereas the Christian hope says that actually this is not the future we're anticipating. So we don't say it's entirely discontinuous with the present. In some way, the present world, the present creation is going to be related to the new creation that we're hoping for.

But we also expect that God is going to break in. The future that Christians are hoping in is something that is unexpected, that breaks into the present in a way that just the passing of time on its own won't achieve. So that's where I think we can create all kinds of technologies and they may actually contribute in various ways to building the kingdom in the future, in the space that we have now and with the responsibility we have now. But that final redemption, that breaking in, is something that God does and it's something that we can't do on our own and that time itself won't bring about. And it's a glorification, isn't it? I mean, that's what you're saying, rather than an extension. Yeah.

Yeah, absolutely. It's a transformation. I like to think of it in terms of even time itself being transformed. So the final future of Christian hope is not just an extension of present time, of present existence; it's a transformation. Let's press pause. I've got a five minute Jesus for you. There are two very ancient traditions about how to elevate humanity.

The first says that we can lift ourselves up through brute force, human effort, and especially technology. Perhaps the most ancient example is the Tower of Babel, people trying to build a tower to the heavens.

There are hints of the victorious possibilities of technology in the Trojan horse story from Homer's Odyssey and Virgil's Aeneid. Soldiers hide inside a giant mechanical horse and take the city of Troy by stealth. Perhaps the most concrete examples are from the Roman Empire,

where weaponry rapidly advanced and military engineering was incredibly complicated and successful. Just Google Roman battering ram and you'll see what I mean. The Romans were amazing technologically. Then there's the Roman engineer Vitruvius, who even worked out how to pour and set concrete underwater.

He wrote a manual about it, and you can visit the ancient Israeli seaport of Caesarea to see a stunning example of how it worked. In all of these examples, we catch a glimpse of humanity's ancient optimism to lift ourselves up to be the saviors of our own predicament.

At the same time, there was also the occasional recognition that we are not able to lift ourselves up by our shoelaces. In Greek and Roman literature, we find another curious motif of divinity stepping in to help where human prowess had reached its tragic limits.

There are a few examples in Homer's Iliad, with gods deciding on a whim to help out on the battlefield, or in the famous Roman epic poem The Aeneid by Virgil, where deities redirect humans to their safety or success. In Greek and Roman theatre, this is known as apo mechanes theos, or in its Latin equivalent, deus ex machina, literally, God from a machine.

It refers to a staging device in ancient plays where a figure representing a god would literally be winched down onto stage. This would be the moment when everything in the story would be turned around.

The plays of Euripides used this regularly in the 400s BC, and it became such a cliché in ancient Greece that the slightly later playwright Aristophanes in the 300s BC used it as comedy. The god appeared almost as a joke in the middle of the play. And in one of his plays, Aristophanes makes a really good joke by having a figure of the playwright Euripides himself winched onto stage as the deus ex machina.

We now use the expression deus ex machina to refer to any kind of dramatic turn in the plot. It's the surprise occurrence that no one deserved or expected. Both of these ancient traditions tell us something real about the human situation. It's right that we aim for the stars, or fill the earth and subdue it, as the Bible itself puts it. But it's also wise to admit our limitations, technologically and morally.

However much it might appear as a cliché, humanity does often need a deus ex machina.

Failing to acknowledge this is to fall into the trap of backing ourselves too much, of seeing ourselves as all-knowing gods. And that is just as much an awful cliché as the comedic overuse of needing rescue from God. The Bible affirms the status of every human being as made in the image of God, called by the Creator to rule, explore, and curate this planet.

The Bible applauds human attainment. The Bible also echoes the deep human longing for the deus ex machina, the hope that God might come down onto the human stage and lift us up out of the mess we've got ourselves into. As the Gospel of John puts it in one of the most quoted passages in Scripture,

You can press play now. Next episode, we continue in the weird world of artificial intelligence, focusing on human upgrades.

Can we enhance human happiness through AI? Does technology hold the keys to the next stage of human evolution? Welcome to the mind-blowing world of transhumanism, plus more on sex bots.

In the meantime, got questions about this or other episodes? I'd love to hear them and we'll answer them in an upcoming Q&A episode. You can tweet us @Undeceptions, send us a regular old email at [email protected], or if you're really brave, record your question for the show by heading to undeceptions.com and hitting the record button. While you're there, check out everything else related to this episode and plenty more bonus content. And if you're interested in other good podcasts, check out With All Due Respect,

with Michael Jensen and Megan Powell du Toit, part of the Eternity Podcast Network. Till then, looking forward to you joining us on a future episode of Undeceptions. See ya.

Undeceptions is hosted by that guy, John Dickson, produced by Kayleigh Payne and directed by Mark Hadley. Me. Editing by Nathaniel Schumach. And before I go, a random shout out to International Justice Mission, who are doing extraordinary things around the world to end trafficking and slavery. Go to ijm.org.au.