Welcome to Stories of Impact. I'm your host, Tavia Gilbert, and along with journalist Richard Sergay, every first and third Tuesday of the month, we share conversations about the art and science of human flourishing. Today, we're back with Richard's fascinating interview with Lord Martin Rees, the UK's Astronomer Royal and the co-founder of the Centre for the Study of Existential Risk at the University of Cambridge.
Joining them in conversation are two of the Centre's research associates, nuclear war expert Dr. Paul Ingram and geohazards and geocommunications scholar Dr. Lara Mani. They discuss the Centre's research into potential risks to civilization and life on Earth as we know it, from nuclear weapons to pandemics to natural disasters.
And perhaps most importantly, they share what gives them a sense of hope for the future of humanity and for the planet.
Here's Richard. The Centre for the Study of Existential Risk: tell me what it is and what its mission is about. Well, it was founded about 10 years ago, and its main mission really is to try and understand and, if possible, alleviate the new class of risks which stems from the fact that the world is more interconnected
and nations and individuals are more empowered by new technology. And so for the first time, there's a serious threat of global catastrophes of new kinds caused by humans collectively, like climate change, or by a small number of humans by misapplying bio or cyber. There's a whole range of technologies, and we want to try and understand them in the hope of doing something to mitigate them.
Paul, from your perspective, you're looking at climate, you're looking at technological issues. Give me the panoply of issues that the Centre looks at. Yes, certainly. So we're looking at hazards that include emerging technologies, but also technologies that have been with us for a little while, like nuclear weapons, biological weapons,
and the pandemics that we've seen recently. We're also looking at more natural hazards, looking at volcanoes and asteroids, but mostly we're looking at the ways in which these hazards interact and the vulnerabilities that our societies suffer from as a result,
vulnerabilities that are increasing with complexity and interconnection. And we're looking at ways in which society can better govern itself, the ways in which we can overcome some of the collective action problems that make global governance so challenging in these areas. If you were to list the top two or three or perhaps five issues facing the planet, what would they be?
Richard, I always hate that question. Well, every journalist has to ask that question. I understand. I understand. And the reason why I hate it, I'll just say, and then I will try and answer it. The reason why is because if an alien were to land on this planet a few months after we destroyed ourselves and asked itself the question, what was it that finished the human race off,
we would almost certainly have a complex answer, explaining the ways in which all these different things interacted and how the roots of our destruction have been based in the way in which human beings have managed their technology and each other for quite some time. Having said that,
and having studied nuclear weapons for the last 40 to 45 years myself, I would say that they're pretty serious and up there. But on the other hand, if a supervolcano of sufficient magnitude erupted, that would finish us off much more certainly than a nuclear war.
Then you have the emergence of very uncertain, very unpredictable technologies that are proceeding at significant pace. And a large proportion of our center is devoted to the uncertainties around artificial intelligence and the way in which human beings will have less and less control over the outcomes and the way in which artificial intelligence is likely to interact with these other hazards.
Then you have biological hazards, both deliberate and accidental, or natural rather, which could interact and, because we are not very effective at governing ourselves, could well end up causing the deaths of billions of people. So I find it very difficult to come to a conclusion.
But what I would say is that some of these are very rapid impact, but uncertain. And some of them are slightly slower impact, but very certain. And climate change is a threat that I think is already affecting our capacity to live on this planet. It is
certainly going to make it more challenging and introduce stresses to our society. But it's still deeply uncertain as to how that will then manifest itself in raising the risks that we study in other areas.
I want to come to Martin in a moment, but Lara, I'd like to hear from you in terms of the natural world and your concerns in terms of the challenges that lie ahead. Yeah, I think I would mostly agree with Paul there on much of the kind of landscape of global catastrophic and existential risks.
It's fair to say that I think most people at CSER would view our potential demise as more a series of gentle nudges towards catastrophe rather than something that might necessarily take us out. Saying that, natural hazards might be among those that could potentially do that. If we were struck by an asteroid large enough, for example, that would near certainly have devastating consequences for the future flourishing of humanity.
And much of my research focuses on the risk posed by large magnitude eruptions. This has been a really emerging area of discussion within the Centre over the last few years, and I believe it's been underestimated for quite a long time. There's that classic narrative of supervolcanic eruptions, these extremely large, colossal-scale eruptions that could potentially cause dangerous climatic feedbacks that could devastate our world and lead to the demise of humanity.
And those types of eruptions are incredibly rare, with recurrence intervals of tens of thousands of years for that kind of event. But actually the mechanism by which a volcanic eruption could potentially cause catastrophe is the release of lots of gas that reacts with water in our atmosphere and begins to reflect some light back out into space, which can be extremely devastating for our global food supplies.
And we actually think eruptions magnitudes lower are able to trigger that mechanism. And eruptions of that scale have a recurrence interval of about every 625 years. So this is actually really high. And this is the one that I'm most concerned about right now. And of course, just at the beginning of last year, we had the big Hunga Tonga-Hunga Ha'apai eruption in the South Pacific region,
which just proved to us that our knowledge of volcanoes and their systems is still evolving. It's a nascent field, and we discovered new mechanisms that we didn't know existed before, such as water being lofted up into the atmosphere causing a heating effect.
We've seen the calving of ice sheets in Antarctica as a result of the shockwave causing a tsunami. There are mechanisms that we don't even understand and are just realizing. So for me, I'd advocate for volcanoes being right high up on that list. But I would also concur with Paul that things around the malicious use of technologies and things like pandemics are also high up that list. Martin, you've written extensively, and you've sent me a lot of your writings on AI and technology and their potential
threats to humanity. If you were to list some of the challenges ahead, what concerns you the most? Well, I'd like to say I'm very concerned about what we've heard: the collective effect of humans being more numerous and more empowered by technology on the planet is what's leading to climate change, loss of biodiversity, mass extinctions, and things like that. And even natural phenomena like volcanoes, earthquakes,
and asteroid impacts, although their rate is unaffected by humanity, their consequences are greater in a densely populated world where we depend on technology. So that's why they're on our list of important threats today. But I think the kind of threats that
my writings are focused on are the ones which are caused by misuse of new kinds of powerful technology where just a few bad actors, as it were,
could cause a catastrophe that could cascade globally. And this is true of both cyber and bio. In the context of bio, we know that in an interconnected world, a pandemic can cascade and spread globally quite quickly. And we saw that in the case of COVID-19. And of course, we are aware that there could be natural pandemics which will spread in the same way, which could have a much higher fatality rate.
than COVID-19. So we have to worry about those. But we have to worry even more about possible engineered pandemics, because as we know, it's possible to do these so-called gain-of-function experiments and make a virus more virulent or more transmissible. And my worst nightmare, actually, is of that being done by someone who then releases this pathogen
in a way that spreads globally. Of course, this is unlikely to be done because no rational person with rational aims would use biological weapons at all, because you can't predict their consequences. That's why they're not used in warfare, and a terrorist group with well-defined objectives wouldn't use them either. But just imagine some fanatic who thinks that humans are a pollutant, there are too many of them on this planet, let's cut the number down. Then even one person like that
could cause a global catastrophe if empowered by the kind of techniques available in any university or industrial lab. So that's my number one concern. And in a similar way, cyber attacks, as we know, can cause massive damage to infrastructure, the electricity grid in a large region, and things of that kind. And of course, we're very aware at the moment
of the other dangers stemming from misuse of IT, ChatGPT and all that, which have huge benefits. We benefit hugely from social media and Google and all that, but we all know very well what they can do in terms of
breakdowns in society, etc. So I would say that it's bio and cyber, which are the two new technologies we should worry about at least as much as we worried about nuclear 50 years ago. Paul, I'm curious whether you feel we're at or near a tipping point in any of these three areas that have been outlined, environmental, advancing biotech,
or cyber slash AI, do any of them in particular feel like we're near that tipping point or at it? It's very difficult in advance of a tipping point to say with any certainty that we are close to a tipping point. What I would prefer to say is that the speed of change is clearly rapid and accelerating.
And I think even those who have a great deal of focus and expertise in the machine learning enterprise have been astonished at how fast this progress has been in the last few months and years. What previously we thought would take 10 years appears to have taken less than two years. And so it's incredibly difficult to judge in advance
where the world will be sitting with regards to artificial intelligence in two or three years from now, making the life of the science fiction writer extremely difficult, but certainly making the life of any planners and people interested in resilience extremely challenging.
which means that those of us in the center and in similar areas looking at the problem of huge global catastrophic risk
have to use quite challenging methods that go beyond the traditional when we think about what the future holds and how systems that in the past worked to some degree are likely to break down and what the consequences are likely to be. And just to give you an example there, there is a strong consensus behind the idea that we've been very lucky not to have had a nuclear war in the last 70 years.
That system of nuclear deterrence had some level of stability, even whilst those of us who were skeptical were very concerned about it. But that stability, such as it was, looks increasingly precarious in relation to the speed of change and the acceleration. So there is certainly an idea, a very strong idea, that we will reach a tipping point and that that's a really big challenge.
I don't know if that's six months or 12 months or five years down the line, but what I do know is that the acceleration is rapid. When you refer to nuclear deterrence, obviously you're referring to MAD, Mutually Assured Destruction, which has basically kept us, ironically, safe for 70 years, correct?
So I would contest the word safe. What it has done has meant that those that have the capacity to wage strategic war have been more hesitant than they would otherwise have been. But this is not a safe system, and it's becoming, I would posit, increasingly unsafe for the various reasons we're talking about.
and also the way in which actually the vulnerability that lies at the heart of mutually assured destruction is itself quite an interesting dimension when it comes to all the existential risks that we face. It's very important as we look at the ways in which global governance can be improved to recognize at the very first level that we are mutually vulnerable.
And in that mutual vulnerability, there are opportunities to act collectively and cooperatively. But the vulnerability at the heart of mutually assured destruction can only get worse
if we go further and further away from systems that have significant control over the problems we face. We'll get to some of the solutions in global governance in a little bit. But Martin, I'm interested from your point of view, having written so thoroughly on AI and technology, you know what's called the law of unintended consequences. Tell me about it and how important
it is in trying to understand technology and its advancement for the good when it turns out that we can't always predict the outcome.
Well, I think cascading consequences are important. If we take the recent pandemic, it was clearly a medical problem, but certainly in this country, and I think in yours, it had a big consequence for schools and education, a whole generation of kids. So that's an example of where different segments of society, different parts of government,
had to be engaged in an emergency plan. And so that's going to be true, I think, of any global catastrophe. And going back to what Paul was saying about nuclear disasters, then the reason a catastrophe could be even worse would be if the world's food supplies
get disrupted. And so it could be that starvation is more of a problem than the original explosions and the immediate fallout consequences. So I think that's a concern, the unintended consequences. And of course, in the context of biological developments, I use the phrase bio-error or bio-terror. And if we think of pathogens, then the worst case obviously is an engineered virus which is lethal
and can spread: a version of, say, the Ebola virus which could be transmitted through the air or something like that, which would be worse than the kind of natural pandemics that we have had or think we are likely to have. That's one thing. But also, there's the risk of unintended consequences, leakages from labs. There are over 60 of the so-called level-four secure biolabs around the world.
And one can't be confident that they are all that secure. Of course, as you know, there's still a debate about whether COVID-19 was a leakage from the lab in Wuhan where they were doing gain-of-function experiments, or whether it was a natural transfer from animals to humans. So we just don't know the consequences of doing gain-of-function experiments, for instance.
Before I turn back to Lara, Paul, I've got another question for you around the issue of AI and emergent behavior that we don't understand.
Talk to me for a few minutes about where we are on AI, what concerns you about emergent behavior, and what we need to try and understand to make sense of this new world that we're creating. One of the features of machine learning as it has been developing is that the programs have been set up in such a way that they can learn and they can modify their behavior.
And there is inevitably inside that system a black box, in the sense that nobody, not even the programmers, understands how it's done, how it has achieved what it has achieved and where it is going. An example of this is in the large language models that we've seen in their latest iterations this year,
where there has been some apparent development in the way that they have returned answers that seem to go beyond the capacity that the original programmers believed they had. And of course, what happens then is that people attribute more to the machines than is actually going on. And it feels as if
we're interacting with beings that are close to us in terms of the weight and the capacity for thinking and for experiencing.
And I think what we don't know moving forward is how that translates into outputs that then have an impact on social cohesion and on the capacity for these machines to be misused. I mean, a very obvious area at the moment is the way in which machines can be used to spoof human beings.
such that all sorts of systems that we depend on for our integrity, be it legal or political or otherwise, or privacy, are encroached upon. And that's very concerning, but it's only one area of a whole variety of concerns that emerge as the capacity of artificial intelligence strengthens. This is what are known as deepfakes, correct? That's right. Explain that to me and why you're concerned about it.
Yes, so deepfakes can be images or they can be audio or they can be videos, they can be text. And the capacity of the large language models now is such that they can be instructed to mimic human beings and specific human beings. So it's possible now for somebody to ask a large language model to interrogate
the historical writings of an individual and then come up with text that is impossible to distinguish from the actual writings of that individual, such that one's spouse, for example, could be imitated and you would then respond to their email believing it was their email.
There can be videos that are so effective in mimicking the personality of an individual that it's impossible now (this is not the future, this is now) to know when you're seeing a video whether it is real or fake. So it means that all the ways in which we have used evidence to assess reality
are now up for grabs. It means that reality is very difficult to verify. If I could add to this: we don't know what's going on inside the machine, and it can have apparent insights which are beyond a human. The first case of this was the famous AlphaGo computer, which beat the world champion in Go by making a move
that baffled all the experts but turned out to be a clever one they'd never thought of. And so that was an example where it has special insights. But the downside is that there could be some hidden bugs in these programs which do emerge. That's why you get sort of odd answers from ChatGPT sometimes. And that's the reason why we
should not readily delegate decisions about human beings to machines. I mean, if we are going to be sent to prison, recommended for surgery, or even denied credit by the bank, it's not enough for us to be told that on the whole, the machine does a good job in making the assessments. We feel we ought to be able to contest
a decision and have a real human being responding, because there could be hidden bugs in the system, and there always will be, more and more as they get more complicated, even as they display superhuman capabilities in more areas.
This doesn't mean that they're a super brain, because no one is saying they need to be conscious to do all this. It's just that there are many, many different ways of displaying different kinds of intelligence. An example I would give is that back in 1900, if people thought about flying, they'd be surprised if anything could fly without flapping its wings.
Whereas, of course, planes develop quite differently. And similarly, there are kinds of intelligence which we think are specific to the flesh and blood in our heads, which could be mimicked effectively by some machine. But it doesn't mean that the machine has the consciousness which a human does.
Well, since you've opened the door to that, Martin, I'm curious, are we, I mean, there's the old term within technology of thinking machines, are we creating the potential for conscious machines in our understanding of that?
Well, I'm not sure we would ever say they were conscious, or we needn't say that, and still less that they have emotions. But I think we are entering a regime where machines, if we're not careful, can interact with the real world via the Internet of Things and interact with each other. In fact, there was an interesting paper in the Wall Street Journal just a few days before we're speaking about whether the different AIs could start fighting each other,
trying to gain territory themselves. So these are possibilities, but it doesn't mean that they're going to be in any way like human intelligence, even if they can surpass it. But if I can sort of make a special point as a scientist, I think that these AIs can, through their greater speed and their learning powers,
do things that no human could ever do. Just like they can play Go and chess better than humans, they may be able to work out the ten-dimensional geometry of string theory better and more quickly than any human ever can. So maybe the way we will know if something like string theory is a correct description of the microworld will be if some machine churns away, given the parameters of the theory, and spews out at the end the correct masses of the particles and things like that.
which are otherwise unexplained. So they can have a huge benefit. And we know already that they've been able to work out the shape of proteins better than any human. And so they're going to be very positive in many ways. And so the challenge really is to take advantage of all these benefits, but be aware that since we can't understand what's going on inside them, we have to be cautious about handing over power to them.
Lara, on the earlier point where I asked about tipping points, I know you're a volcanologist, but in terms of the environment and climate,
we've heard that we're near a tipping point on climate, but it seems, if you look at the polling, it is still such a removed issue on most people's agenda, behind things that are much more immediate, like can I put food on my table or do I have a job? Climate seems, at least among people
polled, to be one of those issues that, despite its importance, is low on the totem pole.
In terms of tipping points and how you persuade an audience to pay attention, what do you say? So that's the golden ticket right there. But yeah, I would agree. A lot of the work that we've done throughout the last few years has been trying to understand this exact issue. Why do some people take some risks more seriously than others? And why are some more motivational for people? Why do some have a higher intrinsic value? And how can we encourage that to be the case? And that's the case across most existential risks,
for a number of reasons. There are many challenges associated with some of these risks.
Climate, I think, we can bracket with many of the challenges that we have for many existential risks and global catastrophic risks in terms of communication and prioritisation, but it also has some unique challenges of its own. So first and foremost, many of us are not feeling or not realising that we are immediately seeing the consequences of climate change. It seems a distant problem, somebody else's problem. And that's one of the biggest problems, one of the biggest challenges to overcome.
Meanwhile, there are islands on the other side of the world that are being inundated by floodwaters. There are the strongest monsoons that we've ever recorded. The ozone hole is the largest for the longest period of time. There's many different consequences that affect all of us.
But because we're sat here in lovely Cambridge, I'm not feeling those immediate impacts. It becomes a distant problem, somebody else's problem. So that is one big challenge for dealing with some of these risks. But we are making progress on that, seeing small island developing states taking a much stronger stance in UN circles to advocate for their own risks.
With global catastrophic and existential risks, one of the biggest problems is that some of these are seen as long-term risks. These are seen as future problems that just don't feature in our day-to-day world. And of course, if your local politician came to you and said, "Oh, my manifesto for this election is that we're going to deflect all asteroids with any threat to Earth," of course, you might well not vote for that if your garden floods every year from the local river or something like this.
It's the way in which we want to prioritize those risks personally in our own context and our own environments too.
But not all existential risks and global catastrophic risks are long-term risks. Of course, we've got Paul here, who works on nuclear risk, and we are on the cusp of a potentially dangerous scenario. Volcanic eruptions: I mentioned the Tonga eruption earlier. That eruption happened when we weren't expecting it; we didn't know it was going to be that large. A large magnitude eruption with catastrophic consequences could happen at any time. Just like with asteroid impact, yes, we have excellent surveillance,
But we still have a large portion of the sky that we are unable to look at because it points towards the sun and therefore we're not able to see many of the objects. And it could just come from there. There could be a threat that emerges that we didn't see before. And suddenly we may have a very short window of time to kind of think about those risks. So some of these are more pressing than others may think.
Some of the other challenges are around how we prioritize risks in our own world. And one thing I like to talk about a lot in my research is the emotional value that some of these risks have.
So one of the analogies I like to use quite often is that we were all really emotionally affected by the images of the turtle with the plastic straw up its nose, about microplastics in our oceans. And microplastics have had massive traction in the aftermath of this. It was incredible to see that pace of having many single-use plastics banned and things like this. But no one's talking about the unseen risk, which is that plastics in our ocean
leach chemicals and compounds into our natural environments, which affect our ecosystems. We're seeing pods of dolphins and whales that are infertile because of PCBs that are long-lasting in our environments, staying there in the food chain. And there's this disengagement in understanding what some of these risks actually are, because they're unseen risks.
Similarly, it's very difficult for people to visualize what the risk of an AI might be, or these kinds of approaches. So it's a very complex issue to overcome: understanding how we can effectively talk about these risks and motivate people to take them seriously, and switch from extrinsic motivations to intrinsic motivations. But it's very much around building awareness, first of all. And a lot of this starts at the governance and policy level, trying to get those issues onto tables somewhere and taken seriously.
If I could add a very short footnote to that: it's certainly true that in order to make people care about the oceans or the environment, scientists can campaign, but we need charismatic influencers. And I'd like to give an example or two. The papal encyclical of 2015 was the first time a pope had said that
humans have a duty to the rest of creation rather than just having dominion over it. And he has a billion followers, and that made a big difference to forging the consensus at the 2015 climate conference. And it's been our secular pope, David Attenborough, who has done more than anyone else to alert people to the problems of ocean pollution and that sort of thing through his films showing, for instance, an
albatross returning to its nest and coughing up for its young not the longed-for food but a few bits of plastic.
That was an image which raised public perception of the risk of ocean pollution, just like the polar bear on the melting ice floe was an iconic image for climate change. And so if we can persuade the public to care by using these global influencers, then, of course, the politicians will take care of these things if they know they'll gain and not lose votes by making these long-term decisions.
Paul, you run a multidisciplinary team of folks at the Centre. Last time I did these interviews, I spoke to your noted philosopher, Huw Price, who was looking at the morals and ethics of existential risk. Talk to me for a moment about values and ethics and morals and how they play into trying to understand risk.
The important thing, I think, around this is to recognize that the relationship that we have to these risks and the relationship that we have to each other is part of what defines us as human beings.
And the question that interests me here is not only how do we manage these risks and minimize them, but also what does it mean for us as human beings, and for the
ethical dimension of our identity, if we are willing to set up and promote systems that threaten our annihilation. It seems to me to undermine our sense of who we are and of what a life well lived is, both individually and collectively.
So if I can speak personally for a second, having the opportunity to work on these risks and to be considering how best we respond to them and reduce them gives me a sense of purpose and meaning. And I think that's true for everybody at the center. And whilst it may seem to be very dark and we can be accused of being doomsters, actually it's a real opportunity to put in perspective
what a life well lived is. I think collectively, and I was having a conversation with one of our colleagues just before we came on here, I think collectively there is an opportunity for us to use existential risks as a way of asking the question, where are we going as a collective humanity? And
How do these existential risks point us in the direction of improving our relationships with ourselves and with the ecosystems of which we are part? And that ethical question, it may look as if we're focusing on the dark side of our humanity, but actually there's real opportunity here.
to ask some of the questions that human beings have been asking themselves for thousands of years, but in the light of really difficult challenges today.
I mean, one of the centerpieces, Martin, for the Foundation is human flourishing. So Paul is, in some sense, making that connection between the dark work you're looking at and human flourishing. What would your answer be to the ethics and morals of trying to understand existential risk? Well, I mean, I think we know how much we owe as a heritage to prior generations.
in terms of infrastructure and ideas and our entire civilization. And surely we owe it to future generations to pass on an equal legacy, and not a depleted and devastated world. And so I just think that's a very, very important
ethical goal that we should ensure that we don't pass on a depleted world to future generations. And one example is the diversity of life. I like to quote E.O. Wilson, who said that if human actions cause mass extinctions,
it's the sin that future generations will least forgive us for. We need to care about these things. And that's just one example. And I think if you want individuals to think long term and voters to think long term, they should think about the lives of young people today, babies just born, who will live, we hope, well into the 22nd century, and think long term. So I think to ensure human flourishing in future centuries
is a real ethical imperative and we are putting all that at risk.
if we don't give more effort to trying to minimize these existential risks. And centres like CSER are trying to do this. And in fact, there are all too few people around the world doing this. There are probably a couple of hundred people who are working full time on this. And the way I put it is that the stakes are so high that even if we can reduce the probability of these risks by one part in a thousand,
our work's been well worthwhile; we've earned our keep, because the stakes are so high. But I would say that's one ethical concern, which is global. But of course, more parochially, we've got to make sure that the benefits of science, as it's now more empowering, can be harnessed without at the same time releasing all the downsides of science. That's the big challenge for politicians. And that's what we
as a group can try and do, because we are academics, but unlike most academics we are trying not just to understand the world but to change the world as well. Lara, as representative of the youngest generation on this panel, what would you say?
I mean, I obviously agree with both Paul and Martin. And I guess for me, there's a moral imperative around making sure that we even just think about existential risks. I mean, even before the formation of CSER, there was only one group, in Oxford, thinking about this. And it's kind of mind-blowing that we weren't thinking about this sooner. And, yeah, as Martin says, there's this kind of
consequentialist view of this. It's like, okay, well, if we don't think about this, then the cost could be massive. We could be looking at, you know, the loss of billions of lives in a catastrophic food disruption, or trillions to the economies, whatever your intrinsic value is there. And so it kind of
makes sense that we make sure we think about this, particularly factoring future generations into those discussions, which I think is the most important thing in lots of these discussions. And that value of lost generations, billions and billions of people to come, and making sure that they have a world that is fair and just for them equally. And we've thought about this a little bit, even in the space of the volcanic risk research that we've been doing, thinking about the way we talk about that risk, for example: what could we do if something like this happened?
And they're not easy conversations to have. And they are very kind of focused, more precautionary. But then we kind of could see ourselves in a bind. If we become too cautious in our kind of approach to thinking about some of these risks,
then, as Martin says, we could find ourselves never really making much progress and not making that tiny contribution. So I think ambition and curiosity are really important when we think about the work that we do, thinking about the protection of future generations particularly, but also, you know, the protection of our world, and overcoming some of those basic principles that come up when we think about the ethics and the moral philosophy of what we do.
Paul, I know you're going to hate this question. I'm sure it's been asked of you before, but how do you sleep at night? Oh, it's been asked many times. CSER's number one question, I'm certain. It is fascinating, because I can imagine that if you don't think about these things very often and then you suddenly face them, you imagine losing sleep. I've been thinking about nuclear weapons and nuclear war
for 40 years, and I think I can count on one hand the number of nights that I've lost sleep. And the reason for that is because I've already integrated this thinking and the consequences into my life so that it's not some problem out there that is separate from me. It's actually part of who I am that I think about nuclear weapons and nuclear war
And I think there is positive opportunity as well as negative consequence. I believed when I was a teenager that we wouldn't last 10 years. I believed that there would be nuclear war before the end of the 1980s. And we didn't. And there is hope in that story because nobody predicted the end of the Cold War, and yet it happened. So...
So I sleep. I sleep because I have hope, even though there are so many reasons why life could get more and more challenging unless we do change course. But I do have optimism that we can wake up before it's too late and that we can strengthen our societies and be more caring and compassionate.
with each other, because that is not only the right thing to do, it's also the most likely way in which we will escape the risks that we study. So, Martin, same question to you. And if I hear Paul correctly, he is relying to some degree on our better angels. Well, I mean, I echo everything Paul has said, and I'm even older than Paul.
And I can remember the Cuban missile crisis, and that was a time when there was a severe threat. And indeed, when the history is looked at in retrospect, the threat was even larger than we thought at the time. And I personally think that had I known that the risk was as large as it proved to be from later records,
I would not have supported at all the policy of mutual assured destruction. I think the risk was too high. I'd rather have faced the certainty of a Russian overrun of Western Europe than even a one in five chance of the destruction of civilization. So I wouldn't have taken that risk. And I think many people feel the same. And I think the risk is at least as high now. And I think...
We should worry also about these other misuses of these new technologies, 21st century technologies, as well as the 20th century technology of nuclear science, because these are equally dangerous and they could lead to a sort of gradual disintegration of society.
if we allow ourselves to drift into a world where we can't maintain all the things that have allowed us to live in an integrated economy. Let's end on a positive note, not that the conversation hasn't tended toward that. Lara, with you, in terms of solutions, among them better global governance, what do you point to, and what do you think the journey looks like over the next few years
for the Centre? I'm pretty hopeful that we're on a positive track. And yeah, let's just focus around governance solutions and making sure that this becomes the most important agenda in some of these conversations. And I think we are seeing progress on that. I think the COVID pandemic and the war in Ukraine have really helped to put some of these discussions on the table. We had Boris Johnson quoting Toby Ord on the floor of the UN. And these are conversations that are now landing on the table in different UN departments too.
So I'm really hopeful that that's going to be something that's emerging much more strongly over the future. For me, in terms of some of the other work that I do in terms of thinking about volcanic risk, we're thinking much more practically about physical solutions. Are there things that we could actually do to kind of mitigate and prevent risk? So we're starting to think about advocating for better surveillance, monitoring networks for volcanic eruptions.
but also greater research funding and resources to understand which volcanoes potentially have the eruptions that might pose this problem. And even thinking about solutions: are there geoengineering solutions that we could potentially consider that might be able to mitigate or prevent the risks posed by large magnitude eruptions?
So I think there are lots of solutions on the horizon. And I'm quite motivated to see that this has become more of a normal conversation than it was even just a few years ago when I joined CSER. I used to say the words existential risk to people, and I work with many people in risk, and they would say, oh, what's that? And I was like, oh yeah, existential risk.
So this is already a really positive movement towards seeing that the public is getting a better understanding of what these risks are. And of course, that leads to this change, this kind of societal change, in understanding risk. The COVID pandemic is this really crucial moment in our history. It gives us a pedestal to say, look, we don't ever want to go through something like that again. We've just seen countless millions of people die, and they didn't have to. There are things that we could have done about that.
And using that as a very critical moment to advocate for progress in these areas, to getting these agendas on the tables and getting mitigation actions and preparedness actions towards some of these in place. So I'm really hopeful for that side of things. Yeah, I am optimistic about our future. Paul, for you: solutions, global governance, technology, where do you come down?
So there are various ways in which there are wicked challenges around global governance, collective action problems across cultures that are suspicious of each other. As somebody who has studied the really significant challenges of nuclear weapons management across countries, I'm not a blind optimist.
But what I would say is that there are reasons to be hopeful that we can find ways around them. We have found ways around them. I mean, just look at the incredible network of
UN agencies in which people from all over the world are cooperating to address some of the really significant challenges that humanity faces in a cosmopolitan manner. Just look at the way in which regulation and intervention have been successful in reducing some of the worst
ravages of natural diseases across the world. There are many things to be hopeful around. And I think that looking at existential risks enables us to see some of the hidden, undiscovered opportunities there are to look at the assumptions that drive competition between states.
And I feel optimistic that as we face these challenges in the coming decades, that we will find new ways of collaborating, even where there are different cultures with different assumptions, because
We need to be building ethics that involve listening and engaging, respect for plurality, and recognising that sometimes a culture that has a bit more emphasis on individualism and privacy has some strong answers, and sometimes a culture that is based more on collectivism and other ways of arranging our society has some benefits and advantages, and that we learn from each other.
Well, I think governance is getting harder because of social media, because more voices can be heard, which is a good thing. More strident voices appear on the scene. And also those in disadvantaged parts of the global south know what they're missing. They are rightly embittered by the unfairness of their fate.
and they're not going to be as content to continue with the status quo. That's going to be a problem. And I think also we are going to need more organisations along the lines of the World Health Organisation and the International Atomic Energy Agency to deal with enforcement of pledges on CO2 reduction for climate change, for instance, and also perhaps in dealing with the international conglomerates who control the internet.
We need to have international agreements. And above all, we need to involve China, because the fact is that the hegemony is going to move after four centuries from around the Atlantic to East Asia. And we've got to accept that and realize that any stable world has to involve bringing in China and related powers as well as the Western powers.
Last question for Paul, which is a must, on the Templeton World Charity Foundation's efforts with you. How important has it been? I know your project is coming to an end, but it will continue to live on. So just talk to me for a moment about TWCF's efforts with you and the mission, and how important it's been. So let me say from the outset that the Templeton grant has sat at the core
of the Centre's mission and activities for the last six years. And the reason for that is because it's so wide-ranging,
and innovative in the way we have been operating. We've been looking at things that go beyond the traditional academic, such as Lara's work on communications and the science-policy interface, as well as methods that are on the cutting edge of scientific progress, such as foresight work, which in the past has been rather underdeveloped. And Templeton's funds have enabled us
to operate within these areas in ways that will absolutely continue strongly because we have discovered the importance and reinforced the importance of these approaches.
So we are extremely grateful to Templeton and recognise that Templeton's contribution to the Centre will have years of legacy to it. Not only the individuals and their methods will continue and develop, but the Centre itself and the way in which we collaborate with others has been hugely impacted.
Excellent. You've been incredibly gracious with your time. It's been a wonderful conversation. I have only 600 more questions to ask, but I'm not going to. I think it's lovely that we get to end a challenging conversation with laughter. Special thanks to Richard Sergay and this week's guests, Lord Martin Rees, Dr. Paul Ingram, and Dr. Lara Mani.
If you enjoyed this expansive conversation, be sure to check out Dr. Ingram's inspiring and timely TEDx talk, Finding Purpose Within a World in Crisis. We've shortened the URL for you: you can find it at bit.ly/42sRNrg.
If you appreciate the Stories of Impact podcast, please follow us and rate and review us. You can find us on Twitter, Instagram, and Facebook, and at storiesofimpact.org. And be sure to sign up for the TWCF newsletter at templetonworldcharity.org.
This has been the Stories of Impact podcast with Richard Sergay and Tavia Gilbert. Written and produced by TalkBox Productions and Tavia Gilbert. Senior producer Katie Flood. Music by Alexander Filippiak. Mix and master by Kayla Elrod. Executive producer Michelle Cobb. The Stories of Impact podcast is generously supported by Templeton World Charity Foundation.