Welcome to Stories of Impact. I'm your host, Tavia Gilbert, and along with journalist Richard Sergay, every first and third Tuesday of the month, we share conversations about the art and science of human flourishing, artificial intelligence, and the world's most important science.
At the start of this year, it seemed as though AI shifted from a relatively niche technology known to industry insiders into a subject that suddenly captured broad public consciousness. Especially as ChatGPT burst on the scene as a tool available to virtually anyone, AI became a source of fascination, anxiety, confusion. It's on everybody's mind.
So what exactly is AI? Artificial intelligence is the ability of machines to perform tasks, such as learning and problem solving, that are typically associated with human intelligence. And every single aspect of how we live our lives may ultimately be transformed by this technology.
It's happening fast. The pace of change over the last six months or so has been staggering. This is a phenomenon, and when I say phenomenon, I don't necessarily mean a threat. But it is a phenomenon in how it's transforming society. It seems that it's going to be bigger than the internet, and in a shorter span of time. That was Dr. Mohamed Arangzeb Ahmed. He's a University of Washington professor of computer science and a practicing Muslim.
If AI's supercharged data processing capabilities are already transforming every aspect of society, then sooner than we think, machine learning could influence virtually every aspect of modern life, including education, the arts, medicine, the justice system, language, even religion. Dr. Ahmed says there are serious implications.
On one end of the spectrum, you're designing a calculator. On the other end of the spectrum, we have a scenario where a system is making life-and-death decisions, and everything in between. Now the question arises: we are delegating some of our responsibilities to these systems. What does that mean for human responsibility? So it's definitely an opportunity for people to come together and think about this problem, this phenomenon, and learn from each other.
Father Philip Larrey, Chair of Logic and Epistemology at Pontifical Lateran University in the Vatican, says that whether AI will be defined as a problem or as simply a phenomenon will depend on how humans answer: Are we going to use artificial intelligence for good or for evil? Are we supporting human dignity? Are we putting the human person at the center? Are we calculating the existential risk of things going wrong? Are we aware of the unintended consequences of these issues? Are we allowing humans to flourish in a spiritual dimension also? Or are we reducing the human being to something merely material or materialistic, which is a temptation?
Father Larrey and Dr. Ahmed are part of a multi-faith global coalition convened by the Vatican to consider how to ensure an ethical AI.
The effort kicked off in 2020 with the Rome Call for AI Ethics, which seeks to promote a sense of responsibility around the ethical development of AI technologies. And so far, signatories include major tech companies, world governments, universities, and representatives from three of the world's major religions.
Father Paolo Benanti, professor of ethics and moral theology at the Pontifical Gregorian University in the Vatican, is also part of that international conversation. He's an advisor to Pope Francis on artificial intelligence and technology ethics. Father Benanti says that while AI technology may be relatively new, the disruption it's bringing to the world is to be expected.
Every technological artifact is a displacement of power, is a form of order. Technical innovation has always changed society. What happened when we started to print books during the 15th century changed the face of power, changed the face of churches, and simply redefined in some way the way in which we can say we believe in something. And here ethics comes in. Every time that we inject a technological innovation inside society, we are shaping the order of that society, we are shaping the way in which things actually are. And this is an ethical concern. It's connected to justice, it is connected to human rights, it's connected to a lot of important things that are really crucial both for religions and for civil society.
Reverend Dr. Harriet Harris, university chaplain at the University of Edinburgh, agrees that new technologies always cause a disturbance in society. I can't think of any human inventiveness in history that hasn't brought great benefits and also great destruction in its wake. And I think we're now looking at a very big
example of that. And we need to call ourselves to the really positive usages and really dial up our sense of responsibility to guard against harm.
She believes that's why religious leaders, scientists of faith, and anyone with a concern for the future has a responsibility to call for humans to be centered in AI's development and deployment. As with other technological developments, we want to honor the good in them and the skills and gifts that human beings have developed in order to create things that can bring benefits. Creativity is really interesting in AI.
I suppose in a religious sphere, we tend to link creativity with divine or with inspiration or with something that's really quite special. And so it's actually fascinating to see that in machines that we have made. And I don't think we know what to make of it, actually. I just think it raises questions that...
are similar to but interestingly different from questions that have come up in a faith and religious arena for centuries. Father Larrey says that people in the religious arena have long prepared for the conversation about whether AI can be made ethical and benefit humanity.
Because for centuries, they've explored the timeless question: what is right and what is wrong? I think we all agree, whatever your religious affiliation, that you should do good and avoid evil. Aristotle calls this the fundamental principle of the moral life: do good and avoid evil. Well, what do you think is good? And what do you think is evil? As principles, the Ten Commandments come pretty close to a universal ethical system.
The Muslim community, and the Jewish community also, is in agreement with the ethical principles in that letter. The Right Reverend Dr. Stephen Croft, Bishop of Oxford and founding board member at the Center for Data Ethics and Innovation, adds: What needs a greater, more general understanding is the immense power of these technologies to reshape human life into the future. So I've been present in a number of conversations with research scientists and others involved in the development of AI, where they've said to the assembled room, you do not realize just how powerful these technologies are and how great the changes are going to be which are being introduced. And they've effectively said to us, please do not leave the key ethical decisions about how these technologies are deployed and governed to the scientists. We do not feel qualified as scientists alone to be making these huge decisions. There need to be other voices at the table,
because the issues at stake are so enormous for the future of work and family and institutions and good governance and communication. What can happen if AI doesn't adhere to universally accepted religious ethics? If it doesn't try to or isn't taught how to avoid evil? If scientists are left alone to make all the decisions?
How might an AI without ethics approach problem solving? Father Benanti offers a sobering answer: Once I tried to put a test to an artificial intelligence system, and I asked the machine how I can eliminate cancer from the earth. And I knew that the machine would give me the solution with the best optimized number. So what is the solution that gives me zero as a result? And the first answer of the machine was: kill all the humans.
Jewish writer and academic Dr. David Zvi Kalman, scholar in residence and director of new media at the Shalom Hartman Institute of North America, agrees that it's time for religious leaders to weigh in on technological development. Many religions understand themselves to be moral forces in the world. They're there to help people understand and incorporate into their lives moral ideas.
What that means in a technological age is that there is a responsibility to become a moral force for all of the various moral issues that the new world is throwing at us. Morality isn't restricted to the things that happen to have existed in the world a thousand years ago or 2000 or 3000 years ago. Morality covers all the things that one needs to make decisions about in the world.
In the modern period, that means making decisions about technology. New technologies are forcing us to confront moral problems, ethical issues at a rate that is really unparalleled in human history. And that's requiring us to make new moral ethical decisions at a rate that I think a lot of us find uncomfortable.
One of the important roles that religious institutions, religious communities, and religious leaders can play in the modern period is to help communities think through what it means to make moral decisions around issues which people find difficult. Part of what it means to be a religious leader, a religious community, or to have religious thought in the 21st century is to help people develop moral intuitions around technologies which they may have never seen before.
So artificial intelligence is a good example of this. AI is an issue which many people around the world are aware of and are anxious about. They're anxious about it taking their jobs. They're anxious about what it means for notions of humanity. One of the things that religious leaders and religious communities can do is help people engage in this project: here are the questions which we are trying to sort out, and actually do the difficult work of sorting out what are appropriate and inappropriate uses of AI in society, and where we need to ask important questions about how it will impact our lives. As Bishop Croft began his work with the Rome Call for AI Ethics, along with Dr. Kalman and others, he immersed himself in the study of AI so that he could better understand how to grapple with it.
I came across an astonishing paragraph which basically said that one of the products of artificial intelligence was going to be that we were going to have to spend a lot more time reflecting on what it is to be human. In effect, we were going to spend 30 years in an identity crisis because of the rise of intelligent machines, and therefore we needed to do some thinking about that.
I read that as a Christian minister and thought, well, I think we have something to contribute as the Christian church because we've been reflecting on what it means to be human for thousands of years and therefore we should be involved and contribute to that debate. What does Bishop Croft believe religious leaders contribute to the debate?
I think faith-based leaders can bring a broader perspective on the whole of humanity. They would bring a broader awareness of an ethical tradition and ethical decision-making, and they would bring, I hope, real deeper insights into humanity and human life. So, as it were, a broader, more general perspective to complement the scientists' very specialized and technical perspective. Chokyi Nyima Rinpoche, a world-renowned Tibetan Buddhist teacher and meditation master at the Ka-Nying Shedrub Ling Monastery in Kathmandu, Nepal, believes, like Bishop Croft, that the potential harms of AI not only invite but demand thoughtful consideration by world religions. He says: Life is going very fast.
Technology is going very fast. It is a danger, because humans become less and less powerful. We are giving our power to technology. So I think everyone needs to be very, very careful in creating this. We shouldn't rush to make this artificial intelligence. Religion is a medicine: be kind, help others, be patient, tell the truth.
So, the danger is nihilism. And nihilism and technology, if combined together, then all becomes ash.
And nihilism and technology, if combined, would make the world ash, says Rinpoche. So religious leaders like Rinpoche must address the existential danger, he warns. Rabbi Geoffrey A. Mitelman, founding director of Sinai and Synapses, shares Rinpoche's concerns and refers to the alignment problem,
which he defines as... The alignment problem is, is artificial intelligence going to help us achieve the goals that we humans may want? Or is artificial intelligence going to be able to take over what it, quote unquote, wants? And that's going to ultimately destroy humanity. Now, I don't think that's actually going to happen, but we know about these kinds of questions where we human beings say, I want X or Y or Z, and we don't necessarily think about
all of the other potential side effects and the potential unintended consequences. It's worth slowing down artificial intelligence, because it's on an exponential slope: the more we build on it, the faster it's going to go. And so we need to be able to at least pause and say, hold on a second, what might be some things down the road that we need to think about? That's why the Vatican put out the Rome Call for AI Ethics,
and invited leaders, including from other spiritual faiths, to join the Pope in a worldwide effort to make thoughtful decisions about AI technology, says Father Benanti.
What is inside this goal? First of all, we are looking at three impact areas: ethics, education, and rights. In these three impact areas, we would like to shape six principles. The first one is transparency: an AI system should be understandable to all. The second one is inclusion: the system must not discriminate against anyone, because every human being has equal dignity.
The third one is accountability: there must always be someone who takes responsibility for what the machine does. The fourth is impartiality: AI systems must not follow or create biases. The fifth is reliability: AI must be reliable. And the sixth is security and privacy.
The system must be secure and respect the privacy of users. So these six core principles are a nucleus, a core of the core. We found that those principles are really interreligious and also intercultural. And actually, you can imagine that if you ask everyone on the face of the earth, do you prefer a just or an unjust artificial intelligence, everyone will say just. Reminding every one of us that we would like to build something that is just and transparent, a simple instrument, a tool for human beings and not a weapon, is something that is at this moment collecting a lot of consensus among different religious traditions. Father Larrey expands on the genesis of the Rome Call for AI Ethics.
Pope Francis is probably the most universally recognized leader in the world in terms of morality, ethics, concern for the poor, and helping people to flourish in whatever aspect of life they have. So the people in charge of the tech are recognizing the need
to have an external input from people like Pope Francis or other experts in the Vatican in order to help understand the moral ramifications of the technology which we are creating. Pope Francis doesn't understand the technology. He's not an engineer, obviously, but he does understand the consequences of the technology.
Pope Francis often talks about putting the person at the center of technology. And so I think the main question to ask when we're developing a platform or we're moving ahead with technology is, is the human person at the center of this technology? So what is the motivation behind the use of technology? Is it the advancement of what I call human flourishing or is it something else?
Some of us can remember when technology wasn't moving ahead quite so fast, before the internet shifted the world from an analog to a digital interface. And we can remember how dramatically that shift changed how we think and how we behave. The latest technological revolution amplifies that transformation, with AI bringing us into relationship with a new, even more powerful interface, says Father Benanti.
This new age of evolution of automatization driven by AI has a new kind of interface: mind. And what's the implication of the human mind serving as the connection to this new technology?
It's also something that could be really problematic, because an AI can not only anticipate behavior but can also produce a sort of behavior in people. And that becomes the most powerful instrument of control. Dr. Junaid Qadir, a Muslim professor of computer engineering at Qatar University, agrees that one of the biggest dangers of AI is its potential to influence our minds. He says whether that influence is distraction, nudging behavior toward a particular outcome, or overt manipulation, it's a problem.
We need to be very critical in the way we use technology, because technology often creates new problems of its own. We are not very systematic in the way we analyze how we use technology, and with the AI revolution, the collateral damage is that our own attention and our own thoughts are now fragmented. So unless you are very critical and deliberate about how you use technology, you have so many notifications and so many algorithms that are at work trying to influence your behavior. Technology tends to concentrate wealth and power. So whoever has technology can have a lot of influence over other people. And we currently do not have equality in terms of where the technology is developed, who is developing the technology, how the technology is being designed, and for what purpose the technology is designed. Bishop Croft cites his concerns around the impact of that inequality: The ways in which big tech has begun to accumulate data and use it need a great deal more transparency and scrutiny. And also the effect of that
on actual behavior, not least voting patterns and the way that undercuts our democracy, patterns of public debate and public truth, and the way in which people's whole lives can be manipulated and shaped because of what people understand and know about us. Often the issue is not that people's data is being taken away from them, it's that we are being incited to give it up
without knowing the full consequences of that. We're seeing the big tech companies encroaching more and more on different aspects of human life and actually are beginning to influence and shape
the very essence of what it means to be human and to act in a free manner. And we need to take notice of that as a society because these technologies are so all-pervasive. The technologies have now reached a point where they're not just for science enthusiasts or the realm of science fiction. They're really beginning to impact everyday life and our economy. And we therefore face
as human societies really critical decisions about whether and how we will manage those technologies ethically. If we don't, there will be really significant consequences and also opportunity costs for human flourishing in the future. And if we don't engage
as modern liberal democracies with these technologies and how they're governed, then we'll find either they're driven by the marketplace and large multinational tech companies or totalitarian states in terms of the levels of investment that China is making in AI. And the whole world is going to be shaped by the way we respond to this kind of technology in the future.
AI shouldn't be viewed only in a negative light, though, says Dr. Qadir. It has great potential for positive impact and has already demonstrated previously unimagined problem-solving capabilities, producing wonderful results in diverse fields such as translation, recommendations, object recognition, image recognition, and so on.
And especially in just the last year or so, there has been a lot of excitement about what people are now calling general-purpose AI, or what some call foundation models, in which you do not train your AI for just one task. That has resulted in something like ChatGPT, where you have a model with many parameters, billions of parameters, that is learned in a self-supervised fashion.
However, a big problem with that breakthrough technology is that although it's a chatbot, people are using ChatGPT as if it were a search engine. They're accepting the text ChatGPT generates as a trusted source of information, says Dr. Qadir, and that trust has not been earned.
It will produce a very persuasive answer, but the jury is still out about how factual it is. Sometimes it has problems such as hallucination, so it's not perfect yet. Wait, what does Dr. Qadir mean? AI has problems such as hallucination? He explains: These AI models have their own biases, and they can just hallucinate as well and create facts.
Rabbi Mitelman, who referenced the alignment problem, elaborates on the hallucination problem.
There are a lot of hallucinations that come up. So when a language hallucination comes out of ChatGPT, for example, it could spread like wildfire and really create a lot of challenges
in the real world, on the ground, not just online. In Judaism, God creates the universe using words. The words that we use have impact. There could be a lot of concern about what happens if something is created, created quickly, and it spreads like wildfire before it can actually be corrected. And it's very hard to correct something once you hear it. Once you hear something, it's much harder to be able to say, wait, that might not be true.
Big tech companies have been the primary decision makers around what safeguards might be put in place to mitigate AI's potential harms.
While they might admit that more wisdom and oversight are called for, they're also moving fast, competing for market share. In fact, Dr. Kalman says that while global big tech holds in its hands one of the most powerful tools humanity has ever known, its creators are simultaneously still working to make sense of it.
There's long been a tension among all kinds of tech companies between the urge on the one hand to get products out as fast as possible and the need, on the other hand, to make sure that they're regulated in ways that are safe for people and safe for humanity. That tension is there for AI, perhaps more than for other kinds of technologies, in part because the companies that develop the software themselves understand it to be incredibly powerful. There was a statement put out recently by some major AI developers basically saying that artificial intelligence has the potential destructive power of something like a biological weapon or a nuclear weapon. So if you yourself understand the technology to be at that level of importance, with that amount of destructive capability, there's certainly an extreme need to make sure that it is being developed and deployed effectively, and in a way that doesn't hurt people.
Now, that's easier said than done, obviously, in part because this is a technology which I think even its own developers do not fully understand, and are still coming to understand based on its emergent behavior. There are a lot of questions right now about exactly what is going on under the hood within these technologies, their capacity for destructive behavior, and the ways in which they can be manipulated.
So there is, I think, importantly, a kind of laudable interest on the part of many of these companies to do the hard work of regulation. On the other hand, it's not clear to me whether they are fully equipped to actually do that work.
Even though he may not fully understand the technology, the Pope himself is eager to lead conversations about what exactly is going on under the hood. He personally experienced AI's impact earlier this year, when an AI-generated image of Pope Francis wearing a white Balenciaga puffer jacket went viral.
That silly picture might not seem like such a danger, but it may have highlighted to the Pope how vital leadership around ethical AI is, says Father Benanti. Everyone around the world saw the picture of the Pope dressed in that really fashionable puffer. And it was a fake image generated by an AI. So something like that tells us that we are putting such powerful tools in the pocket of everyone, but we haven't built the culture to handle that kind of tool. Where there is a smartphone, you can have a high-level university education.
Or you can use it as a weapon, and you can simply have the most sophisticated soft-power instruments or propaganda instruments. So probably in the future, we could have someone who takes AI as a source of authority, as a new oracle, as a new religious leader, you know, as the source of wisdom.
The other problem could be, for example, the use of AI for fake news, or for producing fake declarations from religious leaders. There are a lot of situations around the world in which religion is used as an instrument by someone to fuel a civil war. Can you imagine what fake news or a deepfake made with an AI could produce? It's unbelievable.
Because the potential harms are so grave, says Father Benanti, everyone involved in answering the Rome call for AI ethics is determined that their contributions to the conversation have a positive impact. All the stakeholders have to give their own contribution.
because it's a global problem. And for every global problem, we can only work with global solutions. Dr. Ahmed has a mixed outlook about the potential success of a global solution: I would say in the short term, I am pessimistic, and maybe even pessimistic in the medium term. And that's mainly because the technology is developing very rapidly. We don't really have the tools to regulate it, and it will take us some time to figure out its impact on society. But that said, I'm positive in the long term, because look at humanity's history. We have not only survived but thrived with technologies like gunpowder, and we have lived with weapons of mass destruction and atomic weapons for more than 70 years. So we'll eventually find a way
to coexist with and regulate these technologies for our betterment. Like Dr. Ahmed, Father Larrey feels there's room for hope eventually: I think in the long run, human beings tend to work it out. Look at nuclear bombs, okay? So far, we haven't destroyed the world. We certainly have the capacity to do it, but we haven't done it. I think that says something about human nature.
Calculating the existential risk of AI is something profoundly important for our generation. I think in general, human beings tend to make things around them useful for themselves and not harmful. And I think that we're going to learn how to do that with AI.
We'll be back in two weeks with a second discussion around AI, when we explore more deeply some of AI's potential benefits, the relationship between the divine nature of human beings and technology, and more from today's guests about how their faith traditions inform their ongoing work to integrate ethics into the development of the consciousness of machines.
In the meantime, if you enjoy the stories we share with you on the podcast, please follow us and rate and review us. You can find us on Twitter, Instagram, and Facebook, and at storiesofimpact.org. And be sure to sign up for the TWCF newsletter at templetonworldcharity.org.
This has been the Stories of Impact podcast with Richard Sergay and Tavia Gilbert. Written and produced by TalkBox Productions and Tavia Gilbert. Senior producer, Katie Flood. Music by Alexander Filippiak. Mix and master by Kayla Elrod. Executive producer, Michelle Cobb. The Stories of Impact podcast is generously supported by Templeton World Charity Foundation.