
The Perils & Promises of AI

2023/10/17

Stories of Impact

People

Chokyi Nyima Rinpoche
David Zvi Kalman
Harriet Harris
Geoffrey A. Mitelman
Junaid Qadir
Mohammad Aurangzeb Ahmad
Paolo Benanti
Philip Larrey
Steven Croft
Topics
David Zvi Kalman: Different religions are collaborating on questions of AI out of a shared humility and a shared confrontation with the unknown. This is an unprecedented kind of interfaith dialogue, aimed at ensuring that AI develops in line with moral norms and that religion remains a continuing force in the moral sphere.

Chokyi Nyima Rinpoche: In the face of AI's impact, religious and social boundaries should be set aside; people should unite to meet the challenges AI brings.

Philip Larrey: AI development should put the human person at the center, and different faith traditions should work together to ensure the technology develops in ways consistent with human well-being.

Harriet Harris: Religions are highly aligned on the fundamental questions of human flourishing, and AI should be applied to global problems such as poverty, hunger, and gaps in healthcare. At the same time, bias in AI algorithms cannot be ignored, and a diverse international body is needed to address it.

Mohammad Aurangzeb Ahmad: Diversity is key to resolving the ethical problems of AI; combining different perspectives can yield more comprehensive solutions. AI can also be used to identify and reduce bias in algorithms, with positive effects on society.

Geoffrey A. Mitelman: Bias in AI is a prominent problem; people of different racial, gender, and religious backgrounds need to take part in decision-making so that AI's inputs are diverse and fair. AI will also challenge human values, especially notions of individual contribution and intrinsic worth.

Steven Croft: The development and deployment of AI must balance ethics and innovation, with participation from the whole of society, so that the technology benefits humanity rather than deepening social inequality.

Junaid Qadir: Used wisely, AI can alleviate human suffering, but happiness does not come from satisfying individual selfish desires; the current utopian pursuit of desire-satisfaction is misguided, and the ethics of AI should be examined through Islamic ethics.

Paolo Benanti: AI should be treated as a positive tool rather than a weapon, used to promote human flourishing. It can make global knowledge more accessible and spare people from dangerous work.

Chapters
Religious leaders from various traditions discuss their approach to AI ethics, emphasizing humility, curiosity, and the need for interfaith collaboration in grappling with emerging technologies.

Transcript


Welcome to Stories of Impact. I'm your host, Tavia Gilbert, and along with journalist Richard Sergay, every first and third Tuesday of the month, we share conversations about the art and science of human flourishing.

In our last episode, we learned about the Rome Call for AI Ethics, which asked representatives from world business, educational institutions, governments, and religions to support ethical principles around artificial intelligence, including transparency, fairness, inclusivity, impartiality, reliability, and security and privacy.

We're back with the same guests again today, scientists and technology experts aligned with Christian, Jewish, Muslim, and Buddhist faith traditions. They'll tell us more about what they've learned in the years since they first responded to that global call from the Vatican.

They'll also share not only what concerns them about AI, but what gives them a sense of optimism and hope. And we'll hear more about what they bring to the international dialogue around emerging technology from their perspective as persons of faith. Let's begin with Jewish writer and academic Dr. David Zvi Kalman, scholar in residence and director of new media at the Shalom Hartman Institute of North America.

He shares why diverse religious thinkers come to the collective conversation with a shared sense of curiosity and humility. With AI, with technologies, you have religions coming together not because they are trying to share something which they already understand, but because they're all in the dark. None of them is an expert. All of them come with a kind of humility around the things that they don't know and the need to say something to make sure that they continue to be relevant and continue to be moral forces.

This is a kind of interfaith work, which I think we have never seen before. This is a kind of interfaith work that is really about a kind of open-ended exploration and coalition building around the technologies of the future. And I think we are going to see multi-faith work

around technology, which looks unlike anything that we have seen in the past, because it is coming out of this place of real grappling, real interfaith, multi-faith grappling around these core ideas, where it's not like the religions come to the table beforehand and say, well, I already know what I think. I don't know that there's ever been anything like that in human history, where you have religious traditions coming together on a matter like this. But it will only happen if religious traditions actually seize the opportunity.

It will only happen if religious traditions actually say this is something we want. We recognize the importance here. We recognize the peril if we don't do this. So I think there's an opportunity, but it has to be taken. Tibetan Buddhist teacher and meditation master Chokyi Nyima Rinpoche believes that religious and social divisions must be set aside so that there can be an inclusive conversation around the coming impact of AI.

We have no more time to say you're Buddhist, you're Christian, you're Hindu. There's no way to say I'm Asian, you're Western, this and that. This chapter is gone. The goal is to become one. We all need to relate to each other. We have no choice. Father Philip Larrey, chair of logic and epistemology at the Pontifical Lateran University in the Vatican, agrees that it is vital for diverse faith traditions to be part of that conversation.

We should all unite in order to make sure that we're developing the technology in the proper fashion to put the human person at the center. We're focusing not on what divides us, but on what unites us, which is that vision of the human person that I think almost all religions agree upon.

Reverend Dr. Harriet Harris, chaplain at the University of Edinburgh, agrees that people of religious faith hold common values around humanity. There's more commonality than difference between religions when it comes to the fundamental things about what enables flourishing, what enables us to live the meaningful, purposeful lives that we're called to live.

University of Washington professor of computer science Dr. Mohammad Aurangzeb Ahmad agrees that diversity is key to problem solving. If you want to develop, let's say, ethical AI, or systems that can address the totality of the different scenarios we are likely to encounter, having diverse perspectives is extremely important.

Even in my case, if I'm trying to answer these questions and I limit myself to a particular perspective, let's say my tradition, or a sub-tradition within my tradition, it may be the case that there are other answers I'm less likely to look into. And maybe those traditions have, quote unquote, a better way to address those questions. So I think that's the way to think about it: it's not just trying to find answers, but having conversation across different cultures.

Rabbi Geoffrey A. Mitelman, founding director of Sinai and Synapses, shares more about why diverse voices are so important. There is bias in artificial intelligence. The output is only as good as the input. And so being able to ensure that the input is good, and that the input comes from multiple different sources, is incredibly critical.

And I think that's a really important element of having diversity of ethnicity, diversity of gender, diversity of religion in these conversations. Who's at the table making the decisions?

Steven Croft, Bishop of Oxford and founding board member of the Centre for Data Ethics and Innovation, believes that while it's important to give developers the freedom to be creative, a range of viewpoints is needed in order to ensure ethical innovation. I think there's a need to balance ethics and innovation, really. Good governance will need to follow from it as well. Those technologies

should not be developed just by scientists or technicians. They shouldn't just be deployed by tech companies. There needs to be good ethical governance, good public engagement, and good engagement by the whole of society in order to determine how that technology is used and rolled out. That's essential to get the benefits of the technology, but also to prevent the harms that could come from it.

Since the Rome Call for AI Ethics was announced in 2020, how have Bishop Croft's views about AI technology changed? I suppose what kept me awake at night in those early months was the idea of intelligent machines which would take over the world, the stuff of science fiction. I discovered that that is still many years away, probably, and we may never get there.

But what then began to keep me awake at night and intrigue me was the rise of narrow artificial intelligence, the way AI is being used now in all kinds of ways, sometimes regulated, sometimes not very well regulated. And that seems to me to present a clear and present danger to the flourishing of human life and to justice and fairness.

The third dimension which has begun to intrigue me is the whole opportunity that the development of AI represents for human life and health and well-being. So the moral dynamics of not making use of this technology have also emerged alongside the dangers that it presents. Bishop Croft is not alone. Dr. Junaid Qadir, Professor of Computer Engineering at Qatar University, also recognizes AI's potential to support human well-being.

What gives me the most hope is that if we are careful and if we use technology in the right places, it can alleviate a lot of human misery. It can cure people. It can even enable wider access to knowledge. So that gives me hope that, if used wisely,

then technology can be a force for good. And Father Paolo Benanti, professor of ethics and moral theology at the Pontifical Gregorian University in the Vatican, reflects open-mindedness in his hope for the positive potential of AI to be realized. We have also to be really open to the new wave of technologies,

to simply try to face the technology as a positive tool and not as a weapon. Every time we use the machine as a copilot, as an instrument that keeps the human being at the center or helps us be better humans, we are using it as a tool for human flourishing. What are some of the benefits AI will bring to humanity?

Healthcare, as an example: now we know that with the lens of a smartphone you can simply make a diagnosis of an eye pathology. And probably there are places in the Global South in which it's really difficult to find an oculist or an ophthalmologist. Well, with a smartphone, someone not trained as an ophthalmologist, a general practitioner, a general doctor, can give the best care.

Or you can have a specialist diagnosis that is actually made in the best lab in the world. This kind of revolution can democratize a lot of things that now are, for the most part, just for the richest part of the world. It can be accessible to many more people, including those with lower incomes.

Reverend Dr. Harris shares Father Benanti's belief that AI might help remedy some of the most pressing problems humans face today. So AI for me becomes interesting in a hopeful way when looking at the immensity of some of the problems we're facing in the world. I mean, huge problems of poverty and hunger, of real inequalities in births and deaths,

in terms of, you know, societies with really depleted access to good health care, good water. And I think human beings have lived way beyond their means and created massive, massive problems. And we also are quite inventive at finding solutions to these problems.

And if AI is one of those routes, then I do see good potential and uses for it there. However, she adds, the algorithmic bias in AI cannot be ignored. We do know that the algorithms work according to all the biases of the people who have been programming the computers themselves.

And they might suffer from unconscious bias more than human beings do. So, again, I think that's an area where responsibility and very high awareness are required. I think a very multicultural and diverse kind of United Nations really

needs to be involved in order to try to counteract computer bias, because I think we've already been subject to it for a long time; that is a danger that's already a present danger. We need to do better. Human beings need to do better on unconscious bias, and we need to do better in terms of how we work with machines in relation to that.

Although he agrees that the bias baked into AI is a threat, Dr. Ahmad says that AI is also being used to identify bias. So on the positive side of things, I think things like algorithmic auditing of real-life decisions can have, and already are having, a very positive impact on society.

By that, what I mean is that because we are now able to capture data on a massive scale regarding human decision making, we can quantify the biases that all of us have. I see that in healthcare AI every day. And because we can quantify that, we can also de-bias it. We can recognize human biases, and we can also recognize biases which get implemented in these machines. So I think that's a great achievement.
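To make the idea of algorithmic auditing concrete, here is a minimal sketch of one common audit measure, a demographic parity gap computed from a log of automated decisions. Everything in it, the log, the group names, and the chosen measure, is an illustrative assumption rather than anything described by the guests.

```python
# A minimal, hypothetical audit: measure the gap in approval rates
# between groups in a log of automated decisions. The data and group
# names below are invented for illustration only.

from collections import defaultdict

# Each record: (group, decision) from a hypothetical decision log.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

# Approval rate per group, and the demographic-parity gap between
# the best- and worst-treated groups.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                       # {'group_a': 0.666..., 'group_b': 0.333...}
print(f"parity gap = {gap:.2f}")   # 0.33: a disparity worth investigating
```

A real audit would add real data, statistical testing, and domain review; the sketch only shows that once decisions are captured at scale, disparities become measurable, which is what makes de-biasing possible.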

Father Benanti anticipates that many other benefits will come from AI. All the linguistic AI can simply shape the knowledge that we have to the level of the receiver.

And so we can have all the knowledge of the past, all the wisdom of the past, all the books of the past, now able to be communicated to every woman and man on the face of the earth, in the language and at the level at which they can understand it. It could be a multiplier of human wisdom. We can have, really for the first time, a truly worldwide, every-language-accessible source of global knowledge.

And both these examples tell us that the power and the possibility of an AI-driven future are really, really great. We can really have better people if we apply AI for good. Another positive result? We can simply avoid having people work in dangerous environments or in dangerous jobs.

Because a robot or an AI can simply do things that before needed a human being to do them. Dr. Ahmad agrees that the future of work will shift, but it's complicated. AI is also taking a lot of the load, in terms of productivity, off of human shoulders, not just as individuals, but as societies and communities,

enabling people to pursue other things that they want to do. It's said that in, let's say, 30 years' time, a lot of jobs will be automated. The positive side of that is that humans will then be free to pursue other things. The negative side is that we then have to ask the opposite question, which is: will those jobs be replaced by additional meaningful jobs?

Bishop Croft questions whether the impact on the workforce will deepen inequality. I think one very, very big issue is the future of work, both with increased automation and with the fact that the economic effects of increased automation are going to fall very unevenly across the population.

So the economic questions are really significant, and how the workforce is retrained and other economic opportunities are created is going to be really critical. Rabbi Mitelman weighs the coming impact on workers in the context of his religious faith. I think another major question is: what is the value of a human being?

That is, in Judaism, there is an idea that everyone is created, the Hebrew is b'tzelem Elohim, in the image of God; everyone has infinite worth just by virtue of being a human being.

But in a lot of American society, or at least a lot of Western society, someone's value is what they're able to contribute to the world. And I think both things are true. I think there is an element of a lot of the value that we bring to this world being what we contribute. And there is also value in just being a human being.

Artificial intelligence, I think, is going to challenge what we are able to contribute to this world, in the same way that for many people, the way they contributed to this world 60, 70, 80 years ago was in industry, in being able to create. And then all of a sudden that got automated. And so what was their value? Where do they find that value now?

I think that's going to become a question, because it's not just the physical elements of human work that machines are going to replace over time. I think artificial intelligence is going to potentially replace, or at least change, the ways in which the creative class contributes to this world. Father Larrey is blunt in his expression of concern for laborers and for the creative class. Goldman Sachs predicts that in the next 10 years, 300 million people will be displaced by AI.

That is going to be very serious for society. We can't have that many people without jobs. Father Larrey presents another pressing question that demands an answer. Who has control over these huge AI platforms? Stephen Hawking, before he died, said that AI is a greater existential risk than nuclear bombs. And why did he say that? Because

atomic weapons are in the control of countries, but the big AI programs are in the control of the platforms. And that's a huge difference. I think the problem

is going to be malevolent actors. You know, what happens when we get a huge state-sponsored AI that has no scruples, that has no standards, that has no guidelines whatsoever? That's scary. That's scary. So I think the control issue is going to be important. Dr. Qadir underscores the importance of guarding against those who would use AI as a weapon. It's very clear that

you know, if it falls into the wrong hands, because this is a very important technology, it can be used for doing harm at a scale which was not possible before. You could automate things which, you know, previously people could not do just because they did not have the logistical means for doing that. So it goes without saying that you need to have the right purpose and intent to actually get

beneficial use out of AI. But also AI has many other perils. So for instance, modern technology also creates certain harms without the people actually aiming for that. For instance, you may just be trying to optimize for maximum user engagement or you may be trying to optimize for maximum views of something

But it turns out that there are certain negative externalities that emerge from this. And what are those negative externalities? Bishop Croft says... The social media companies amass a great deal of data about our choices and preferences and views. And artificial intelligence makes it possible to crunch that data and to design adverts targeted at particular sections of the electorate.

It's not all the fault of technology, but there is evidence of willful manipulation beginning to have an effect. We need to be alert as a society to these developments, which are eroding individual rights and privacies that are a very precious part of our lives. Dr. Qadir agrees that the way big tech can target individuals and feed them what they think they want is problematic from a social and from a religious perspective.

If we give people exactly what they want, essentially we are catering to what we would traditionally call their base desires. So that is not really ideal for the society. In the modern generation, we don't differentiate between base desires and higher desires, because we don't have this vertical dimension. Desires are desires, and in a libertarian world, we don't decide anything for the society.

As such, we say that we will let people do what they want to do as long as it's not overtly antisocial. If it's not harming anyone else, who are we to decide?

Dr. Qadir's Muslim faith informs his answer. Happiness does not come from fulfilling, you know, your individual selfish desires. In fact, this is explicit in the Quran: it is a virtue of the people of paradise that they are people who suppress their individual selfish desires. This is, I think, what also differentiates us from the

current dominant worldview, where we don't differentiate between good desires and bad desires, and we say: just give people what they want.

Instead, we have this transformative perspective, that we need to have this aspiration. So currently, with technology and with the worldview that we have, we are trying to create a utopia in the world by giving people what they desire. But this is misguided, in that it is producing things that are antisocial and that are actually eroding even the mental health of individuals.

So I think there are many questions that can be tackled by Islamic ethics, in the domain of ethics and in the domain of law. And we need a holistic integration of all of these fields. Dr. Ahmad says a code of ethics is the answer. We live in very interesting times. What I find really fascinating is that the questions that traditionally were left to philosophers and scholars of ethics

are now also becoming engineering questions. Somebody actually has to go and implement, let's say, a certain code of ethics, informed by a certain moral philosophy, and engineer those answers in code. Father Larrey says these concerns about technology are exactly why religious leadership is needed. People of faith should be asking questions of the engineers who are bringing this technology to life.

What exactly is it that you're doing? And how can we advise you, to make sure that this is used for human flourishing and not to the detriment of human beings? Are we supporting human dignity? Are we putting the human person at the center? Are we calculating the existential risk of things going wrong? Are we aware of the unintended consequences of these things?

Are we allowing humans to flourish in a spiritual dimension also, or are we reducing the human being to something merely material or materialistic, which is a temptation? Is your vision of the human being also transcendent, in the sense that there is more than just the physical attributes of the human being, that there are spiritual aspects that are even more important? And do you, as you are developing these tools, take that into account?

I think the religious leaders' responsibility is to remind the actors in AI and new technology of the complete vision of the human being. And of course, as a Catholic priest, for me that includes the relationship with God.

How to program AI with ethical answers to these questions and ensure that the technology works for the good of humanity is an open question. But Dr. Kalman believes that the engineers responsible are willing to create ethical AI, especially if there is broad public engagement in the effort, along with support from company leadership and from faith leaders and other stakeholders.

Most of the people who are working within AI companies feel like the really big decisions, the really big moral decisions, are kind of above their pay grade. That they're just working on, you know, some small corner of some small feature of some giant project, and the big moral questions are being made by executives. So that is to say that even within tech companies, many people do not feel fully empowered to actually make ethical decisions. That's why I think they're not always the place to turn.

I think the place to turn is instead the public at large and to help the public come to terms with this. That being said, once religious communities do have ideas about how AI ought to exist in society and the moral and immoral uses of AI in society, it is most certainly going to be possible to translate those ideas into code, into things that engineers within companies can actually enact.

And so I don't think there's a real gap between the ideas that are being developed within religious contexts and the ideas that are being developed in engineering and technological contexts, other than that someone actually needs to do the translation. Someone needs to figure out how to take the ideas and the morality that are being developed within religious contexts and make them actionable by engineers. However, the pace of technological development still presents a problem, says Rabbi Mitelman.

That's why he finds wisdom in his Jewish faith. How do we create an ethical AI? I think one thing we need to balance is how tightly we hold the reins. And I think we need to hold the reins more tightly than we might expect. Technology always outpaces ethics. And this is something that I think the Jewish community needs to be much more nimble at.

We need to be looking at these questions on different timescales, in different kinds of ways. Again, we humans have created a tremendous amount of technology that we can no longer fully control. Once something's out of the bottle, it's hard to put it back in. So I think that's something that religion is able to do. It's very good at self-regulation. Judaism and multiple other religions are about how do you...

self-regulate? How do you negate some of what you say that you want so that you can be doing something that is of a larger piece, that's of a higher value? I think it is hard to necessarily program an ethical AI, but I think if we are able to move things more slowly, I think that's going to be a very helpful way of at least encouraging a more ethical take on artificial intelligence.

Rinpoche looks to his Buddhist faith for guidance in the conversation around AI's creation. Buddhist ethics are based on being harmless. Harming is bad, helping is good. So any technology should be not only smart; it needs to be kind. If it is kind, then I think it's worth it. In Buddhism, kindness is what is most emphasized.

Kind intelligence is helpful for everyone, and intelligence lacking kindness is very dangerous. Compared to that, something quite kind but not so intelligent is still good. Something very intelligent that is not kind, on top of that, is really aggressive. With intelligent kindness, technology will not harm anyone. Technology will serve and help and guide everyone. Bishop Croft looks to Christianity as the organizing principle for what he and his fellow leaders should ask for and expect from AI engineers.

So the Christian Church has been reflecting on what it means to be human and on human flourishing for more than 2,000 years. And I think we have something very, very important to bring to this new conversation about human flourishing in an age of technology. I think the Church has a really important role as a voice for a distinctive ethical tradition which has shaped much of the way the world is.

And clearly in a global world with multiple faith traditions, it will be one voice within that, but still I would hope a very significant one. And it's those traditions about faith and justice and love and human dignity which I think need to be brought by the Church into the debate around artificial intelligence as it's currently being used and deployed.

The whole Christian ethical tradition, I hope, informs my perspective and some of that tradition is embedded now in many of our great multinational institutions and indeed national institutions, concepts of fairness and justice particularly. But the other way in which my Christian faith, I hope, informs my view of ethics and artificial intelligence

is the whole Christian reflection on what it means to be human, which is really at the centre of most of the questions about the deployment of artificial intelligence.

How do we really support human flourishing and growth? And the Christian tradition is very powerfully one which celebrates and affirms what it is to be human because at the heart of the Christian tradition is the faith and the belief that Almighty God, maker of heaven and earth, became a human person and lived a human life.

and did so in part to demonstrate what a fully human life is like and how human life is at its most fulfilling.

If the collective conversation from diverse faith leaders can help shape the development of artificial intelligence, says Reverend Dr. Harris, the possibility grows that AI will help humans flourish. If they really did operate in a fully multicultural, multi-faith, multi-ethnic,

multi-age, multi-abled, neurodivergent way, then we are going to get, I think, a more humane, world-serving kind of AI out of a properly diverse context.

While Dr. Kalman is committed to this effort and recognizes the work of leaders trying to define how to program AI so that it supports human flourishing, he also has a caution. This cannot be a short-term project or a one-time process. There are already religious thinkers who are trying to imagine what it means to develop an AI that could be compassionate.

What does it mean to develop an AI that is not just executing, according to some predetermined code, what it ought and ought not to do, but has some room for notions of mercy and notions of virtue and notions of compassion as well? I think

putting those into AI systems is both incredibly important and may actually prove to be quite difficult. And it may be the case, and probably is the case, that these are not the kinds of concepts that you can program into an AI and then walk away, in the same way that you can't just tell a kid about the concept of compassion and then walk away.

Having children, having human beings develop moral ideas is not just a process of telling them about it. It's a process of seeing it in action and correcting and correcting and correcting over and over again until they get it right. And even then, they don't always get it right.

So this is a long process. And I think part of what it speaks to is that a kind of commitment to AI within human society is an eternal commitment. It's a commitment that doesn't go away after the AI has been deployed. It's a commitment that needs to stick around, because we probably don't have the ability to develop a moral code for AI that works in all circumstances. And so we have a responsibility, not just now but into the future, to make sure that the AI systems we develop continue to operate in ways that are beneficial for humanity.

There's a variety of ways in which Judaism looks at questions of AI, says Rabbi Mitelman, which can help us navigate the coming AI age. The biggest part of this is looking at the relationship between God and God's creation, and humans' creation and AI. So God creates humanity, and then humanity doesn't exactly follow what

God wants humanity to do. It's the story of the Garden of Eden. And actually, it's a lot of the story of Frankenstein, of creating something and not being able to control it afterwards. There are elements of AI which we don't control, which we don't even fully understand in the same way that God created humanity and God doesn't totally understand the way humanity is going to work.

I think everybody agrees that's what we want to have happen. How we do that is going to differ from faith tradition to faith tradition, from business leaders to policymakers. Everybody's going to have different

perspectives and needs. The important part of this, I think, is everyone bringing their full selves to the table, bringing their own discussions, understanding that we are going to explore these questions a little bit differently. I think everybody ultimately, in my mind, has more or less the same goal, which is we want artificial intelligence to be helping humanity, not harming humanity.

Whether those tasked with making ethical decisions about AI are Buddhist, Jewish, Muslim, Christian, or from another religion, says Father Benanti, there is more reason to have faith than fear. As a religious man, I bet on human beings. I'm optimistic. I have hope. Because there is something inside the human heart.

And this place that every one of us has, that is the place where ethics arise, makes me confident that things will not happen without someone asking himself, "Is that right?"

I hope our special two-part series has been as enlightening for you as it has for me. Whether you're comforted by Rinpoche's idea of a kind AI, Father Benanti's optimism in human beings' thoughtfulness, Bishop Croft's openness to making space for AI's benefits, or the constructive perspectives of any of our other guests,

These conversations offer some guidance about where we might each contribute ourselves. Let's all play a role in learning more about what the future holds and in holding our technology companies accountable for ensuring that the tools they deploy in the world truly serve humanity.

Join Templeton World Charity Foundation this November 29th and 30th for the second annual Global Flourishing Conference, featuring dynamic dialogues with leading scientists, policymakers, practitioners, influencers, and advocates working on the scientific frontiers of flourishing. The Global Flourishing Conference is virtual and free for all. Visit humanflourishing.org to learn more.

We'll be back in two weeks with another episode. In the meantime, if you enjoy the stories we share with you on the podcast, please follow us and rate and review us. You can find us on Twitter, Instagram, and Facebook, and at storiesofimpact.org. And be sure to sign up for the TWCF newsletter at templetonworldcharity.org.

This has been the Stories of Impact podcast with Richard Sergay and Tavia Gilbert. Written and produced by TalkBox Productions and Tavia Gilbert. Senior producer, Katie Flood. Music by Alexander Filippiak. Mix and master by Kayla Elrod. Executive producer, Michelle Cobb. The Stories of Impact podcast is generously supported by Templeton World Charity Foundation.