Welcome to Stories of Impact. I'm your host, Tavia Gilbert, and along with journalist Richard Sergay, every first and third Tuesday of the month, we share conversations about the art and science of human flourishing.
Today is the final episode of our sixth season of Stories of Impact, and Richard Sergay is back in conversation with Templeton World Charity Foundation President Dr. Andrew Sarazin, looking back over the year of stories we are just wrapping up, and looking a little bit ahead at what's to come when we begin our fourth year of the Stories of Impact podcast. Let's begin at the beginning. Here's Dr. Sarazin.
So the mission of the Templeton World Charity Foundation is to fund innovations for human flourishing. We support research, we fund the development of new tools and interventions, and we also create movements to launch those innovations for wider impact. How does TWCF fit within the Templeton philanthropies? We have two sister philanthropies, the Templeton Religion Trust and the John Templeton Foundation.
As a family of philanthropies, we have some similarities, but also some differences. At Templeton World Charity Foundation, we take very seriously the instructions to create impact that is positive for people around the world. And so we invest in
those tools, technologies, interventions, behavioral change approaches, curricula, policies that are all designed to improve human life in a way that enables flourishing. How do you think TWCF distinguishes itself from other major philanthropies and foundations around the world? Yeah, I think fundamentally it's the vision of our founder, Sir John Templeton, that sets us apart. He believed in the goodness of human life, you know, in its beauty and power, but also its mystery, that we don't know a lot about human nature and ourselves and also the universe. And so it's this great adventure that he instructed us to set out on, and we're in the early phases of it now, to really discover the ways in which we can fulfill our potential, the ways in which we can flourish and enable others to do so. So that's a very distinctive mission. Other philanthropies will focus on poverty, on climate change, on medical research, on education reform, on criminal justice reform. Those are all really super important topics. But I think one of the things that was unique about Sir John Templeton's
broadest ambitions for the foundation was to really enable the search for meaning, purpose, and truth, and to show that it's possible to seek those and even to find some answers to those questions. So you mentioned the term human nature. You recently published a fascinating piece in Time whose headline is "How Technology Can Help Us Become More Human." What was your intent, and what do you mean by that? So I think there's this
narrative, certainly I feel it today, that we are really at the mercy of technology. Technology has sort of taken on its own life in some sense, that we're at the kind of receiving end of the deliverances of technological productions. So a new release of ChatGPT comes out and we're all kind of scrambling to figure out what it means for ourselves, for our culture, for institutions. And there are definitely ways in which human life has been fundamentally altered since I grew up, in that, you know, the way in which we form our relationships, fill our minds, train ourselves, build our identities, all of these key features of human life are absolutely mediated by technology, and specifically certain kinds of information technology, and more specifically the really sophisticated forms of computer technology that go under the banner of AI. And so that default mode doesn't have to be the way in which we operate in the world today. There are other ways of thinking about the development of technology where we can use these tools to actually strengthen human capacities, as opposed to either ignoring those human capacities that are wonderful or even actively eroding them. And so you can think about kind of any stage of human life, let's say adolescence. Adolescence is an obviously important time. It's one where we know that humanity as a whole is struggling during that really key developmental window: rates of self-harm, of mental illness, of a lot of definable clinical conditions are rising in that age group. And so it's of intense interest.
But adolescence is a time of exploration, of developing identity, of learning your own boundaries. And so against a sort of rising tide of something like anxiety and depression, we can actually design technologies that mitigate those issues. And I give an example of a really amazing technology called MindLight, which seeks to improve self-control and therefore reduce anxiety in adolescents.
And through careful study, this video game system has been shown to reduce symptoms of anxiety with effects as powerful as the standard of care guided by a therapist. It's this really cool system where the video game mechanics are set up such that your character has a light in the game attached to their head, and the light is given feedback through a headset that the user wears, so that the calmer the person is, the more they control their anxiety, the bigger the light gets. And they're able to explore this world, to move on to other levels, to vanquish monsters and so forth. So it's actually a dynamic neurofeedback loop that's set up such that the user strengthens self-control, reduces anxiety, and does that in the context of a fun, immersive experience. So that's just an example of a kind of technology focused on a key developmental window, that has proven clinical effects, and that strengthens a skill that we need to navigate the world. And it should be one of hundreds of different kinds of technologies to do so. And I assume this is a good example of what philanthropy and foundations like TWCF can support to help
technology and humans coexist. Yeah, it's not just to coexist, it's to bring the technology in service of humanity. I mean, I think that's the key. It's a creation of ours and we should optimize it for us, not optimize it for the machine itself or for engagement rates or advertising dollars, you know, which is really how the internet operates today, but actually make these technological systems serve human experience in a very real way. You know, and you mentioned philanthropy as kind of a driver of this. And I very much believe that, you know, philanthropies are best suited when they take on a market failure. And this is certainly a classic market failure: there's a lot of research on things like forgiveness or gratitude, humility, curiosity, wisdom, things that are really the substance of our life, the things that we think are super important, the things that we want to enable in our kids, the things we hope our communities value. And so they're so important, yet they don't really filter out into the many, many different kinds of cultural products that our society creates, including one of the most important cultural products, technology. And why is that? Why do you think that is? Well, it's because there's a gap between research and practice, or between research and policy and culture, where philanthropy, I would argue, actually has an obligation to step in.
So, you know, in part, there's a kind of supply-side issue: the people that are really expert in gratitude or in self-control are not the ones building these products. What excites them and drives them and motivates them is not necessarily cultural production or artistic production, and certainly not technology creation. And so there's a bit of a gap. You know, in the medical field, they call that a valley of death, where you've advanced a concept, a tool, a technology, an idea enough to show that it may have some relevance, but you don't satisfy the concerns that people with real resources, businesses, and investors have about investing in that technology. And so it's called the valley of death because it's where most good ideas go to die. There's a pilot that's done, but it doesn't really advance beyond the pilot stage. And the same is true for technology that's been designed purposefully for the good of an individual.
I think also the paradigm that we always use in evaluating technologies is the paradigm of efficiency, right? So if something makes our life more efficient, if it saves us time, then therefore it is good, because then we will fill our time with other things. Well, I think the history of labor-saving devices, useful as they are to reduce drudgery and so forth, shows that human beings are also really, really good at inventing other ways to use our time, which are not necessarily good for us. So our ability to fill our time with other things is almost infinite. And so there's a bit of this fascination with efficiency as the kind of key goal of any technology, that as long as it does that, you know, I'm going to use this large language model chatbot tool to write my emails. Well, great. And it might help you, you know, in a narrow slice of life, but will it make you a better communicator? Will it help you prioritize where to focus, on real relationships? Not necessarily. And so I think there's this sort of false narrative of efficiency as the primary end.
Where is your worry meter at this point about tech and these sorts of human value-based issues? Are you an optimist, pessimist? Where do you come down? Well, I mean, over the long run, I'm an optimist for sure. I think we will navigate these transitions over a long period of time. I'm probably more neutral to pessimistic in the short term, because I think there are a lot of things, specifically around large language models and sort of the chatbot technology, which will only actually further entrench large interests, government or commercial interests, rather than offer this kind of democratization tool. It's really, really clear with the kind of new chatbots that have been released in the last six months that it takes very little to copy the technology; you can copy it almost instantaneously. So once one system is created, you're going to get dozens and dozens, hundreds of these systems running around. And so really, human attention will still be the most limiting resource, and therefore it's going to favor those who have large interests and the ability to command human attention already. So there's a real equity issue that needs to be resolved in the short term. I think there are real issues with democracy and misinformation. These systems will create content at a rate more rapid than we've ever seen digital content being created in the past. And it will be almost indistinguishable from human-generated content. So there are some huge things to worry about in the short term. And if we get through those things, I think I'm a big believer in technologies for knowledge production, for science, for individuals. But I think, you know, I'm still pretty nervous about it. Let's stay with science and technology for a moment.
TWCF has been funding an amazing project, called an adversarial collaboration around consciousness, that came to something of a conclusion over the last week. First of all, tell me what adversarial collaboration is in your mind, and why did you fund it? Yeah, so adversarial collaboration refers to a specific method of doing science.
That is kind of like what we all thought science was about. You know, people sit around and have these great theories, these hypotheses about how the world works, to explain something. And then, you know, other scientists would disagree with them and they would each do experiments. And over time, one set of experiments would support one approach to the world, and one explanation would prove to be more reliable or more precise. And so one kind of theoretical approach would be seen to be more truthful than others. The Copernican revolution was like this: a heliocentric solar system versus an earth-centric solar system, two different paradigms, and the data showed that one was correct and the other was not. So that's the way people thought science worked. It turns out, actually, science really doesn't work like that, especially for newer areas, or areas that are harder to theorize about, or areas that are harder to generate data about. And one of those areas is the topic of consciousness,
which is to say our subjective awareness. How does mind emerge from the brain? That word entered the English language in the 16th century, and it initially meant something that was private; you had a secret. Since then, you know, the term conscious, to be conscious, can mean many different things. We can be self-conscious. We can be conscious of something. We can raise our consciousness. There are a lot of different ways in which we use the term. And so there's some fundamental language and conceptual hygiene that's necessary to approach the topic anyway. But it turns out, if you asked, you know, how does mind emerge from the brain?
There are roughly a dozen scientific theories out there, maybe a little bit more than that. Many theories of consciousness are not scientific; they're metaphysical, in that they make assumptions or claims about fundamental reality, but they're not testable. A smaller fraction, about a dozen, are actually scientifically testable. And we looked at that mix of different theories and said, hey, look, there's actually a large number of these theories and they're not getting resolved. Some of these have been around for 20 years, 30 years, even longer. Business as usual within the research community was to have them all go off and do their own things, making incremental advances, doing small experiments, with really a lack of progress. So what we did was to create a mechanism to test these different theories head to head with each other, using this mechanism called adversarial collaboration, where you get adversaries to design a set of experiments which actually differentiate between the two theories. So a common set of experiments that are repeated by independent parties in multiple labs around the world, so that, you know, team A is not doing the experiments for team A, and team B is not doing the experiments for team B. There's actually a third party that is doing the experiments, and then all the results get pulled together. And the idea was that this will cause the theories to evolve more rapidly. It'll cause revisions to theories, because both groups agreed in advance
to the experiments. And that's what we saw just a little over two weeks ago, I think, when the first set of adversarial collaboration results were released in a preprint archive, testing one theory of consciousness, the global neuronal workspace theory, versus another, integrated information theory of consciousness. The project was a success in the sense that the experiments were completed, and the experiments did identify some areas where there was support for integrated information theory: at least two of its predictions were shown to be supported by the data, whereas the results for global neuronal workspace theory were more mixed in this context. So the project was a success in the ways in which the theories either found support in the evidence or did not. And what that's going to do is cause the individual teams to revise their own thinking, because in no case was one theory shown to be totally accurate in its predictions. And so, you know, there's going to be a rapid evolution in explaining the data that came out of that. And we expect
several other studies within the next year that will cause even further revision. So, you know, the idea is that science makes progress because there's iterative confrontation with reality that causes you to change your own theories about how the world works. And from that point of view, the project has been a tremendous success so far. And we look forward to a number of other research findings that will emerge over the next year. What do you think surprised you the most?
I thought what was really, really cool was the way they wrote the paper. Now, it hasn't been peer-reviewed yet; it's been submitted to some really high-profile journals. But what was really interesting was the way that they wrote the paper. Remember, there are three teams: the kind of experimental team, who didn't have a dog in the fight, so to speak, plus team A and team B. The whole group wrote the introduction section, the method section, and the results section of the paper. Each individual team wrote its own section for the interpretation, which I thought was a really wonderful way to have one paper that everyone wrote. But they didn't have to agree on every single detail, and that was okay. And I think science should have more of that. It was a really delightful surprise, because, you know, there were skeptics initially who said that if you did this, no one would ever agree or sign up to put their name to the final paper that came out of the data. And that was not true. You know, this team in particular coalesced around a paper. They had different interpretations, and they accommodated those differences in a way that allowed them to submit one article. So that, again, was a really pleasant surprise. And, you know, there were skeptics as to whether that would even be possible.
Changing gears, one of the major initiatives this past year has been around the issue of forgiveness. Tell me a little bit about it and how you connect it to human flourishing. Yeah, I mean, forgiveness and redemption and other ways in which we deal with the real world in productive ways are so important, because it's one thing to sort of idealize a state of flourishing where, yes, we're going to, you know, prosper and have joy in our lives and laugh and, you know, find meaningful moments. But anybody that's lived any length of time knows that, you know, we don't live in that world. And so there's this real sense in which to flourish means to be able to navigate imperfection and trauma and hurt together,
and the bad things that happen in people's lives and move past that. And forgiveness is this amazing example of a mechanism, a tool that we have to convert these really, really awful experiences into positive experiences. I sometimes say that it's a sort of emotional alchemy or psychological alchemy where you convert psychological garbage into gold. And that's really what it is.
And it seems to be absolutely essential for human life. Archbishop Desmond Tutu said that forgiveness is like oxygen; it's freely available to all of us. And so in April, results were released for the world's largest forgiveness intervention trial, in five countries. The intervention was a two-hour workbook that doesn't require a therapist. It uses a model called REACH, developed by Professor Everett Worthington and colleagues, that leads a person through a process in about two hours. The basic mechanisms are that, you know, a person decides to forgive, commits to forgiveness, so it uses what are known as cognitive mechanisms, but then also goes through a series of exercises: to visualize forgiving a person that has harmed them, to give a gift of forgiveness, an altruistic gift. And this is all designed to engage the emotional centers in the brain. And this 5,000-person trial in five countries showed that this two-hour intervention had significant effects on symptoms of depression and anxiety.
So it's the world's largest trial ever for a forgiveness intervention. The number of participants in the study was more than all of the combined participants of all the forgiveness trials that had been attempted up to this point. And the foundation is super committed to taking the results of this trial and finding ways for practitioners, whether they're mental health workers or pastors or imams or rabbis or teachers,
to use the results of this study to benefit the people that are in their lives. As you look over the horizon for the foundation, what do you think are some of the new frontiers worth exploring? That's a really good question. I mean, there's another initiative that we didn't talk about today, but it's really important: it has to do with polarization in the world. I would argue that polarization is an existential threat in the sense that it makes all other things harder. Polarization is where you get entrenched positions on the different sides of an issue, and new information, or even time, does not resolve those differences; it only exacerbates those differences. It's a collective phenomenon. It's kind of like a pathology, a cancer that affects groups of people. And I think right now we have about 20 active projects looking at the mechanisms that drive polarization at a fundamental level. So we're not interested in a particular topic, but really, what are the underlying mechanisms? Is it trust? Is it identity? Is it the dynamics of how people communicate, or other basic features? So that we can look at almost any situation, any place that people are polarized, and specifically identify what the causes of that polarization are, and therefore what interventions may target some of those causes to unravel it. So I think that's a big, big, big area. It will only get worse, you know, per the discussion about artificial intelligence. So, hugely important, I think, continuing to grapple with the threats of artificial intelligence for humanity. Why I wrote that article was because I think that's a really important direction for the world. And I think the foundation has an absolutely unique role in standing for some of the things that make us most human and are worth celebrating. And therefore, investing in research and practice to institutionalize those is absolutely part of the future. Broadly on human flourishing, I mean, you pivoted the foundation a couple of years ago to focus on this,
what some may call an ethereal term, human flourishing, but you successfully transitioned the entire foundation to work on this issue. It's central to its success in many ways now, on all sorts of different levels: science, technology, virtue. We're finding our way through this. I think the goal, or that sort of fundamental belief, was that we should have goals for the foundation, and society should have goals, which are really about the positive production of outcomes that align with what we know to be true: that life is worth living for a reason, not just because we have heartbeats, although that's kind of essential, but because it's filled with moments and relationships and experiences which give us meaning. And so that's what the whole idea around human flourishing was about: to try and take what is admittedly a vast space, but make progress in identifying some of those key factors. And we find that there's even more and more interest in the topic, really, as time goes by. It's a way to speak with religious institutions, which have really been thinking about and practicing elements of human flourishing for thousands of years, because, you know, religious institutions and religious practices are oriented towards, you know, things that are beyond the immediate, more about, you know, the ultimate. So it's found, you know, a way to integrate work on religious institutions and religion and belief with, you know, medical research, with psychological research, with the best science that's available on all those topics. And so it is a broad tent, but I think
It is meaningful. And we're seeing, you know, more and more interest among researchers around the world and policymakers around the world in taking on this mantle of human flourishing, talking about it, and mentioning it in policy speeches, in major addresses, in the scientific literature. So we do see the seeds of that.
And that's going to continue over the next, you know, hopefully decades. And you'll be holding your second annual global conference on this. That's right, November 29th and 30th this year. It'll be virtual, with programming over two days. It's a moment for this growing group of researchers to come together, but also people who are interested in using science to benefit their lives will be part of the program as well.
We are always grateful to be in conversation with Dr. Sarazin, whose innovative leadership over the last seven years has continued the vision of Sir John Templeton. We are taking a couple of weeks off for the summer, but we'll be back in September to start our fourth year of programming. We can't wait to share new, inspiring stories about human flourishing.
If you enjoy the stories we share with you on the podcast, please follow us and rate and review us. You can find us on Twitter, Instagram, and Facebook, and at storiesofimpact.org. And be sure to sign up for the TWCF newsletter at templetonworldcharity.org.
This has been the Stories of Impact podcast with Richard Sergay and Tavia Gilbert. Written and produced by TalkBox Productions and Tavia Gilbert. Senior producer Katie Flood. Music by Alexander Felipiak. Mix and master by Kayla Elrod. Executive producer Michelle Cobb. The Stories of Impact podcast is generously supported by Templeton World Charity Foundation.