Welcome to Stories of Impact. I'm your host, writer Tavia Gilbert. And along with journalist Richard Sergay, every first and third Tuesday of the month, we share conversations about the art and science of human flourishing. In the dozens of episodes we've shared with you over the last four years, you've heard stories of experts examining the science behind everything from bees to whales, video games to dance, education systems to communication networks.
Today, we are zooming out further, speaking with researchers who are exploring ways to improve how we do science and how we cultivate and educate better scientists. When you think of a scientist, what image comes to mind? Do you imagine a genius laboring solo in a lab, displaying little emotion as they logically analyze data?
To begin today's story, let's challenge that stereotype. Scientists are anything but dispassionate. Scientists are human too. We think of ourselves as being very analytical, very reasoned, very coldly rational.
But when it comes to our own theories, much of that falls by the wayside. And we are very biased in this sense. What I mean by that is, we look for confirmation. I've spent my last 20 years pushing a theory. So everything I look at, every paper, every datum, every experiment, I look at under the aspect: does it fit my theory? Well, if it doesn't, then typically I'll just disregard it, won't even read about it, won't know about it.
That's Dr. Christof Koch, a meritorious investigator at the Allen Institute in Seattle and the chief scientific officer of the Tiny Blue Dot Foundation in Santa Monica, California. Dr. Koch says that contrary to the idea that scientists are dispassionate data analysts, just like anyone else, they're invested in being right.
If you spend a significant fraction of your time and energy or your life, you know, let's say the past 30 years, you've defended a particular theory, then of course it's almost impossible to give up that theory. And you'll go out of your way, you'll invent all sorts of reasons why, you know, any data that's conflicting with your theory can't be right. It wasn't collected in the right way or there's an exception or this, that and the other.
And when a theory is challenged or disproven, how might a scientist respond? If it's sort of thrust in my face and I have to deal with it, then I'll try to invent reasons for this result that contradicts my theory. People, you know, are very creative. In fact, the more intelligent you are,
the more creative you are at coming up with excuses why this possibly cannot be the case. Dr. Koch's colleague, Dr. Daniel Kahneman, agrees that just like any other human, scientists have the impulse to defend what they're passionate about and what they believe to be true.
Nobody changes their mind about anything that really matters to them. So certainly not about politics, basically. Certainly not about religion. And not about anything else that matters to them deeply. When you have developed a theory, you're not going to give it up easily, because you care for it and it represents you. And you're going to stay with it. I wouldn't say come what may, but pretty much.
Dr. Kahneman is a best-selling author, a Nobel laureate in economics, and a Princeton University professor emeritus of psychology and public affairs. Among other subjects, he has long studied human bias, including its role in scientific thought. Dr. Kahneman and Dr. Koch agree that even highly trained experts are emotional, fallible humans, so not as impartial as they might believe.
Bias is really very insidious and when you have a theory or a set of hypotheses about a phenomenon and you design an experiment, you intuitively choose the experiment that is the most likely or highly likely to confirm your theory. I think the idealized model of science as being purely objective and so on is really false. It's misleading.
Dr. Lucia Melloni is a research professor in the Department of Neurology at NYU Grossman School of Medicine and a research group leader at the Max Planck Institute for Empirical Aesthetics. She agrees that confirmation bias is a problem.
We scientists would like to think we look at the evidence, we weigh it for what it is, and we draw conclusions in a completely sterile way. There is no bias. And why is this important? It's because we are human beings. Even if we are conducting science, we are not robots. We don't just look at the evidence and make a fair conclusion. Oftentimes, we look at the evidence from a point of view,
and from a privileged point of view, because we come to the table with beliefs. Dr. Melloni says that when scientists run experiments with blind spots of assumption and privilege, the resulting preconceptions have an impact, not just across research projects, but across entire fields of study. In many fields, from biomedical studies to psychology to others, it's becoming fairly obvious
that the results that some of the studies portray are not sufficiently unbiased. The samples that we collect are not sufficient, or they are not collected in a way in which we really sample the whole population. We only collect some people and not others, for instance, and that already has an impact on the results themselves. This is a fairly clear problem: the experiments that we conduct are not innocuous.
Because sometimes experiments that we do are much more likely to give us a particular answer. So the experimental design that we pick already can have an impact on what the conclusions are going to be. Dr. Koch agrees. People are very strongly biased by their theories.
And they are extremely reluctant to let go. We want to find certain things because we believe in, let's say, our theories. And so, funny enough, most of the time when people run these experiments, they confirm their hunches. It confirms their notion of what they think is going on.
Now obviously it can't be the case that every scientist is so brilliant that he or she always hits on exactly the right answer. Dr. Melloni recognizes the fallibility of her fellow scientists while defending their intentions. One thing that needs to be said is that there's no malice here. Science is not broken. But we have to admit that science is made of scientists. On some occasions,
even if we have done good data collection, good experiments, and we have even done the analysis in a proper way, it's not impossible that we then take the results and interpret them in a biased way. As you can see, there are many, many different ways in which this biased mind can operate. We all have that confirmation bias, and scientists are no exception.
That confirmation bias can alter the way in which we conduct science. And unfortunately, there is no systematic mechanism that prevents us from doing that. Without a systematic mechanism preventing confirmation bias, the answer is for scientists to challenge their colleagues' findings. But while that can be valuable and necessary, being on the receiving end of critique isn't painless, Dr. Kahneman says.
Scientists quite frequently disagree with each other. They disagree on the interpretation of facts, and in some cases they disagree on the facts. And quite often they disagree on the methods, so that they question the methods by which the other side is obtaining their facts.
So, this idea that scientists agree because they're all objective is, you know, that's a very idealized version of how it's done. In fact, you have people who are quite emotional, quite ambitious, and typically with an ego, and they are trying to advance the scientific contribution, and they sometimes
encounter somebody who disagrees with them, and they don't like it. Dr. Susan Fiske recently retired from Princeton University as a professor of psychology and public affairs. She admits that in her past, she was guilty of tough scientific critiques that probably caused her colleagues some pain. Early on, when I was more sort of ambitious and cutthroat, I really would take somebody's theory apart if it was competing with mine.
And it felt bad afterwards. You know, do I want to be somebody who goes around killing off other people's children? I mean, that's what you do in science, right? You say, oh, your model's fine, but it's not really right. And here's my data that destroy you. People talk about destroying each other, you know.
And it's a very unpleasant way to live, because some of these people might even be friends of yours. I mean, I've seen a number of people get taken out by other people who think their science is shoddy. And I don't think that's the way to do it. I don't think
mob science is right. When she grew unwilling to take out her fellow scientists, Dr. Fiske says she then squandered time avoiding scientific disagreements when she could have learned more through courageous dialogue. I worked at UMass Amherst for 14 years, and I had down the hall a colleague I really respected, this person in particular. He had a theory that was kind of contradictory to my theory, and
we never in 14 years sat down and said, so what do you think? Why are you getting what you get? Why am I getting what I get? Because, you know, it would have been unpleasant. Dr. Kahneman doesn't believe that the only way to challenge a colleague is through meanness, mob science, or the killing off of other researchers' children, as Dr. Fiske put it. He wants scientists to sit down with empathy and the willingness to listen and ask each other questions.
I think there should be a more realistic discussion, as part of the scientific method, of controversy, of disagreement, of how we handle disagreement, and not in an idealized way but in a practical way: how we handle disagreement between scientists who also are human beings, with emotions and with biases and with
all that comes from being a human being. I mean, we know that there are disagreements in every branch of science. And so in every branch of science, there is the potential for people who disagree to try in good faith to reach agreement. And so talking about controversy and how it can be resolved and how it should be resolved is a useful topic, I think. Dr. Koch says there's a better way for scientists to challenge each other's work.
It's called adversarial collaboration. So there's this other way of doing science where you take two competing ideas in any one space, whether it's physics or neuroscience, and you try to design an experiment where you agree: okay, we're going to do an experiment to try to do a match-up between theory A and theory B. And we will ask nature a question, an experimental question, and we'll arrange conditions so we can
try to come up with some prediction where the theories directly contradict each other, where theory A says this should be the case and theory B says, no, no, no, that should be the case. And then we can ask other people who are not involved directly in theory A or theory B to actually do the experiment,
collect the data, make the data available to everyone, evaluate it, and then present us with the result, at least in theory. So this is what it means, adversarial collaboration: you take two adversarial scientific theories that have different predictions about a similar question that they are both interested in. This is in contrast to how science is usually done, where the scientists who believe in or promote a theory often also do their own experiments.
But again, we're trying to remove as much bias as we can. Because if you really love theory A, then for various reasons, sometimes very insidious in your mind, you tend to prefer data and look at places where you think your theory will be confirmed. And you don't go to places where you have some intuition, maybe my theory doesn't apply over there, so I'm not going to look over there. I'm only going to look over here. This is just human nature.
Dr. Melloni explains how adversarial collaboration can help scientists avoid the confirmation bias that undermines their work. Adversarial collaboration is a process by which two adversaries, or people who have opposing views,
sit together, try to understand the theories of the others, try to come up with experiments, experimental designs, and predictions whose tests will be able to arbitrate among the different theories. And the key is that they agree ex ante that the methods are good to test their theories and that if the results go against their view, they will be willing to change their minds. Now, that's how it should work.
Now the question is: are people able or not to change their minds?
And who is supposed to change their minds? And the whole idea of having the adversaries together is to figure out methods that could disconfirm the hypotheses, right? The key of the process is the disconfirmation aspect. This process makes for better science and more informed scientists, says Dr. Koch. I would always emphasize: we are biased. So the more we know about our biases, the more actively we can counteract them.
I mean, that's the genius of science. We can go back and do a better experiment and understand the bias, or get other people to repeat the experiment. So that's why it's important to understand the exact nature of these biases: so we can minimize them, perhaps not totally eliminate them, but minimize them.
Dr. Kahneman says that the genius in adversarial collaboration is that it enhances research rigor and trains researchers to be respectful and productive in the way they disagree. When a scientist is moved to criticize another scientist, that is usually done with the gloves off, with
a fair amount of sarcasm, and inevitably the reply is going to be angry and defensive, and the rejoinder is going to be even less pleasant than the original critique. That's angry science, where basically people's egos are up and nobody is willing to concede anything, and the tone is unfriendly.
So adversarial collaboration is supposed to be an alternative to this way of dealing with disagreements between scientists. It should be done in a collegial manner. There ought to be norms of civility. It's an alternative to controversy, so that when scholars disagree,
they try to understand each other and agree on a joint experiment to resolve their differences, or on some other ways of collaborating instead of quarreling in public. Decades ago, Dr. Kahneman himself established what became the practice of adversarial collaboration as a viable alternative to angry science. I found myself in disagreement
with two colleagues who were also friends. They had published an article which disagreed with something that I had said, and I still disagreed with them. And then, instead of writing a reply or a comment on their paper, I wrote them and suggested we collaborate on a study to try to resolve our differences.
And that was the very first time that I did that. Dr. Kahneman's willingness to work with his friends, rather than tearing apart their research, led to what is today an increasingly employed and maturing scientific process. He says that adversarial collaboration is designed to challenge everyone involved. Now, in adversarial collaboration,
the experiment that gets executed was not designed to confirm your theory. It was at least in part designed by somebody who wants to disprove your theory, not to confirm it. And so it turns out that the experiments that get done do not correspond to what either of the participants would have chosen on their own.
And it turns out quite frequently that both sides are wrong. And my conclusion is, that's probably a desirable outcome. If they disagree, then either there is a clear victory or both sides are wrong, and both sides being wrong is not a bad outcome,
because it tells you that there is a problem, that the theories are incomplete and that their proponents need to work to respond to challenges that they were previously avoiding. And I believe that it is a good thing for people to realize that there is something about their own theory that they do not understand, because it overcomes the natural bias to run experiments that confirm your theory
rather than test it. That by itself is a good thing. That is, in a context in which you are committed to publishing the results, you publish results that are an embarrassment to you. That advances science, and I believe it's a good thing. Dr. Fiske adds that adversarial collaboration relies on: an honest commitment by two individuals or groups who have competing theories about the same phenomenon,
and who respect each other as scientists and trust each other to produce data that is probative. The benefits are not just in the research, but in the relationships between scientists. I think adversarial collaboration requires trust and respect, and it's never wrong to respect people. You don't have to like them, but if they're your colleagues in the field, it's a small field, you're going to see them the rest of your life.
It's better to respect them. And this sort of attack-dog approach doesn't seem to me to be useful. They might argue about what is the important variable, but if you trust people
and their judgments and their abilities as scientists, then there are ways to test who's right. So it makes it a more pleasant place to be. Scientists who commit to beginning an adversarial collaboration need courage, humility, and goodwill toward those they may be inclined to see as competitors more than colleagues, says Dr. Kahneman. It illustrates
several things. It illustrates a joint commitment to the scientific method, that several people can agree on that. And it illustrates mutual respect. And in that sense, I think it's a positive contribution. It takes a certain willingness to accept the possibility of your being wrong. It takes some respect for the other side. I mean, that's essential. And some degree of trust in the other side. The first condition is mutual respect
and agreement that the disagreement, if it is a scientific disagreement, should in principle be subject to resolution.
Even if it turns out that this ideal is rarely achieved, it's a very good thing to have when you start. Virginia Cooper is the principal advisor for the Listening and Learning in a Polarized World initiative and the advisor for discovery sciences at the Templeton World Charity Foundation, which has supported research into adversarial collaboration. She says that the process has grown out of not only a desire for an alternative to angry science,
but also because collaboration is actually a natural impulse. I do think that there is starting to be a shift, and you are seeing more collaboration. But for a long time, science, I think, has been about the lone genius. We were all looking for the Einsteins, the Watsons and Cricks and their one lab. And that doesn't really exist.
When you really think about it, most people are collaborating or want to collaborate. And so I think we're getting to a point now where people are trying to collaborate better. People are trying to share more. Adversarial collaboration right now is a very innovative way to do science. It's an interesting way to do multiple types of science. So they've done it a lot in physics, but it's now gaining a lot of traction in the social sciences, as well as things like neuroscience and biology. I think it adds a level of rigor to science,
because you have to be able to understand not just your theory, but the opposing theory. But the business of science, the science industry, is stacked against a collaborative and collegial spirit of research. There are powerful incentives for scientists not to trust each other, not to work together, not to be open about process, says Virginia Cooper. And that results in serious deficits across fields of inquiry.
So right now, the way it is, if you want to get published in a high-impact journal, you have to discover something. You have to prove something. You have to, you know, find the new patent. You have to do something that's going to get people interested. Unfortunately, the way science is done now, from that top perspective, the incentives aren't to do good science. You're not incentivized
to do all the sharing. You're incentivized to get the paper published in that top-impact journal. And that top-impact journal only wants results that are sexy. So you saying, I tried this and it didn't work, or somebody saying, this person said this, but I redid their experiment and I didn't get that, none of that is going to make it into the journal. So right now, science is incentivized for you to not talk about null results, not be very open about what you do,
because you don't want people to check into your work. So you don't really share your code and your data. You don't share your methods. And so people can't really do what you did. The only way we know about the science is at the end, when you publish the paper in the high-impact journal. We don't see anything along the way. And most people can't
get access to what you did in the first place. So that's how we have this thing that they're calling the replication crisis right now, where a lot of this science and a lot of things that we think are true, people are trying to do it and they're like, I'm not getting that result. Dr. Fiske adds, I think the efforts at replication are probably a good idea, but the trouble is the people who are doing the replications are motivated not to find the effect, because if they find the effect, there's no news.
So as somebody who does research that's occasionally the subject of a replication attempt, I feel like, let me be part of the process, because there may be some angle on it that is not communicated in the methods. And that's my fault if so. But, you know, if I'm saying you're doing these things in a way dramatically incompatible with the way we think about them, then maybe it's helpful to make that an experiment and say, Fiske's way and your way.
I think cooperation is a better way to instill progress. So, you know, if you want to shape people's behavior, you don't punish them. You hold out aspirational possibilities and reward them. And so I think, you know, a number of people have left the field because of the unpleasantness of all this. I mean, if somebody's cheating, making up data...
erasing data, then they don't deserve to stay in the field. But if somebody is, you know, making a good faith effort to study something objectively and they make mistakes, that seems different to me.
Dr. Melloni says that the cooperative process of adversarial collaboration reveals truths about human bias and human behavior, and about the impact on fields of study when, through a good-faith effort at objective study, irreconcilable differences are revealed and results cannot be replicated.
Perhaps shared collaborations will, on the one hand, surface those angry feelings that were always there, and on the other hand provide the mechanism to resolve them through, hopefully, a proper dialogue. If the disagreements are so far apart
that the views can almost not be reconciled, then it's also not going to have a significant impact. In the field where I work, which is mostly psychology and neuroscience and human neuroscience, there is significant evidence and many papers demonstrating these days that it's very hard to replicate behavioral studies
and that it's also even harder to replicate neuroscientific studies. Think of, you know, like if you conduct a decision-making study about political views and you conduct one study, say, in the Obama era, and then you conduct the very same study in the Trump era, beliefs have changed.
and therefore the contexts have changed. So even if you run exactly the same experiment, chances are that the results will be different, because the context has changed. So we have come to realize that, yes, the replication crisis is an issue. Despite the replication crisis and systemic industry roadblocks to developing trust and collaboration, the trend, if slow, has been toward greater information sharing, or open science, which enables adversarial collaboration.
Any degree of open science was harder to accomplish in the analog past, but in the digital age, it's a matter of choice and courage, says Dr. Fiske. In the old days, people could write to you and say, I don't believe your results that you published. And for seven years after your publication, you had to be able to send them paper copies of stuff. Wow.
So you might, you know, photocopy an entire banker's box of questionnaires that they could examine. And certainly you had to be able to send them your method, like the questionnaire you used to measure things. But open science means that you tell people from the outset what you did and they can find it online without asking you. But this means that from the get-go, other people can replicate you. Open science means you're taking a pledge that you're willing to show people
everything you did. Virginia Cooper agrees that open science is invaluable,
and that it is the key to engaging in adversarial collaboration that makes science and scientists better. Open science really is looking at how we can share science, share outputs, share code, share everything. So ways to make the scientific method more accessible to anybody that's looking for it. That could be sharing papers, documents,
Sharing data, sharing software, just sharing whatever you're doing as a researcher, sharing it with the general public and sharing it with the research community so that they can do things like replicate your work or just kind of like hold you accountable and see exactly what you were doing to get your results. And the reason that it connects really well with adversarial collaboration is because you have to be transparent in what you're doing if you are
working with somebody you consider an adversary, a group of people who are not necessarily agreeing with you, but answering your question from a different perspective. You need to be very transparent with each other and with the general public once the works come out, because what you don't want is for one group to be proven right and the other one to be proven wrong. And then the group that's proven wrong will say, well, you didn't do this, or you don't have proof that you did
this. Dr. Melloni says, oftentimes the open science movement is understood as making science transparent, open, and robust. How do you make science transparent, open, and robust? Transparency can be achieved, for instance, through pre-registration. That means that before running your experiment, you say what you're going to be doing.
How do you make your research open? By sharing your code, sharing your data, sharing your papers. How do you make your research more robust? By having pre-registrations. Now everyone can attempt to replicate, for instance, an experiment, or even take the data and rerun the analysis that the authors have proposed, and in doing so, see how much the results replicate.
Trust but verify should be our motto. And there's nothing wrong with that. The fact that we verify doesn't mean that we don't trust. We trust and verify. And why do we verify? That's the important key. Because we know that, implicitly and consciously, we can make mistakes. Science is not perfect. It was never meant to be perfect. Good science is science that is corrective. You make a mistake, you correct it.
That's it. And how do you detect that mistake? By planning replications, by doing pre-registrations, by trusting but verifying. But good science is not just open science that is transparent and robust, says Dr. Fiske. Good science comes from teamwork. And that's another reason that the research around adversarial collaboration suggests that it's powerful and productive. Science operates in teams.
And there's the excitement and the loyalty that you have when you're working with the same team of people.
And then there's the competition between teams. And so it seemed to us to be more constructive to be working together and leveraging each other's talent and energy and ability to run replications of each other. Dr. Melloni explains further. I would say that team science is a collective of individuals with different competencies who come together to approach a problem and solve it together. And the answer is in the collective intelligence,
meaning that the total is more than the sum of the parts. People who previously were only seeing the evidence from their particular point of view now have to negotiate. The scientific method could be improved. We have learned over the past 200 years that doing things the way in which we are doing them is good. But we have also learned that there are always ways in which we can do it better.
It's very hard these days to find a Galileo, somebody who is capable of doing everything. And part of that is not because those Galileos don't exist. It's because the problems have become more complex. When problems become more complex, the solutions also become more complex.
The days when we could just open up a body to look inside are long past. That was centuries ago. Now we need to do much more. We need to look not just within the body, but within the cell, within the bodies of different species, and so on. And that requires much, much more complex medicine. Is adversarial collaboration the future of all science? Dr. Kahneman says... I believe that it's going to spread.
And I believe if it becomes a norm that the angry science kind of research and enterprise is replaced by adversarial collaboration, I think that's a net improvement in the progress of science.
And I think this will happen. I don't think it will transform science, but it certainly could make things better for scientists and for science. The editors of journals have a very important role to play in this. That is, if they discourage angry science and if they promote adversarial collaboration, they can actually make it happen, by rejecting critical articles that do not
demonstrate that an honest attempt was made to get the collaboration going with the person that you are criticizing. Now, the person that you are criticizing may refuse to play with you, and that will happen, in which case it should be documented and shown to the editor and be part of the record that somebody refused to engage in an adversarial collaboration. That should be part of the editorial process, in my view.
Dr. Melloni says, I personally believe that to make science more robust, we should do adversarial collaborations in every field. This is not something that we should do just in consciousness research. I think this is a widespread need for the scientific community, period. Now, I just happen to work in the field of consciousness, and I just happen to love and be fascinated by the question. And here's where I think that, given the state of knowledge that we have in the field, given the number of theories that we have,
and the maturity, as well as the immaturity, so to speak, of the field, adversarial collaborations can make a significant impact in making each theory more robust. Now you make the implicit beliefs open to the community. You can test them, see whether or not they are falsifiable, first of all, and then see how the proponents of the theories react to those falsifications.
So now you have the scientific process open, right? Because oftentimes these are things that, you know, we scientists think about in our own office, right? We just think about these things, but we don't necessarily make them explicit. So the idea of this adversarial collaboration is to make explicit something that oftentimes is implicit. And even more so, even if you have a theory, you then have to make it measurable.
My strong belief is that it could be used for many more fields. What lessons has Dr. Melloni learned from researching adversarial collaboration? The first is that true dialogue and true openness are extremely difficult and far less prevalent than we think.
And that was the somewhat worrisome lesson. So the biggest lesson for me was understanding that it's not necessary for the theory proponents to change their mind. It's not necessary, not even wanted.
And I will tell you why. Because if we take science as something that happens through a process, and not at a destination, meaning at one time point, then you need many experiments, you need people to continue working on their ideas and to collect an enormous amount of evidence against their beliefs before they change their mind. If you update your mind with the first contrary evidence, perhaps it will be too soon. So it's important to understand that changing your mind is something that happens over time.
So it also told me a lot about what science is and how robust science is. Virginia Cooper respects the scientists who are committed to the process of adversarial collaboration, and she's hopeful that it's a process that will increasingly be used. You spent the last 30 years
working on this theory, you spent so many hours in the research lab, so much research funding, and you stood behind this for 30 years. And then you have to come back and say, yeah, all of my life's work is wrong. I think that's a really hard thing to do. I think it's incredible that you even got theorists signing on to it. I don't think I would have been able to do it,
to put my life's work up against somebody else's. I don't know if I could do it. So I think it takes a lot of humility, a lot of intellectual humility, to even be willing to put yourself on the chopping block like that. And I think it's wonderful that people are signing up to do it. So I'm really excited to see how adversarial collaboration continues to blossom and grow.
Before we close for today, we want to acknowledge the passing of Dr. Daniel Kahneman. Dr. Kahneman died on March 27, 2024, at the age of 90, after a long career as a writer, psychologist, and economist. We are so glad we got to share some of his work with you in this episode.
His contributions to science and scientists, to community and collaboration, were honorable and invaluable. He was an inspiration to us at Stories of Impact, and I hope he inspired you too.
We'll be back in two weeks with another episode. In the meantime, if you enjoy the stories we share with you on the podcast, please follow us, give us a five-star rating, and share this podcast with a friend. And be sure to sign up for the TWCF newsletter at templetonworldcharity.org. You can find us on Twitter, Instagram, and Facebook, and at storiesofimpact.org. This has been the Stories of Impact podcast with interviews by Richard Sergay.
Written and produced by TalkBox Productions and Tavia Gilbert. Senior producer Katie Flood. Assistant producer Oscar Falk. Music by Alexander Filippiak. Mix and master by Kayla Elrod. Executive producer Michelle Cobb. The Stories of Impact podcast is generously supported by Templeton World Charity Foundation.