
Evidence-Based Charity and Moral Psychology

2024/12/17

The Michael Shermer Show

People
Joshua Greene
Topics
Joshua Greene: This episode explores effective altruism, evidence-based charity, and Giving Multiplier's distinctive model, which aims to maximize the impact of charitable giving. Research shows that highly effective charities have far greater impact than typical ones; the differences are enormous. The Giving Multiplier platform encourages people to make both effective donations and personally meaningful ones, using a 'split donation' and 'pay it forward' model to motivate participation and channel more funding to highly effective charities. The platform respects people's emotional needs while encouraging effective giving, seeking a balance between reason and emotion. The global health and poverty charities it features have all been rigorously evaluated by organizations such as GiveWell, with the focus on real impact rather than overhead. The core idea of effective altruism is to treat charitable giving as an investment, attending to returns and impact, and to encourage a rational approach to giving that maximizes its effect. In the long run, reducing human conflict, and tribalism in particular, is key to addressing existential threats. Michael Shermer: In this episode, Shermer and Greene discuss effective altruism and how combining emotion with reason can make charitable giving more effective. They cover complex issues including donor fatigue, public versus private solutions, abortion, the death penalty, and political polarization. Greene shares practical insights on bridging social divides and motivating collective action. The conversation also takes up critical perspectives on effective altruism, how to balance attention to present problems against long-term ones, and the debate between utilitarianism and rights, including how the two might be combined to address social problems more effectively.

Deep Dive

Key Insights

Why do some charities have a much higher impact than others?

The most effective charities can be up to 100 times more impactful than typical ones. For instance, donating $100 to a highly effective charity can do more good than donating $10,000 to a less effective one.

What is GivingMultiplier and how does it work?

GivingMultiplier is a platform that allows donors to split their donations between their favorite charities and highly effective ones recommended by experts. It also offers matching funds to increase the impact of donations.

How does GivingMultiplier address donor fatigue?

By allowing donors to split their donations between personal favorites and high-impact charities, GivingMultiplier helps donors satisfy both their emotional connection to local causes and their desire for maximum impact.

What are some examples of highly effective charities supported by GivingMultiplier?

GivingMultiplier supports charities focused on global health, poverty, animal welfare, and long-term initiatives like climate change and pandemic prevention. One popular charity is GiveDirectly, which provides direct cash transfers to those in need.

Why does GivingMultiplier not focus on long-term risks like AI or nuclear threats?

While long-term risks like AI and nuclear threats are important, GivingMultiplier focuses on immediate impact, such as saving lives through malaria prevention or deworming treatments, as these have clearer and more direct benefits.

What is the 'Ndugu effect' and how does it relate to charitable giving?

The 'Ndugu effect' refers to the emotional impact of personalized giving, where donors feel more connected to a cause when they can focus on helping a specific individual or group, even if the overall impact is the same.

How does GivingMultiplier measure the effectiveness of charities?

GivingMultiplier relies on organizations like GiveWell to measure the direct impact of charities, focusing on outcomes such as lives saved or improved quality of life, rather than overhead costs.

What is the role of the Gates Foundation in effective philanthropy?

The Gates Foundation has been a pioneer in effective philanthropy, supporting global health and poverty initiatives. GivingMultiplier recognizes their contributions and even received recognition from an award partly sponsored by the Gates Foundation.

How does GivingMultiplier handle the tension between public and private solutions to poverty?

GivingMultiplier acknowledges that while a more progressive tax system might be ideal, in the current world, private philanthropy can still make a significant impact. It focuses on leveraging existing wealth to reduce suffering effectively.

What is the dual process theory and how does it apply to charitable giving?

The dual process theory, which distinguishes between emotional (deontological) and rational (utilitarian) decision-making, applies to giving by allowing donors to balance their emotional connection to local causes with rational, high-impact giving.

Shownotes Transcript


You're listening to The Michael Shermer Show.

What's the new project? And by the way, it's nice to see an egghead actually doing something in the world. Well, you know, that's the thing. My PhD is in philosophy. So I came from, definitely, egghead territory, but not necessarily a field that's focused on doing stuff in the world. And there's been this migration over time, where I started out as a philosopher and thought, gosh, a better understanding of how we got to be the way we are and how our brains work could not only help us do

better philosophy, but also that some of the ideas coming from philosophy could help us understand our own minds and brains better. And that's where the first 10 or 15 years of my career went: looking at moral dilemmas, trolley problems, and how they play out in people's brains. That was bringing philosophy, in some ways, a step closer to reality. And then in the last five to ten years, I've had this feeling that

The world needs all of the ideas it can get, and we've learned a lot about human nature and human morality, and is there something that I and my collaborators can do to...

Yeah, go for it, please. Yeah.

So the starting point is that there are enormous differences in how much impact the most effective charities have. We tend to intuitively think of differences in effectiveness as like differences in height between people: a really tall person might be 50% taller than someone who is not so tall. But in fact, it's like the differences among plants. It's like comparing a shrub to a redwood in terms of how much difference this can make. Now, this is an idea that has been pushed to

the forefront by organizations like GiveWell, which for over 10 years now have employed teams of

highly trained quantitative researchers to try to figure out: how do you reduce as much human suffering as possible per thousand dollars? How do you save as many lives as possible per $10,000? The work they do is incredible, and a lot of people don't know about it. So that's a starting point. And for years I've thought, well, some people hear about this and say, oh my goodness, immediately,

I should be giving more and giving more effectively. But then there are a lot of people who hear this and go, hmm, I don't know. And to some extent, even though I'm mostly in the first camp of people who were very quickly converted, there's still part of me that thinks, yeah, but... I could give everything to the most effective charities in the world. And these are organizations that do things like

chemoprevention and bed nets to prevent malaria, or treatments that get rid of intestinal worms in children, mostly in developing countries, enabling them to go to school. And these things can cost like a dollar each, right?

I could give everything to that. But my wife and I still give to the local public schools where our kids have gone, and we give to other causes that feel closer to home. So, taking myself as an example: how do we strike that balance and leverage it?

So what Lucius Caviola, my collaborator, who's now an independent researcher at the University of Oxford in England, and I thought was this: for years I tried convincing people to give more and give more effectively the way I was convinced, essentially by repeating the arguments of the philosopher Peter Singer.

You know, if there was a child drowning right in front of you, wouldn't you give something? Wouldn't you wade in and save that child's life? Of course you would. Well, children are drowning in poverty on the other side of the world, and you can save them as well. And those arguments don't have a huge effect on most people. So what Lucius and I thought was: well, what if, instead of saying, look, you should give more effectively, we said,

Do both. You already give to charity. You do things like I do, like supporting the local public schools in Cambridge, or wherever you live, or whatever it is that's close to your heart. Why not just say to people: do both? Make a split donation. Pick your favorite thing that's close to your heart, but also, here's a super effective charity that does an enormous amount of good; why not make a split donation to both? And what we found in our most basic experiment was this. In the control condition, it's all or nothing:

pick your favorite charity and give everything there, or follow the impact expert's advice. And what we found was that relatively few people follow the impact expert's advice; they choose to give to their favorite charity. But in the treatment condition, when you have those two all-or-nothing options plus a 50-50 split, we found that so many people chose the 50-50 split that more money overall went

to the highly effective, expert-recommended charity, even though it was mostly coming from 50/50s rather than people giving everything. So we thought, okay, that's interesting. And we did a bunch of experiments to try to understand the psychology behind this. The short version is that people have two goals. They want to give from the heart and connect to things that are personally important to them, but they also do care about impact.

It's just usually not the primary concern. And so when you do a 50-50, you get almost as much value in terms of scratching that emotional itch.

It's not so sensitive to whether you give $50 or $100 when you're supporting the local animal shelter or something like that. But then if you, you know, do that, then you have another $50, let's say, left over to support something that's highly impactful. And now you feel like, OK, not only did I scratch that itch to support this thing that's close to my heart, but I did something very smart and effective as well. And so you get both kinds of satisfaction.

So we thought, okay, that makes sense. And in a way we can talk about later, this actually connects to some of my early work on moral dilemmas and the kind of heart and head pull when it comes to harming people rather than helping people. Put a pin in that for now.

We thought, okay, this is an interesting thing, but how do we motivate people to do it? And we thought, well, the obvious thing: we can offer people money. Okay, we'll add money on top if you make a split donation like that. And what we found is that people like that. Then we thought, okay, but where's the money going to come from? And then we thought, well, maybe some people would be willing to pay it forward and take some of what they were going to give and put it into a fund that would then provide those matching funds for the next people, right?

And it turned out that the numbers added up: enough people were willing to put into the pay-it-forward fund for other people's donation matches that it covered the matches for the next round of donors. So we were like, okay, this seems to work in the online lab.

And then Lucius and his techie friends built a website to try to implement this in the real world. This was about four years ago. They created Giving Multiplier, which you can find at givingmultiplier.org, and it basically does this. So if you go to the website, and if you're listening at home, you can just Google Giving Multiplier. Oh, and I'll add that for this podcast, I'm not asking for your donation this second, but if you do donate, you can enter the code SHERMER,

and that will give you a slightly higher matching rate. So go there: givingmultiplier.org/shermer. All right, all right, here we go, here we go. You're talking, now I feel super guilty. For the greater good, are you going to feed cash into the internet? So, okay, putting the pitch aside, let me explain how it works. If you go to the website, you'll see:

first, you pick your favorite charity, and we've got a database, so it could be the Skeptics Society, for example; you'd find that there. And then you look at our list of super effective charities. About half of them are global health and poverty; that's where you can really get a huge bang for your buck.

Some of them focus on animal welfare and in particular trying to improve conditions or eliminate the need for factory farming, which is an enormous source of suffering, at least depending on certain assumptions about animal brains and animal consciousness.

And then there are more long-term, big-bet sorts of things, like tech and policy for dealing with climate change, or strategy and technology for preventing the next pandemic. So right now we have nine charities, and we're going to be adding a tenth soon. You pick one of those. Then you decide how much you want to give in total. And then we have this nice little slider where you can decide how to allocate between the two.

And then we add more money on top, depending on how much you give to the super effective charity. Once you've teed up your donation, you have the option to support the matching fund; you can also just directly support the matching fund. So that's basically how it works. And the way I like to think of it is this:

It's meeting people where they are, right? It's not saying you have to give every last penny you have to what the experts say are the most effective charities. There's room in life for your personal connection and your personal meaning. And I live that as well. This is not me looking down on other people who are the splitters, right?

But there's also the fact that we should be doing more with the immense power that we have to reduce people's suffering. And so it's a kind of invitation for people to support what they already like, and it's a kind of win-win. You shift a little bit towards

higher-impact stuff, and we add more money on top, not just to the higher-impact stuff. So if you donate to the local animal shelter or the Skeptics Society, we'll add more money on top of that as well. We launched this three or four years ago, and we've raised about $3.4 million so far.

Most of that has gone to the super effective charities on our list, but a lot of it has gone to the excellent charities that people have chosen to support themselves. And I could give more detail about the impact, in terms of 300,000 deworming treatments and things like that, but that's the basic story. So, you know, the hope is that this holiday season, with your audience...

Part of why I was excited to share this with your audience is I think these are people who have big hearts, but they also have big brains. And if they're going to give, they want to give with their hearts and with their heads. And I think this is a great way to do that. So that's the story. By way of example, if I had, say, $1,500...

I said, okay, Josh, I'm going to give $500 to the local school, and I'm going to give you $1,000: you take $500 and put it in one of your effective charities, and then the other $500 goes into the multiplier and multiplies somebody else's giving. Something like that? So the actual mechanism would be that, at the outset, you would just split between...

let's say the one that you chose and one from our list. And then what would happen is, when you get to the end, it would say: you could take the money you were going to give to this super effective charity, the one you hadn't heard of until a minute ago, and put it into the matching fund instead.

And some people say yes, and some people say no. And the numbers add up so that the people who decide to do that tend to give enough to cover the matches for the people who say, no, I'm just going to divide it the way I originally wanted to. The other option is to support the matching fund directly: if you just like the whole idea and the whole system and want to give it some extra juice, you can give straight to the matching fund.
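The split-and-match flow described above can be sketched in a few lines of code. This is a hypothetical illustration only: the matching rate, amounts, and rules here are invented for the example, and the real site's mechanics may differ.

```python
# Hypothetical sketch of the split-donation and pay-it-forward flow.
# The 10% matching rate and all dollar amounts are invented.

def split_donation(total, favorite_share, match_rate):
    """Split a donation between a favorite charity and an effective one,
    then add matching funds on top of both portions."""
    favorite = total * favorite_share
    effective = total - favorite
    # Matching funds are added on top of the donor's own money.
    match_favorite = favorite * match_rate
    match_effective = effective * match_rate
    return {
        "favorite": favorite + match_favorite,
        "effective": effective + match_effective,
        "match_cost": match_favorite + match_effective,
    }

# A donor gives $100, split 50/50, with a hypothetical 10% match.
result = split_donation(100, favorite_share=0.5, match_rate=0.10)
print(result)  # {'favorite': 55.0, 'effective': 55.0, 'match_cost': 10.0}

# The pay-it-forward step: some donors redirect their "effective" half
# into the matching fund, which then covers the match_cost of later donors.
fund = 0.0
fund += 50.0                   # one donor pays their effective half forward
fund -= result["match_cost"]   # covers the $10 match for the next donor
print(fund)                    # 40.0 left to match future donations
```

The point of the sketch is the balance condition Greene describes: the system is self-sustaining as long as pay-it-forward contributions keep covering the match costs of donors who keep their original split.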

But if you go to the website, you can see how the slider works. Yeah, nice. And then that $3.5 million or whatever it is you said you raised, is that mostly small donors? You didn't have one big guy come in with $2.9 million? No. We've gotten, I think, over 8,000 donations from close to 3,500

unique donors. There are some people who have donated much more than others, so it's not all small donations, and a lot of our biggest supporters support the matching fund. But it's a lot of people. And actually, most people who've heard about it have heard about it from podcasts. I was extraordinarily lucky to be on with Sam Harris

and Laurie Santos and Sean Carroll early on, and their listeners have been fantastic supporters and people who've come back

year after year. And that's been a lot of how we've managed to keep the virtuous circle going. You mentioned GiveWell. Is that one of the charities on your list, or is it something different? So, GiveWell's relationship to us is that we rely on their guidance for which

global health and poverty charities we choose to support. We do a little bit of filtering on our own to try to get a balance of interests, but the global health and poverty charities that we support are all ones that they support. One of our most popular ones is kind of a baseline for them: GiveDirectly. And I think,

you know, it's considered sort of the baseline charity from GiveWell's point of view. But for someone who's a libertarian, GiveDirectly, I think, has a special kind of appeal, because here you're not saying, what you need is this pill, or what you need is some specific

healthcare thing. It's saying: here's a direct cash transfer, unconditional; you do with it what you want. And people can choose to do things like put a better roof on their home, or invest in a motorcycle that will enable them to get to town and buy goods and sell them, and vice versa. So some of it is providing for immediate needs, some of it is sending their kids to school, some of it is entrepreneurial, but you leave it up to the recipients to decide

how to use the money. Yeah, that's nice. What was the name of Peter Singer's organization that did something like that?

So Peter Singer started an organization called The Life You Can Save. That's right. And they are friends of ours. Yeah. And they have their own set of charities that they support; it's a bit wider than the set we support. So they invite people in with a wider range of charities, but they don't have our split-and-match kind of thing. Whereas our invitation is:

our list of super effective charities gives the biggest bang for your buck, according to the experts, so why not combine one of them with anything you want, and we'll add money to both? So we're opening our arms with different strategies, but we're in some sense in the same business, and they're a great organization. And so the metric is how many lives are directly affected,

such that I know my money's going straight there and not to overhead or brick-and-mortar buildings and payroll and things like that. I'm really glad you brought this up, because this is a real shift in thinking in impact philanthropy. For a long time, the proxy for impact, for effectiveness, was

overhead: how much is going to the organization versus directly towards the benefit? And intuitively, that sounds good. Well, you know, I didn't come here to pay for your air conditioning bill; I came here to do the right thing. But it turns out that's just really not a very good metric for impact.

But it's easy to understand, and it's something you can apply to any charity, so that's how it got popular, and people thought, well, that's a pretty good way to track these things. And it turns out it's really not. If you thought about this in terms of investing in a business, you would never think that was a good idea, right? You would never say, I don't want a business that invests in research and development, I don't want a business that tries to hire and retain the most talented people, I just want all

the money to be going into constructing the product, or something like that, right? And that might work in some industries, but there are certainly plenty of industries where the top businesses succeed by investing in people and infrastructure and research. And the same is true for making an impact when it comes to saving people's lives and improving people's quality of life. So we pay no attention to overhead. Instead, we rely on people like GiveWell to directly

measure the impact. The methods they use essentially ask: how does this affect people's quality of life? They're borrowed from the field of health economics, the same kind of methods you'd use if you were the National Health Service in England trying to decide, well, should we spend our money on this kind of preventative medicine for 10,000 people, or on some expensive surgeries for a small number of people? And you'd want to know,

how many years of high-quality life is this going to add if we do the surgery, or if we get 10,000 people to have this screening? These are hard problems, and there's always guesswork involved, but they're essentially using those methods from health economics and then making judgments about numbers of lives saved. But really what matters is how many years of quality life

you are getting from these health interventions. Yeah. That's such a great analogy, because I own stock in all seven of the Magnificent Seven: Apple, Google, Intel, Microsoft, Meta, NVIDIA, whatever the other one is. But I never think, well, I hope they're not spending that money on their overhead and the building. Right, they are. They have to pay their employees. Right. Yeah.
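The health-economics comparison described above can be made concrete with a toy calculation: rank interventions by cost per quality-adjusted life year (QALY). All figures below are invented for illustration; GiveWell's actual cost-effectiveness models are far more involved.

```python
# Toy cost-effectiveness comparison in the style of health economics.
# All costs and QALY estimates are invented for illustration.

interventions = {
    "preventive screening (10,000 people)": {"cost": 500_000, "qalys": 400},
    "expensive surgery (50 people)": {"cost": 2_000_000, "qalys": 250},
}

def cost_per_qaly(entry):
    """Dollars spent per quality-adjusted life year gained."""
    return entry["cost"] / entry["qalys"]

# Cheapest QALYs first, i.e. the biggest bang for the buck.
ranked = sorted(interventions, key=lambda name: cost_per_qaly(interventions[name]))
for name in ranked:
    print(f"{name}: ${cost_per_qaly(interventions[name]):,.0f} per QALY")
```

On these made-up numbers, the screening program buys a year of quality life for $1,250 versus $8,000 for the surgery, which is the kind of comparison that drives the rankings, even though real estimates carry wide uncertainty.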

Well, it sounds like if you've been investing in those for a while, you've been doing well. Yes, yes. I do need to give to charity. I'm going to start giving to you guys now, as well as to my friend Lon. My friend Lon Haldeman, he was the co-founder of Race Across America with me. He was at one point the greatest long-distance cyclist in the history of the world. And then he started doing rides in...

He went to Peru just for fun to ride his bike over the Andes, and he started meeting all these people that just really needed help. The first project was somebody didn't have stairs to get up into their...

basic hut or whatever, a kind of brick building or something. So he built stairs for them. And then they'd go, well, we don't have a bathroom. I can build you a bathroom. Lon's really talented with woodworking. And then he ended up building schools and libraries and all this stuff. And it's like, oh, I'm going to give this guy money. And the sense is, first, I know him, and second, I can see it; he sends me pictures: look, I just built this library. Oh my God.

And my money went there. So there is that intuitive sense, like I'm really making a direct difference; I can see it right there. Yeah. What's his organization called again? Oh, he just does it through his church, his local church in Illinois. But yeah, anyway, maybe I'll connect you to him, because he's always looking for more projects like that. Well, we can do fundraisers. For anyone who's interested, there's a new feature we've added: you can do a fundraiser, and you can do it one of two ways.

Let's say you have a personal favorite cause. So you want to do your friend Lon's kind of, whatever you call it, sort of habitation projects in Peru, right? You fix that as the personal favorite charity that you're supporting. And then the people who are participating in your fundraiser can pick one of our super effective charities. And that's your split, right?

Another way to do it is to say: okay, I really want to do something with New Incentives, which is an organization that incentivizes mothers, primarily in Africa, to get their kids vaccinated, and I'm going to make that my thing. And then you can pick your own personal favorite charity. Anyone can do their own fundraiser, and we've had people do this for weddings, for birthdays, for memorials, for other kinds of special events.

And it's a nice way to give people some freedom to pick something that means something to them, or that they like the idea of, but also direct it to something that you think is really valuable or special. Nice. Yeah, that's nice. Yeah, last year Lon got held hostage on one of these putt-putt boats going up the Amazon by local activists who were upset because some oil company was drilling for oil,

and the oil contaminated the local water supply. And so Lon says, I can drill a hole for you; I can take care of the drilling for these guys here, and you can drill over there. So he ended up doing this. I just love those kinds of stories. It's crazy. You can drill a hole for oil?

No, for water. He drills water wells. Oh, okay. So the activists were complaining that the local oil companies' drilling was contaminating their wells, and Lon said, I'll just drill you a new well. And they're like, oh, who is this guy? Wow. The guy sounds like a superhero. Oh, he is. No, he's a great guy. Amazing. But in any case, he even started a bike race team in...

I forget what country it is in Africa, where they started making bicycles out of bamboo. It's crazy. So anyway, sustainable cycles. Yeah, no, but I mean, there's so much of that that can be done, and we're also wealthy and just spoiled here in the West. So, related to that, effective altruism is another one of these terms that has been

ricocheting around for the last several years, and then it kind of blew up with the whole Sam Bankman-Fried thing. Yeah. You know, this long-termism and these arguments. I started telling friends about this and they're like, are you out of your mind? Like, the best thing I could do is to make billions of dollars so that I can save a trillion lives in the next 10,000 years? They're like, what? Yeah.

Yeah. So what is effective altruism, and what is it you're doing that's different from that? Well, I would say Giving Multiplier is related to a certain version of effective altruism, right? So the core idea, coming back to the analogy with investment, is this: if you're an investor...

You're not just going to say, oh, you make a product, great, here's my money, I'll invest. You're going to look and say: what kind of a return should I expect? Do you have a business model that makes sense? Do you have some kind of durable competitive advantage that's going to allow you to reap rewards year after year? All the sorts of questions that, across the river at Harvard Business School, they tell you you're a fool if you're not asking. That's pretty intuitive for investing. But when it comes to charitable giving?

People tend to have more of a sort of checkbox mentality, which is just sort of like, oh, you're doing something good. I like it. Great. Right. And, you know, if it's just about making yourself feel good, then just checking that box is great. You check your box and you feel good. But if you're really serious about trying to do good in the world, then you're going to want to look at the returns that you're getting.

Effective altruism is just saying: why would you take trying to benefit other people less seriously than you take trying to benefit yourself? Give other people as much of your intelligence when you're doing that work as you do when you're trying to improve your own bottom line. That's really the core idea.

And everything else is up for grabs in terms of how the evidence turns out. So in the early days of effective altruism, people looked around and asked: how do you get the most bang for your buck? What are the best things that you can do? And a lot of the things that people

came to are things that we continue to support. It still turns out that malaria is this enormous scourge primarily in Africa, parts of Asia as well. And it is very preventable with bed nets and with chemo treatment, not as cheaply as people think.

but well within the reach of people like you and me and people who are even, you know, fortunate, but not as fortunate maybe as someone like you, well within their reach to save multiple people's lives per year. For real, not like, you know, just as a soundbite, but for real.

So that's the early version of it. And then more recently there's been this idea of long-termism, which I take seriously, but I'm not as sure what to make of it, right? And the idea starts from one observation: the future is huge. So if you look at how long

a typical mammalian species lasts, humans, you'd expect, are maybe something like 10% of the way through that, right? And we may last longer because we're so smart, or maybe we'll last less long because we're going to blow ourselves up, or whatever. But the future is potentially enormously long, such that if our whole species were condensed into a year, we're in something like January or February, or maybe even earlier, right? Yeah.

And if you think of it that way, then the biggest gains are to make sure that things go well or don't go badly, that we don't go extinct, right? So what can we do? Well, what about nukes? What about a terrible pandemic? You know, what about AI taking over? And people can debate about which of those things are most likely, which actions that we can take today are most likely to prevent the worst...

things from happening and not have our species die in its biological infancy. I think about this stuff a lot. I am not sure what to make of it. A lot of it really depends on the details. And personally, I have my own views about what's the right approach for long-termism. We can talk about that in a second.

With Giving Multiplier, we've decided not to go there, with a couple of semi-exceptions, because, for example, artificial intelligence is a huge concern among a lot of people who think of themselves as effective altruists, but it requires a lot of background assumptions

or background arguments to see why this thing that sounds like science fiction should be more important than saving people's lives today. And I don't feel like Giving Multiplier is the right place to try to make that argument. So that's why we don't go there. But I have thoughts and opinions about it. The short version of where I invest in things that don't have an immediate payoff is this:

it has to do with human conflict. My view is that the biggest threat to our survival, in the form of things like nuclear holocaust and the use of biological weapons, or biothreats gone wild, ultimately comes down to whether or not we have some kind of good governance.

And the biggest threat to good governance is intergroup rivalry, in political form or in a more cultural or ethnic form, what we might call tribalism. And our best hope for overcoming that is to figure out ways to reduce tribal animosity at scale.

And so that's the other main thing that I'm working on. And I view that as medium-term to long-term, whereas I view Giving Multiplier as more or less immediate benefit: the money that we bring in this year will be saving people's lives next year. Yeah, that's really good. That's really important. I think it's that long-term versus short-term. I mean, don't we have a moral obligation –

To future generations, to have clean air and water and a sustainable earth. Yeah. But we also have a moral obligation to help the people who are suffering right now. Yeah. And I would say that's more important than the people a thousand years from now or whatever. Or that we have more control over it, right? I mean, the future is so hard to predict. How many people 20 years ago would have thought that the political landscape would look the way it looks now? Yeah.

Some people, but not most people. Right. I mean, I think we need both, but I'm humble about our ability to know what is going to have a long-term effect. But one thing I do believe is that the arc of human history

is about the tension between cooperation and competition, and about cooperation at increasing levels of complexity, from hunter-gatherer band to tribe to nation to, occasionally, United Nations. And there are certain fundamental principles that are always in place. And so the things I have the most confidence in are the things that can bring people together,

essentially through mutually beneficial cooperation. I think that's the heart of it. Yeah. What are the most pressing problems? Give us the half dozen things that you're working on that you can solve. You mentioned malaria and mosquito nets. What else? Yeah. Well, so one of the other ones is animal welfare. And this is a place where I'm a bit of a hypocrite, and I will admit that right up front, but I'll speak like a true believer or like a true...

actor. So there are billions of animals that are processed and killed in factory farms every year. And we don't know for sure what it's like to be a pig or a chicken or a cow going through a factory farm. But we have some idea of how their brains work and how human brains work. And if their brains are

similar enough to human brains that it's a decent bet that they feel pain the way we feel pain. They may not have as elaborate, you know, conscious thoughts about it as we do, but the suffering may be just as real. And if an alien were to visit Earth and say, you know, what is the greatest crime taking place on Earth right now?

One answer might be it's what this one species of hairless primates is doing to these other species, you know, by the billion. And so, you know, the imperative there is how do we...

reduce the suffering that happens in the production of meat, and how do we eliminate the need for the production of meat through technology, through the development of meat substitutes that carnivores love. And so Giving Multiplier supports two charities along these lines. The Humane League

does largely policy work, working to reduce the suffering and the horrors of the conditions in factory farms. And then the Good Food Institute is about developing meat alternatives. So these are sort of long-term ways of getting at these sources of massive suffering. Now, you might say, I don't believe these animals are even conscious. Or you might say they are, but I don't care, because they're animals and not humans. And, you know,

We could have a philosophical argument about that, but it's not going to be a quick fix. But if you're willing to believe, look, there's at least a decent chance that cows and pigs and maybe chickens suffer in these things, and they are suffering by the billions, and it's all just because we like the taste of this instead of tofu, or because, you know, Impossible Burgers are a bit more expensive than beef cheeseburgers, that's a pretty high cost. So that's one of the things we support. And I will say...

As I said, I'm a bit of a hypocrite here. I try to reduce the amount of factory-farmed meat that I eat, but sometimes I'll have a slice of pizza with pepperoni on it. And the cheese in the pizza is a problem as well. But for most of the meat that I get, I'm in a privileged situation where my wife and I can afford to have meat delivered from farms that treat their animals well. So this is something I'm working on, and I've just decided that

my mental bandwidth is better spent on other things than being a grumpy vegan. But you're a reducetarian. I am. I'm a reducetarian. Yeah. Anyway, that's one of the things we support.

By the way, those are two non-profit organizations? These are all non-profits that we're supporting. Because there are for-profit companies that are developing meat and synthetic meat. So with the Good Food Institute, I don't want to sort of comment. My understanding, and I could be wrong about this, is that they have ended up supporting,

Impossible Foods, I believe they've supported. I think they are kind of incubating things that may eventually become for-profit ventures, but I think they themselves are non-profit. That is my understanding. Yeah. And then where do you put the Gates Foundation in all of this?

So I don't consider myself in a position to give them a report card. My sense is that the Gates Foundation has done enormously good work on global health and poverty, and that they've been pioneers in developing and supporting a lot of the most effective

ways of reducing human suffering. So my hat goes off to Bill and Melinda Gates for that. Oh, and I'll add, sorry, I wasn't expecting that, but we did get a recognition from an award that was partly sponsored by the Gates Foundation. But I wasn't just saying that because of that. I've been an admirer of them for a long time. Yeah, no, I mean, he did help pioneer that whole rational giving

thing, make it effective. I like that. And Chris Anderson has his new book, what is it, Infectious Generosity, I think it is. And then Gates is always talking about the other Fortune 500s: let's get the 500 biggest billionaires to give, so that they want to be on the list. It's kind of a social pressure. We're going to change the norms. That's great. I think it totally makes sense. It's like everyone kind of knows this, that

I mean, the research shows it's not that more money doesn't bring you happiness, but that for every increase in happiness that you get, every increment, you have to earn 10 times more money. So the jump that you get from going from a hundred thousand dollars a year to a million dollars a year: if you want a jump that big again, you then have to go to 10 million dollars a year.
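The relationship being described here is logarithmic: each fixed step up in happiness costs a tenfold step up in income. A toy sketch in Python, illustrating the shape of that claim (this is an illustration of the arithmetic, not a fitted curve from the actual happiness studies):

```python
import math

def happiness(income):
    # Toy assumption: happiness grows with log10 of income, so each
    # unit of happiness requires a tenfold increase in income.
    return math.log10(income)

# The jump from $100k/yr to $1M/yr...
jump_a = happiness(1_000_000) - happiness(100_000)

# ...is the same size as the jump from $1M/yr to $10M/yr.
jump_b = happiness(10_000_000) - happiness(1_000_000)

print(jump_a, jump_b)  # both jumps are ~1.0
```

Under this model, each successive "increment" of happiness costs ten times as much as the last, which is the point being made about running up the score.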

So all of the research suggests that if you have got a lot of money, it will make you so much happier to give it to good causes and see the good that it does than to just run up the score. I mean, even from a purely selfish point of view,

All of the research suggests that once you're past the point where your basic needs are taken care of and you can take care of the people who are most important to you in your life, you will be so much happier if you're using your money to improve the lives of other people. This is just totally consistent across all the research on happiness.

Right. Yeah. I just keep my hobbies relatively cheap. Cycling and tennis. I don't own a boat. I don't own a plane. You know, I guess if I did, the overhead would be really high. It'd have to be 10 million a year or whatever. Yeah, I don't have that problem. Now, what about the research on the extent to which people are doing it because they need the social recognition?

I think Steve Pinker published a paper on this last year about anonymous donors versus those that get recognition for it. And he even cited that funny episode of the Larry David show. Yeah.

in which he plays a character who wants people to think he anonymously donated, but doesn't want to actually signal that he was the one. And with the other guy, it leaked out that he was the one who did it anonymously, so he didn't have to signal it himself. It was a hilarious episode. Yeah, yeah. Well, I think this is consistent with a lot of research showing that when we evaluate other people, we're thinking about their character.

And we're thinking about... And we judge character based on sacrifice. So we look at other people and we say...

How good you are is how much you're giving up. Right. And in a very simple environment, think back to our hunter-gatherer ancestors, there's probably a pretty good relationship between how much you're willing to give and how much someone else gets as a result. Right. And so our kind of

natural heuristic for how good are you to have around is how much are you willing to give up, right? But we live in a world where, first of all, we can have immense social influence on people that we don't even know, right? So it's not all in the village. And there are huge differences in how much bang for your buck you can get, how much good you can do, right? So to take sort of the two extremes, you can have someone who,

gives a relatively small amount, does it very publicly, encourages lots of other people to do it. And the money is used to do things that are enormously effective. That is a huge social win. Or you can have someone who goes through an enormous amount of suffering

to do what they're doing. Right. Let's say they quit their job and they drive around the country giving away small things to people who don't need them very much. That makes for a great documentary, but they're not doing very much good, yet they're likely to be seen as a kind of saint. And maybe once upon a time that would be a better indication that this would be a better person to be with in a bunker. Right. But

probably not now. So I think the lesson of a lot of the work that Steve and other people have done is, first, to shine a light

on the way that we are focused on people's kind of purity in giving, and to say, does this really make sense anymore? And I would say that it doesn't. I would say we should be establishing new norms that say people should talk about the good that they're doing, not because they want to show off, but because it's a guide for other people.

So, my two cents on that topic. Change the norms, yeah. Another thing I wrote about in one of my Scientific American columns I called the Ndugu effect. This was when I was writing The Mind of the Market, and I had watched that Jack Nicholson film, About Schmidt.

So, you know, he's a retired insurance agent or whatever, and he's kind of going on the road; he gets bored driving his motorhome around, and his wife dies. So he's left with his daughter, and he's dealing with her. And then all of a sudden, late at night, he's just watched some show, and one of those infomercials comes on, and it's a charity for, you know, adopting a child in Africa. And so he ends up adopting this little child named Ndugu.

And instead of showing a picture of 10,000 starving kids in Kenya or whatever, it's like, here's this one little boy. His name is Ndugu. And here's where he lives. And here's a picture of his family. And he plays soccer. And he's like, oh, my God. He's kind of weepy about it. And he writes...

much of the narrative of the movie is these letters he writes to little Ndugu. Yeah. And then at the end of the movie, it's very, very touching. He gets a letter back from Ndugu's teacher saying, well, Ndugu doesn't read, but I read him all your letters, and this is what he would like to say to you. And he's crying and everything. Yeah. Because you can identify, right? This is your tribal thing. That's an honorary member of my family now. Yeah. I think this is the challenge, right?

How do we work with our feelings as they are, you know, work with the things that motivate us to be good in the first place, but try to take more of that energy and channel it in ways where we can take advantage of the incredible powers that we have in the modern world to, you know, do enormous good for people who are

far away from us, using technologies that our ancestors couldn't have dreamed of, right? So yeah, I think that is the fundamental challenge.

Because of donor fatigue? How do we harness that? Yeah, they call that donor fatigue or something like that. Well, there are different versions. There's the drop-in-the-bucket effect: a feeling that, if you frame it as, well, there's this larger problem of poverty and all I'm doing is this imperceptible thing to change it, it's a drop in the bucket. Right. But if you put a frame around it that says, OK, well, I'm going to help this one person,

and you can make a real difference there, that feels very meaningful. And it could be the same amount of change in the world. It's just that if you put the broader frame around it, it feels like nothing; if you zero in on a particular person, it feels huge. Right. So how do we personalize high-impact philanthropy? It's a great question. A tension I've seen related to this is between public and private support

of people who need help, getting people out of poverty. There was a recent critical article of Gates, in fact, probably from the far left, saying he should never have had that money in the first place; it should have been taxed at 90% above whatever he makes every year, and then the government should be taking care of these problems. How do you think about private versus government help to people who need it? I mean, my view is,

we should do whatever is going to help the most. And you've got to deal with the world that you're given. Right. And so maybe it would be better if we lived in a world in which people with enormous wealth were taxed more highly and that money was used to promote the greater good,

but that's not the world we live in right now. And so, if we're holding it as fact that Bill Gates has this money that he earned,

is it a good thing that he is using it to lift people out of poverty as opposed to just buying more yachts? I don't know if he has any yachts, but buying more stuff? Of course. So I'm a huge admirer of Bill Gates. And, you know, I

might also agree with someone's politics about what a better world would look like. But I think it's important, when you're trying to solve problems, to take the world as it is, right, and think about how you can make it better, as opposed to saying, well, I'm going to invalidate option B because we should be in a different position than the one we're in. This is where we are, right? Nice. And

All right, let's tie this in with your earlier work in your book on dual-process theory, where you took Danny Kahneman's idea of System 1 and System 2: System 1, rapid cognition; System 2, systematic or rational cognition, thinking through these problems. Then you applied it to moral issues: we have this dual system where we have, you know, kind of Kantian, deontological, rule-based

ethics that is driven largely by our emotions, our tribal feelings, and so on. And then you have the utilitarian, consequentialist, more calculating, rational-based system.

It seems to me you're doing that here with Giving Multiplier. You're saying, I know people have these two systems, so we're going to give you both. Yes, you hit the nail right on the head. A lot of people don't see the connection, but that's exactly the connection, right? To fill in a bit of what you said, the way I got started moving from philosophy to psychology and neuroscience was thinking about these moral dilemmas, right? What are affectionately known as trolley problems, right? And sort of

the cleanest way to see the dual-process effect is to consider two different types of trolley cases. So in one version,

a trolley's headed towards five people. This is the one that gets memed all over the place. You can hit a switch that will turn it away from those five and onto one. And of course, people then do all kinds of variations: you've got your dog here and Hitler there and whatever it is. But in the standard version, where five people are threatened and you can turn the trolley away from the five and onto one, almost everybody says that that's okay, or even that you must do that. And that contrasts with

the classic footbridge case, where the trolley is headed towards five people. You're on a footbridge over the tracks, in between the oncoming trolley and the five. And we stipulate that the only way you can save them is to do something that feels horrible. That is, there's this person wearing a giant backpack, let's say, and you can push them off of the footbridge,

and they, with their giant backpack, will stop the trolley in its tracks, literally, and save the five. You can't jump yourself because you're not wearing the big backpack. Yes, this works. You've been to the movies; you know how to suspend disbelief. And still, most people, or a lot of people, say that it would be wrong to use that guy as a trolley stopper. Right. So what's going on there? I originally looked at this with brain imaging, right?

And other people have looked at this by examining the judgments of patients with different types of brain damage. People have looked at reaction times and at what happens if you distract people while they're doing these things. And what's come out of all of this research, with the usual level of controversy and caveats, is that in the switch case, it's pretty flat emotionally. People just look at the numbers and say five lives versus one life, it's OK

to hit the switch. But with the footbridge case, we're responding to a combination of two things. One is that you are harming this person as a means, that is, you are using them to stop the trolley, as opposed to harming them as a side effect of turning the trolley. And you combine that with the fact that you are

pushing them directly versus in a more indirect way, like hitting a switch. Those two factors combine to produce an emotional response. And you can see it in a part of your brain called the amygdala, which then sends a signal to a part of your brain called the ventromedial prefrontal cortex, and that ultimately tips your judgment one way or another. The long and short of it is, you've got cost-benefit calculations on one side, and you've got this emotional alarm signal that's saying,

don't do that, about the kind of stuff we call violence, where it's direct and intentional; active is another important piece of this.

And so there's a lot of different research showing, for example, that if you have patients who have damage to this part of the brain where the emotional response bears on the decision, and this is like the famous case of Phineas Gage, which may be familiar to some of your readers, those people are more likely to treat the footbridge case as if it were a cold switch case. Right.

You can have patients with damage to a different part of the brain, so the hippocampus or the basolateral amygdala, and those people go in the opposite direction. They experience the emotional response more strongly and have a hard time overriding that with a more instrumental cost-benefit kind of reasoning. So that...

That tension between the feeling of doing something horrible versus, well, what about the costs and the benefits, is deliberately contrived and intractable in the case of these dilemmas, but it maps onto things in the real world. One thing that's sort of frustrating about these dilemmas is that when I present them to people, they always want to try to get out of it. They say, well, who's on the track? How do you know this is going to work? Because they don't like that tension and they want to try to make it go away. Yeah.

Can I just yell at the guy, get up, there's a trolley coming? Right, exactly. So then, bringing us back to the other side, you have the heart-head dilemma when it comes to charitable giving. Now this is: do I help people in a way that maximizes the greater good,

giving to the GiveWell-recommended charities like the Against Malaria Foundation, or do I give to this thing where I get this warm, positive feeling of connection, the local animal shelter or whatever it is, right?

And, you know, the infomercial. The nice thing about Giving Multiplier is that it's actually a solution to the dilemma, where, by stipulation, there isn't one in the trolley problem. But here, it's a way that you can kind of have your cake and eat it too. So just as you said, that's the full circle of going from the negative domain to the positive domain and putting that dual-process psychology to work. Yeah. Okay.

We have these two systems, the deontological, rule-based one and the utilitarian one. It seems like we need both. I mean, in the doctor scenario, the doctor has five dying patients and there's one healthy guy in the waiting room. Why not sacrifice him? Pretty much nobody will go, oh yeah, that's a great idea. And why is that? Because we don't want to live in a world where doctors can just walk around sacrificing people?

Right. I mean, I think you could object to that on two levels. And just to, no pun intended, flesh this out for your listeners: the case here is that you have these five patients who need organs of different kinds. One needs a liver, two need kidneys, and you could sacrifice one person, distribute the organs to the five, and save more lives. Right.

And I think you could object to it on two levels, right? One is just the sense of horror at doing this, that it's a terrible thing to do, that it feels like the footbridge case on steroids, right? And the other is that I think you can actually make a good

consequentialist or utilitarian argument against this, which is to say that, at a higher level, it would be terrible to have a health care system where everybody is a potential organ bank. Right. And the long-term damage that would do to the system would be much worse than the gains you get from being able to occasionally

save somebody by murdering someone and taking their organs. So I think that in the end, the utilitarians and the deontologists come out in the real world in the same place with those. But there is that same feeling: even if you stipulate, assume this would promote the greater good, it still feels like a horrible thing to do. And I wouldn't want to live in a world in which people didn't have that feeling. So what you're saying is that, say, my argument that

people have a right to bodily autonomy and control over their bodies and their lives, and no matter what your utilitarian calculus is, we're not going to allow that to happen. So we have a Bill of Rights, for example. Yeah. So what you're saying is that beneath that, where I'm stopping, where I'm putting the foundation at rights, you're saying there's actually a consequentialist argument beneath the rights argument. Yeah, I think that's right. And in fact, if you go back to the early utilitarian philosophers,

not so much Jeremy Bentham, but more John Stuart Mill and the lesser-known Henry Sidgwick, they very much endorsed this kind of multi-level view of what it means to really be a wise utilitarian thinker.

I also don't like the word utilitarian, because I think it suggests precisely a kind of unwise, narrow cost-benefit calculus. So I prefer to call myself a deep pragmatist. But what I'm talking about there is taking into account things at the individual level and at the social level, understanding people's motivations, their emotions, having a sort of deeper understanding of human nature that

puts guardrails and brakes on some simplistic utilitarian calculations. So Mill is a great example. Mill wrote On Liberty, right? And Mill was one of the great defenders of liberty,

of individual rights in many domains, but he viewed the expansion of individual rights as the best means to securing long-term sustainable human happiness.

And I think that's basically right. I'm not a rights fundamentalist, so I wouldn't say we have certain rights and we must stand by them even if it makes humanity miserable forever. To me, the ultimate ground level is about human happiness and suffering. But to the extent that I overlap with

libertarians, I justify it on those consequentialist grounds. And I think that's really consistent with the original utilitarian philosophy. Yeah. So, yeah, I agree with that, I think. Let's use this example of the problem with utopian thinking. It's a kind of consequentialism: we can achieve this perfect society where everybody's happy forever, but

if it weren't for those people over there, the Jews or the Hutus or whoever. And so there's a justification for exterminating them, because then we have eternal happiness forever. Right, right.

So I'll go back to the investment analogy, right? If someone comes along and says, my product is going to give you 30% returns guaranteed for the next 20 years, you'd say, well, Mr. Madoff, I'm a little bit skeptical, right? And it's a kind of investment argument: that is, you're promising guaranteed returns.

But you'd better look at the data and use some common sense and ask, is that too good to be true? Right. And in the same way, I think a utopian who thinks, oh, it'll all be fine as long as we get rid of these people.

They're selling moral snake oil, right? They are selling something where, yes, what they're promising is great returns, but not only are the means terrible, the means are also likely to be ineffective, right? And so I would say, to be a real deep pragmatist, consequentialist, utilitarian, whatever you want to call it, you're not just signing up for whoever can promise the best outcome. You have to pay attention to...

What are the means? What cost do you pay along the way? And is it likely to work even if you pay those costs? So I think we have to be humble and recognize that certain guardrails

are very important, because we are not very good at making long-term calculations about what's going to produce what consequences. And we are often highly biased, saying, oh, what's good for the Ford Motor Company is good for America. No kidding. Well, maybe that's true, maybe it's not, right? So yeah, I think there are good,

wise, consequentialist, deep pragmatist reasons to be skeptical of people whom we would naturally characterize as utopians. And also, these people are just factually wrong. The Jews did not cause this, the Hutus did not cause that. Right. And

on a bigger scale, there is no utopia to be found. That's the wrong goal. We can't get there. There is no there there. So that's why I like Kevin Kelly's idea of protopia: just make tomorrow a tiny bit better than today. Just a little bit. And just like compound interest, do 1% better every week, and then in two years it'll be a hundred percent better. Something like that. Yeah. I think that's right. And in fact,
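The compound-interest arithmetic here checks out, and is if anything conservative. A quick Python sketch (the 1%-per-week rate is just the illustrative figure from the conversation, not a measured quantity):

```python
import math

rate = 0.01      # "1% better every week"
weeks = 2 * 52   # two years of weekly compounding

# Total improvement factor after two years of 1% weekly gains
factor = (1 + rate) ** weeks
print(f"{factor:.2f}x after two years")

# Weeks needed to be "a hundred percent better", i.e. to double
weeks_to_double = math.log(2) / math.log(1 + rate)
print(f"{weeks_to_double:.0f} weeks to double")
```

At 1% a week you actually double in about 70 weeks, well under a year and a half, and after two years you're nearly 3x better, so "a hundred percent better, something like that" understates the compounding.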

we mentioned The Life You Can Save. Charlie Bresler has run that for many years.

The way he puts thinking about charitable giving in the heart-head dilemma is he always says, personal best, right? In the same way that if you're an athlete or an investor or whatever it is, your goal for this year is not to be perfect; your goal is, can I do a little better than I did last year? Right. And I think that's the right approach, because if there's anything that's going to get us to the best possible outcome, it's focusing on making whatever improvements can be made today.

And then there are our evolved propensities for liking or disliking things. So here I'm thinking of Jonathan Haidt's thought experiment about Mark and Julie, a brother and sister who are out on vacation alone together, and they think it would be fun and interesting and rewarding to have sex.

They use two forms of contraception. They keep it a secret. They don't tell anybody. It doesn't harm their relationship. They decide that, as great as it was, they're not going to do it again. What's wrong with that? Okay, so as you know, because I present this to students, they're like, what? They can't come up with a reason why it's bad. Oh, because you'll get pregnant. No, two forms of contraception.

Oh, because it'll ruin their relationship. No, it made their relationship better. Well, people will find out. No, they kept it a secret. I don't know, it's just disgusting. Okay, why is it disgusting? Well, because there's an evolutionary logic behind it: incest isn't good for genetic diversity, and that's why we have, you know, the Westermarck effect. Okay. So there, I guess you would argue that there's a genetic consequence to incest in our ancestral environment, for the species. Yeah.

Something like that? Yeah. So I think we have that aversion, at least many of us do. I do. And it makes sense for that reason. Right. And I also think that there are some pretty good practical reasons, in any real-world version of this case, to think you're playing with fire

if you're crossing those lines with that kind of a relationship. Yeah, definitely.

Let's talk about abortion for a minute, since that's back in the news. I was rereading your section on that. You're making a pro-choice, consequentialist argument. I'm pro-choice too, but I try to listen to the pro-life argument. So how do you respond to their claim that it's murder, that you are murdering a human life? Let me just flesh it out. No woman who's pregnant and wants to be pregnant describes it as, I have medical tissue I have to get rid of. Right.

They don't think of it as a procedure. It's, I have a baby; this living human being is inside me. Right. And the claim is that women are killing it, that you're killing a human being, or a potential human being, however you want to describe it. Yeah. How do you deal with that? Yeah.

Yeah. Well, I can make arguments following ones that Peter Singer made a long time ago. Some people will be persuaded by them and some people won't, but I think they're good arguments. What I would say is, OK, first, if I'm dealing with someone who's pro-life, that is, anti-abortion, I would say: so what you're saying is that the magic moment is the moment of conception, that once conception has happened,

Then you have a human life, and it is sacred, and any termination is murder. But a few moments before conception, when it's just an egg and a sperm, then it's birth control and it's OK. Right. There are other people who say, well, I'm opposed to birth control too. True, but that's a separate argument. So let's take the people who are OK with birth control, but not OK with any abortion, no matter how early after conception.

I guess what I would say is: look, we actually now have a pretty good understanding of what happens when a sperm meets an egg.

And, you know, you can describe in great chemical detail how the receptors on the head of the sperm connect with the receptors on the egg cell and cause it to open up, how the genetic material of the sperm is released into the cell body, and how the sperm's genetic material integrates with the egg's. You can tell a completely fascinating mechanistic story about what happens at that moment.

But what does not happen there, as far as anyone can tell, is that a soul is attached to that combination of sperm and egg, that merger of genetic material inside a cell membrane. Now, what I would say to our friend is: look, if you believe that what happens at that moment is a kind of miraculous ensoulment,

I'm not going to tell you that you can't believe that. We live in a society in which people from many different worlds and many different tribes have to live together. And if you don't want to have an abortion because you believe that there's an ensoulment process that science has not been able to detect, but that you are confident is there because of what it says in an ancient book, you can believe that for yourself.

But I don't think it is okay for you to take a belief that's ultimately based on your insistence that a metaphysical event happens at that moment and say, and that must be the law of this modern country with people from many different faiths or who've given up faith entirely. Why should your assumptions about what happens at that moment of molecular interaction govern the country?

Why should you get to tell people that they have to have children, that a woman has to bring to term a child she doesn't want? And if you're going to be consistent, it could be the child that was produced by a rape, right? If you really take pro-life seriously, you should not be making an exception for rape, because that was a terrible crime, but now there's a human life. Why should that human life be snuffed out because of that, right? And you're saying that a woman has to bring...

the offspring of her rapist to term because of what you believe about magical metaphysical things happening at that union. You're asking a lot, right? So that's my response there. I don't know. Pro-lifers out there, tell me if you're persuaded. Actually, don't. Or, well, you can if you want, but if you're impolite about it, I'm not going to respond. Well, so what you're saying, in a way, is they're either factually incorrect or we actually don't know.

You do not know that ensoulment happens. We don't know that there are souls; there probably are not. You're just factually wrong about that, I guess you could say. Or I would even be more gentle: I would say it is an article of faith

And what right do you have to impose that article of faith on people who don't share your faith? Yes, right. I suppose then it becomes a political issue in the way that we can't have everything. So you have conflicting rights: the rights of the mother to choose, the rights of the fetus to live. Like, immigration would be another one. Is there a rational argument for what the right number is? I mean, short of the extremists to say, "Close the borders, don't let anybody in," or "Open the borders, let everybody in," just move away from those.

I don't know that there is a rational or scientific answer to that question. It could just be that whoever's in power gets what they want, something like that. Yeah. On the case of immigration, there's actually a lot less disagreement than people think if you frame it in terms of policy as opposed to politicians and parties. When you ask people...

What do you think a sane immigration policy looks like? People on the left are much more likely to say,

You know, we shouldn't have open borders, but at the same time we should be welcoming to people who are escaping horrible situations and to people who want to come here and be productive members of society. And you see more or less the same thing on the right. If there's a difference, I think it's in what people have heard: people on the right are more likely to have heard that

the United States is overrun by violent immigrants, and it's just not true. The gap is smaller than you think, and if you actually brought people in line with our understanding of what's actually happening, as opposed to the scaremongering, there would probably be almost zero difference between people who identify as left and people who identify as right on that particular issue. When it comes to things like abortion and trans rights,

I think that there are maybe deeper differences in people's worldviews. But even there, I think that there's probably less actual necessary disagreement.

than one would think. Yeah, like on capital punishment, again, supported more by the right than the left, there I guess they would make the argument that you take a life, you have to give a life, something like that. Whereas, yeah, I think it should be abolished. But my argument is consequentialist. There's just too many mistakes made, errors, people wrongfully convicted and killed by the state. I don't want the state to have that much power, that kind of thing.

Yeah, yeah. And I think there's too much of an emphasis on retribution in our criminal justice system. Right, right. And that we need to really take more seriously the idea of, you can call it rehabilitation, you can call it putting people in an environment that's likely to make them better citizens rather than worse citizens. Yeah, restorative justice. Yeah, I like that. Yeah.

All right, let's see how far you'll go with me on the is-ought fallacy fallacy, as I call it. Okay. Because much of what we're discussing, I think, really comes down to people being factually wrong or making mistakes, and to the idea that we can discover moral truths. I'm not even sure how far I could go with this, but let's see. So here's what I write: it is my hypothesis that in the same way that Galileo and Newton discovered physical laws and principles about the natural world,

that really are out there, so too have social scientists discovered moral laws and principles about human nature and society that really do exist. Just as it was inevitable that the astronomer Johannes Kepler would discover that planets have elliptical orbits, given that he was making accurate astronomical measurements and given that planets really do travel in elliptical orbits, he could hardly have discovered anything else.

Scientists studying political, economic, social, and moral subjects will discover certain things that are true in these fields of inquiry. For example, democracies are better than autocracies. Market economies are superior to command economies. Torture and the death penalty do not curb crime. Burning women as witches is a fallacious idea.

and that women are not too weak and emotional to run companies or countries. And most poignantly here, that blacks do not like being enslaved and Jews do not want to be exterminated. And the next step is, well, why don't they want to be exterminated or tortured?

So my starting point is the survival and flourishing of sentient beings. But why would sentient beings want to survive and flourish? Well, because of the second law of thermodynamics. Here I use Leda Cosmides and John Tooby's paper arguing that the second law of thermodynamics is the first law of psychology.

That is, the most basic lesson is that natural selection is the only known natural process that pushes populations of organisms uphill, against the tide of entropy, toward greater functional organization.

Right? So, and then I conclude, and then you can respond. Yeah.

and that they prefer ignorance to education and illiteracy to literacy, that women want to be lorded over by men, that some people like being enslaved, and that large populations of people don't object to being liquidated in gas chambers. But I doubt it. Yeah. So I think for all practical purposes, I agree with everything you said. That is to say,

We have, coming from both the life sciences and the social sciences, an increasingly compelling, integrated, systematic understanding of what drives human behavior, what drives conflict, what drives cooperation, and what types of things tend to make people more happy, healthy, prosperous. Right?

To the extent that I disagree with you, if I do at all, it's almost a technicality. That is to say, and this is me putting on my strict philosopher hat, right? If someone says, OK, I agree with you about all of these facts about what promotes

increases in happiness, increases in the availability of resources, increases in the things that tend to make people well-resourced and happy: education, freedom from violence, all those things. I agree with everything you said. I just don't, and here is my counterpart interlocutor speaking, I just don't think there's a fact of the matter that happiness and material flourishing are better than suffering or better than nothing.

And if someone says, I just disregard your basic premise there, I don't know if I have an argument against them. I can just say, well, if you're going to try to make the world worse,

you know, empty of the things that I think are valuable and that other people think are valuable, I'll fight you on it. But I'm not sure that I can give a knockdown argument to someone who's just willing to disagree with the fundamental moral premises. I think that for practical purposes it's kind of irrelevant, because the forces that are opposing

that better world are not self-avowed psychopaths, right? They're not evil demons espousing that kind of view. They are people who are fighting primarily for their own self-interest or for the self-interest of their group. Right. And so,

Insofar as they're trying to defend what they're doing in moral terms, I think we can appeal to all of the things that you cited and say: well, you claim that the world is going to be better if you're put in charge and all of the resources go to what you want, right? And there are these rules put in place that seem to do very well for you, but not for other people. But the evidence strongly suggests no, and the evidence says, don't be surprised that you're biased about this. Right. So I think that

For practical purposes, we're dealing with other humans who've kind of got one foot into our shared values and one foot out. And what we need to do is leverage the part that's consistent with a broadly beneficial world to...

talk them out of their irrationality and inconsistency and bias, if we can. So I think, as I said, for practical purposes I agree with you, but I could make a sort of nitpicky philosophical argument about whether or not

the premises are self-evidently true. I guess, you know, I'm not a philosopher, so I look at it from a social scientist's or historian's perspective. What are people actually doing? They vote with their feet. Where do they want to live? North Korea or South Korea, East Germany versus West Germany before reunification, and so on.

Now, a counter to that might be national sovereignty, where maybe a Middle Eastern country will say: it's none of your damn business what we do over here. We don't like Western values. We don't want our women running around in bikinis, we don't want your porn, and so on and so forth,

and prostitution. This is what we think generates a higher-quality society, with these very draconian laws and rules about women and so forth. And we Westerners are appalled by this.

But are we violating their rights to national sovereignty and autonomy when we say: we are going to bring you democracy, along with capitalism and all the other stuff that goes with it, and you have to take our movies and our music and sex, drugs, and rock and roll? And they go, no thanks, we don't want that. Yeah. Well, I actually think

That, you know, you won't be surprised to know that when I imagine a world getting much better than ours, it doesn't look like Islamic fundamentalism or Christian fundamentalism or any other kind, right? But I also think that we who look at certain countries we might think of as backwards and say they've got everything wrong, we might be a bit more humble and say: we are asking them to give something up.

And before we expect them to give something up, we have to offer them something better in return. I remember someone relating this story, I can't remember who it was, about talking to someone in Saudi Arabia. Someone said to them, how can you cut off somebody's hand for stealing? How is that not horrible and cruel? We on the outside look at this and think it seems barbaric, right? And the person replied: look what you do to your elderly people, that you have them put

away from their families and they die among strangers. What could be less humane and more horrible than that? And the person who related the story to me was struck by that. It touched a nerve. And I kind of get that: we in the modern world have made enormous gains. And I think that the people who, you know,

hate everything about America or everything about the West, I think they're really missing a lot, and there's a lot that they take for granted, like not dying in childbirth, that we've gained. But I think we've also lost some things that are really valuable, things that more traditional societies

have preserved for now, right? And that they feel in danger of losing. They feel in danger of losing their connection to a specific place. They feel in danger of losing, you know, a livelihood that makes sense to them and a world that has meaning and purpose.

And take, for example, Robert Putnam's classic Bowling Alone, about the erosion of community in places like the United States. That's real, right? And so I think, when we're imagining what a happier world would look like, one that takes the best of the West,

but also acknowledges what people in Saudi Arabia and Iraq value and feel like they're fighting to preserve, what can enable us to provide the kind of meaning and connection?

Yeah.

Nicely put. All right, last subject. Moral Tribes is the title of your book. There it is again, Moral Tribes. It seems like we're more tribal than ever. Is this the availability heuristic? And I'm just forgetting how bad it was in 1968 or Watergate or whatever. We're coming up on this election. It just seems like everybody hates everybody. I don't know. What do you think? I think we're going through...

some growing pains, a kind of adolescence. Traditionally, the United States has been dominated, both demographically and politically, by white Christians who come from Europe and share a certain worldview.

And, you know, that is changing, right? And people look at the ads on TV and whose faces are in them, and they look at what's around them. And a lot of people, especially people who don't have college degrees and who, in fact, economically speaking, are doing worse

than their parents did, and feel like there's no way forward. It's a double whammy: a sense of loss of status in the country and in the world, combined with a sense of alienation, of feeling, I don't feel at home in my country anymore, at least not outside my town. Right.

And they don't feel part of this larger us that includes more people from other cultures, and they feel like the move towards that larger us has been a loss for them, right? And this is happening in the United States, but it's also happening in Germany and Brazil and places all over the world, right? So populism. Yeah. Populism is a response to that. And I think what's going on here is that there has never been a democracy

that has had good, strong, inclusive institutions, where every vote counts and policies are designed to benefit everybody, and in which the dominant majority group that had been there for hundreds of years now fully shares power with others. So the United States is diverse.

And a lot more so than other countries. But historically, you look at the faces of all the presidents except Barack Obama, right? That's who's been in charge, right? And there's never been a democracy that has maintained its strong institutions and its inclusiveness while crossing the threshold from being one

dominated by what has historically been dominant to one that is truly open and pluralistic. And so, in a sense, it feels like we're backsliding, that we've become more tribal. But what's really happening is that we are going through the painful process of becoming less tribal, right? And that's what I think is really going on, that we are feeling growing pains

rather than completely regressing. But it's dangerous, and we might not make it. We might be the adolescent who gets drunk and drives into a tree and dies, right? So it's not like it's all going to be okay; there's a chance that we won't make it through this process. But I think that what's happening is that we are undergoing a kind of difficult growth.

And, you know, I hope that we can figure out ways to get through it and come out stronger and happier on the other side. What would you advise Elon for colonizing Mars and starting a whole new civilization? You know, I think it's good to have a backup planet. But I think our terrestrial problems are much bigger at this point. And if we end up destroying ourselves here on Earth and needing that backup planet,

We're going to destroy that backup planet too if we don't solve the fundamental problems of uniting as a species. So I think that colonizing Mars well is not just a technical problem of growing plants and providing oxygen and finding the water, right? We need to solve the human psychological problem first.

Well, right. So the moment you transition from a couple dozen people to a couple hundred to 10,000 people living there, right, you're going to end up with borders and committees, and you're going to have representatives and leaders. Yeah. I mean, something has to be structured. Yeah, that's right. Our angels and devils will follow us to Mars, right? And so, you know, the real problem to solve...

is in here and in here. It's not the technical problem of setting up an outpost on a faraway rock, right?

As romantic as that is. That said, it's cool; I love reading about this stuff, and I can't say I'm not a fan. But I don't think that's our most fundamental problem. But thank you so much. This has really been fun, and it's great to connect with you after getting your ideas all these years. So thanks so much for the opportunity. All right.