Oh, by the way, before we get into this episode, I would love to tell you a little bit about Life Notes. Now, Life Notes is a weekly-ish email that I send completely for free to my subscribers, and it contains my notes from life. So notes from books that I've read, podcasts I'm listening to, conversations I'm having, and experiences I'm having in work and in life. And around once a week, I write these up and share them in an email with my subscribers. So if you would like to get an email from me that contains the stuff that I'm learning, almost in real time as I'm learning it, you might like to subscribe. There is a link down in the show notes or in the video description.
I have a vision for the world where everyone has this kind of shared goal of just making the world better. And it's just agreed, we've got this like impartial goal and everyone's just willing to reason about it. And that's
that's understood to be just like part of living a good moral life in the same way as like not being a racist, you know, not like murdering and so on. And I'm just like, that is achievable. Like we could have that world rather than the world where it's like us versus China and like the, you know, the woke and the anti-woke and everyone's fighting. And I'm like, no, instead it's kind of like, like within EA,
is just like, you've got all of these people that are like, okay, we want to do as much good as possible. We believe in like reason and argument as the way of like figuring out things. And it's like, people can have disagreements, but there's always this feeling of like shared project. And I'm like, okay, I want to just expand that to the world as a whole.
And I'm like, okay, how do we get there? And then it's like, well, one path on this, and like, obviously I'm just trying to move the world in that direction. But one path in that direction is... Hey friends, and welcome back to Deep Dive, the ongoing podcast where every week I sit down with authors, entrepreneurs, creators, and other inspiring people. We find out how they got to where they are and the strategies and tools and mental models and mindsets and stuff
that we can learn from them and apply to help live our best lives. What you're about to hear is an incredibly exciting interview with Will MacAskill. Will is an associate professor in philosophy at the University of Oxford and was in fact the youngest ever associate professor. I think at the age of 27 he got his professorship there. His research focuses on effective altruism, the idea of how we can do the most good with our careers and with our lives and with our money.
and he's written this new book, "What We Owe the Future," which is basically all about the idea of like, what are the things that we can do right now that will help impact future generations disproportionately highly? This is an incredibly long and wide-ranging conversation. Normally we cut down these conversations quite a lot,
But we thought, you know what, we'd try a bit of an experiment and give you the whole four-hour-and-something-long conversation that I had with Will. We covered all sorts of things, from how to figure out what to do with your life to productivity tricks. We talked a lot about the idea of longtermism and all the different existential threats that humanity is facing and what we as individuals can do about it. And just like generally an absolutely sick conversation. His book has just come out. We'll put a link in the chat. It's really, really, really good and has completely changed my perspective in terms of like understanding what are the risks that we as humanity face,
potentially even existential risks. Could we become extinct in the next 5,200 years? Maybe. And there are things that we can do about it. So hopefully you enjoy this conversation as much as I did. Will, it feels kind of weird to call you Will because you're like a professor of philosophy at Oxford and you've got all these like... You're welcome to call me the professor. The professor. Well, professor. I'm so intrigued by your backstory. So you're like one of the youngest, if not the youngest professor at Oxford. Yeah. So the little factoid is...
At the time of my appointment, so I was 28, in 2015, I was the youngest associate professor of philosophy in the world. Okay. So at the age of 28, you became the youngest ever associate professor of philosophy in the world. Yeah. And you founded, or rather, you were one of the originators of the effective altruism movement. You co-founded Giving What We Can, or...
80,000 Hours, in various capacities. How did you become the youngest associate professor of philosophy at Oxford? And what does that mean? Sure. Yeah, I mean, a good dose of luck involved in that. In particular, at the age of 22, when I just moved to Oxford to begin kind of graduate school, I was just very moved by the problems in the world. In particular, I'd been very influenced by Peter Singer's arguments, where the kind of idea is that
we live, even if you're on a middle-class income in a rich country like the UK or the US, by global standards, you're really in, you know, quite intense luxury. You have something like a hundred times the income of someone in extreme poverty. Given that, don't you have a moral obligation to be using your resources to help the very poorest people in the world? And he uses this story of, imagine if you're just walking past a child drowning in a shallow pond, you see that child. And now imagine, you know, you're wearing a very nice suit. You're on your way to a job interview. It's a high stakes thing.
And the suit is worth hundreds of dollars, maybe thousands of dollars. And imagine that you just see that child drowning. You think, this suit is just kind of nice.
I'm just going to walk on by. Don't want to lose the few thousand dollars. In moral philosophy, we have a technical term for people that would do that, and that's being an asshole. And the thought, Singer doesn't use that term, I admit. But the thought is just, if you were to do that, you'd be an asshole. Like the loss of a few thousand dollars to you is just nothing in comparison to the loss of a child's life.
And that just really moved me. So I'm at graduate school. I'm planning to be a philosopher. I'm working on logic, philosophy of language, very esoteric things that are very intellectually interesting but don't have a clear path to impact. And I'm like, I need to alter this. I need to somehow start living life in accordance with my values.
And it was there that I met Toby Ord, who was another graduate student at the time. And it'd been interesting, because I'd started kind of shopping around for a set of ideas that could really move me. And so, you know, I attended the Socialist Workers Party, I went to some climate activism rallies, got involved with the Vegetarian and Vegan Society. But a lot of the ideas came with
a lot of self-hate. So sort of feeling very bad about the problems in the world and not actually that much action. Not that much thought about how much good you could do. And then Toby told me he'd made this pledge to give away everything he ever earned above £20,000 per year. And not only that, he had started really thinking about, with that money that he could give, where could he give it, what are the most effective places, that would help other people the most? And so he had this incredibly in-depth analysis that was extremely action-oriented.
And not only that, he was really upbeat about it. He wasn't like, oh, I'm wearing this hair shirt, self-flagellating. I hate myself, but I guess I must do it. Instead, he was just like, look, this is an amazing way of living. I can live in accordance with my values. This is going to be a very small hit, if anything, to my own well-being. But yet I can do enormous amounts to improve the lives of hundreds or even thousands of people. And so we had this five-hour chat. From there, I was completely sold. Okay.
And honestly, from that point, philosophy kind of took the backseat. I helped him co-found Giving What We Can, which encourages people to give at least 10% of their income over the course, you know, every year over the course of their lives until they retire. I believe that you're a pledger as well. Yeah, I'm a pledger of Giving What We Can, pledger, whatever the thing is. I made a video about it a couple of years ago. Well, thank you for doing that. I mean, it's just, you know, a huge impact that you're able to do.
And so since that point, I split my time. I was about a third on kind of academic philosophy. I wanted to kind of keep the options open, but then the rest of the time I was helping to build what ultimately became known as the effective altruism movement, via setting up Giving What We Can, focused on donations, and then a couple of years later, 80,000 Hours, which is advising young people on how they can use their careers to do as much good as possible. And ultimately,
I really thought, you know, I'd always hoped to be an academic, ever since, it's a bit embarrassing, like age 18. Yeah.
Normally you have like cooler aspirations, but I wanted to be a philosopher. Okay. And so I really thought I was going to be making a sacrifice, that that was just the end of my academic career. But through some combination of luck and being somewhat strategic in what I worked on, and, like, efficient in how I was using my time, I was able to just publish well and then get, yeah, this position at Oxford, which I feel very happy about. And now... you know, I had many years of living this slightly schizophrenic life of being an academic on one hand. Yeah.
And something more like, you know, a social activist or social entrepreneur on the other hand. And now I've been able to kind of blend the two, where, yeah, the work I'm doing is both, you know, I think at like a kind of high academic level, but, yeah, it's also squarely focused on the most important issues. Love it. Oh man, so many questions about that. Firstly,
What is a philosopher, and what does it mean, at the age of 18, to decide you want to be an academic philosopher? Like, what does that even look like? It's funny, when people ask me what I do, I'm like, I still don't have a good answer. I'm like, I'm a lecturer, but I don't actually really give lectures. Like, oh, I'm an academic. People don't know what that means. Yeah. I'm starting to just own it and say, like, I'm a philosopher. Yeah. What is that? What does a philosopher do? I mean, the stereotype that we sit and think is kind of true. So I think
the closest thing is, like, what does a mathematician do? And a mathematician just ultimately kind of sits there. They read proofs. They read, like, large amounts of mathematics. There's some cutting edge of thought where there have not been further proofs. And they try and, like, further that. And it involves a lot of, like, sitting, like, pencil in hand, you know, trying to figure out new proofs. You can think of philosophy as a discipline as, you know, imagine, like, the entire sphere of thought
Okay. Of questions you could ask. Yep. There are some questions that we can answer with experiments. Sure. That's natural science. Yep. There are some questions we can answer with proofs, and that's like the domain of mathematics. Yeah. There's still a whole bunch of questions that we can't answer with either of those things. So is there an objective morality? If so, what ought we to do? And then lots of the other classics: do we have free will? Does the external world exist? And so on. Okay. And...
What can you do if you've got those open questions that are extremely important but you can't use a proof or you can't use experiments? Well, the best we can do is kind of informal reasoning. So just very high quality arguments, clearly stating your premises, clearly stating what you're trying to argue for. And that most resembles the mathematician sitting at a desk doing proofs. But what you don't have is proofs. You don't have proofs. Instead, you've just got arguments.
And so you've, again, got to know the kind of literature, the state of thought on this topic, like, you know, what's the correct moral view? And there's arguments back and forth. You try and generate new arguments. You criticize others' arguments. You turn them into papers and books, and then you keep kind of having this back and forth. Okay. And then at some point along this journey, you start calling yourself a philosopher, slash other people start referring to you as a philosopher. That's right. I mean, or is there like an official, like with Doctor, there's like, you know,
Yeah, there's not a time, even as an undergrad, people start referring to you as a philosopher, even though you're studying. I mean, I guess the natural time at which you can really say you're a philosopher is when you've finished your PhD and have your first academic position. Ah, okay. Then you're an official philosopher. There's no like knighting ceremony or anything. Okay. So it sounds like as a philosopher, you're spending your time essentially...
Basically taking a question, which is the sort of question you might get in, I don't know, a university essay, like, does free will exist? Exactly. But actually seriously tackling that question. Exactly. By reading all of the things that anyone has ever written about it. Yeah. And then trying to add your own flavor of reasoning to that. Yeah, exactly. So it might be, you know, there are these, let's take the example of free will.
well, does it exist? You might think, yes, of course it does. Like I make the decisions all the time. You know, that's like one argument. It's kind of an appeal to introspection. But then you might say, okay, no, it doesn't exist because there's this argument that you take the laws of nature, you take the starting conditions of the universe and
If I had a super powerful supercomputer, I could predict exactly what was going to happen through the entire course of the universe. It's actually not quite true because of quantum physics and quantum uncertainty, but we can put that to the side. I don't think it ultimately affects the argument. Okay. And so therefore, if I could predict everything that happens, including every single action of yours, you don't have free will. Hmm.
That's an argument. That's a very influential argument, of course. And what the philosopher might do is criticize it. So they could say, okay, the fact, yes, it's true. They might accept, yes, it's true that...
In a relevant sense, you couldn't act otherwise than in fact you did. Yeah. Because all of your actions are determined by the laws of nature and the starting conditions of the universe. However, nonetheless, free will does not require you to be able to act otherwise than you in fact did. And they would give some examples where you give this kind of situation and you say, look.
This is a situation where the person seems to be acting freely, yet it's also true that they couldn't have acted otherwise. And I can give you an example if you want. Yeah, what's an example of that? I can't think of one. Imagine that there are three people, we'll call them Jones, Brown and Smith. So Jones wants to kill Brown. Yep. And he, you know, goes, buys a gun, goes to where Brown is living, has this intention: I'm going to kill Brown. Yep.
Smith also wants to kill Brown. Yep. And Smith has implanted an electrode into Jones's head such that if Jones were to decide, oh, actually, I've changed my mind. Yep. Have I kept the names correct? Yes. Jones is the sniper. Yep. Jones is the one who's going to kill Brown. Okay. Yep. Smith is the neuroscientist. Yes. Who also wants to kill Brown. Who also wants to kill Brown. Got it. And he's implanted an electrode into Jones's head. Exactly. And the electrode is such that if Jones changes his mind. Yep.
It'll take over his body and he'll shoot Brown. Okay. Now what happens is that Jones goes to Brown, shoots him dead, intentionally. The electrode never needed to go off. Yep. Seems like that's a free action. Sorry, does Jones know that he has this electrode? No, he doesn't know. Oh. He doesn't know. Okay. Yeah. Yeah. So from Jones' perspective, he's just like, I'm going to kill that guy Brown. Yeah. Buys a gun, goes, shoots him. Yep.
Seems like free action if anything is free. Yeah. However, he couldn't have done otherwise. If he'd chosen not to kill Brown, the electrode would have activated and his body would have gotten taken over and he would have shot Brown anyway. But that would have been a clear case of him not having free will, right? If the electrode had taken over.
Right. So it's, if he had opted otherwise, perhaps he wouldn't have free will. However, given how he did in fact act, it seems like, you know, if we're going to call anything free will, it seems like that's it. But that suggests that you can act freely without needing to have acted, without it having been the case that you could have done otherwise. Yeah.
Got it. So it's like, even though you can predict exactly what the outcome is, that doesn't necessarily mean that he didn't have the free will to execute the outcome. Exactly. Yeah. Okay. So that's a sort of defense. Which argues against the argument from determinism, the whole supercomputer analogy. Exactly. Okay. So instead, perhaps what acting freely is, is in some sense it being an authentic decision. Yeah. It coming, like...
You know, it being something not only that you want to do, but you want to want to do. But in some way, a decision that you yourself endorse. And that might be perfectly compatible with all of these decisions being utterly predictable. So we just had a little one minute taste of what philosophers do. And then the next round would be people criticizing the things that I just said and back and forth. How do philosophers make money?
How do they make money? It's a good question. Most of them don't make much money. I mean, it's almost all paid for by universities, which are ultimately funded by the government or large foundations. Is the dream for an academic philosopher to get tenure at a university so that...
Their sitting and thinking about stuff gets funded by someone else. Exactly. Yeah. So when I had this first position, about half of my time was on teaching and university admin and things like that. In a sense, that's what pays the bills, along with, to some extent, research funding. Research funding is, like, very small in the grand scheme of things. It's now the case that I'm not in a teaching position, because, you know, effective altruism is an area where moral philosophy is actually having a real impact in the world. It's advising
now how many billions of dollars of philanthropic money are being spent. And so there it's actually like, it's a really worthwhile investment to be trying to get greater clarity on some of these crucial issues that affect, well, really, how can we do as much good as possible with our resources? Yeah, absolutely. Okay, so definitely want to talk way more about effective altruism. I have been a fan
fan and a sort of proponent of the movement ever since Lucia, who's sitting right there, told me about it in like 2015, when we were in clinical school together. One thing I was going to ask: so you said your friend Toby donates everything above 20 grand, £20,000 a year, to effective causes, and you said that that would only be a small hit to his well-being. Yeah.
Most people listening to this would think, crap, if I was earning just 20K... Most people wouldn't be keen to accept a 20K job, and they'd be thinking, I need to make more money than that because that will make me happier. And a lot of the pop psychology studies that I've seen in books say 70K is roughly the point of diminishing returns, beyond which more money does not actually contribute to more happiness. So where does this 20,000 figure come from? Yeah, I can even talk about myself. So a few months after meeting Toby, I made a...
A similar commitment. In my own case, at least, I can't speak exactly for Toby, I'll kind of clarify it. So that's £20,000 post-tax in Oxford 2009. So that's now about £26,000, £27,000 post-tax in Oxford. Okay. Like taking inflation into account. Yeah. And then, you know, there's lots of possible caveats and so on. Like, if I have kids, you know, it would increase. It's not about me. Okay. Also health and so on. Yeah. Yeah.
But then, okay, what's the impact on happiness? Yeah. Well, money does buy happiness. My best guess is that, at least until you're at a very high level, that actually keeps going up. These studies say, okay, it's like $75,000. But remember, that's the point at which it makes no difference at all. Yeah.
you're already starting to plateau quite a bit below that. On some other studies, it keeps going up, but every doubling of your income results in the same increase in happiness. So you go from $1,000 a year to 2,000, that's the same as going from 100,000 to 200,000 in terms of your increase in happiness. So then when you actually just look at the studies and think, okay, if I'm going from, let's say it would be 100,000 pounds a year to 26,000 pounds a year, what's the happiness loss?
just from the kind of financial loss. And it's really pretty small. It's like, I mean, in my own case, like, well, let's start off just looking at the literature. It's just like a small incremental loss. Maybe it's 10% or something. But I think probably not even that.
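To make that doubling claim concrete, here's a rough back-of-the-envelope sketch in Python of the logarithmic-utility idea being described; the utility scale and the specific income figures are illustrative assumptions, not numbers from the conversation.

import math

def wellbeing(income):
    # Crude logarithmic utility model: every doubling of income adds
    # the same fixed increment of wellbeing (log base 2 of income).
    return math.log2(income)

# Each doubling adds the same increment, whether you start poor or rich.
print(wellbeing(2_000) - wellbeing(1_000))      # 1.0
print(wellbeing(200_000) - wellbeing(100_000))  # 1.0

# Illustrative only: dropping from ~£100,000 to ~£26,000 is just under
# two doublings, i.e. roughly two "increments" on this crude model.
print(wellbeing(100_000) - wellbeing(26_000))   # ~1.94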
But then you have to compare that to the good you can do. And so we can, you know, we'll talk about some of the issues that impact the long-term future, where I think the stakes get even greater again. But even just looking at the amount by which you can benefit people in poor countries, you can do about 100 times as much good for someone else as you can for yourself. Because there's two ways of thinking about this.
One is in terms of, if you want to provide a certain sort of health benefit to yourself. So in the UK, the NHS, in order to prioritize kind of what drugs or treatments they fund and what they don't, they use a metric called a quality-adjusted life year. It's like one year of very high quality health. You obviously know this, of course, having been a doctor. How much does it cost to buy one year of high quality health?
It costs 100 times, actually maybe about 1,000 times, as much if you're living in the UK as if you're in a very poor country like Nigeria or Burkina Faso or Uganda. So if I'm like, oh, I could spend that money on myself or I could spend it on other people, I'd be providing 1,000 years of healthy life to other people.
Compared to one year for myself. Yeah. Alternatively, I could just think about transferring cash. Yep. Because of this fact that every doubling in income increases well-being by the same increment. Yep. And because of the fact that a typical member of somewhere like the UK or the US is about 100 times richer in financial terms than the very poorest people in the world.
My reducing my income by 1% increases the income of 100 people by 1%. And that provides 100 times as much benefit. And so that's the second part of the consideration. And that's not even how I think you can do the most good. I think you can get much bigger again. Sure.
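A minimal sketch of the arithmetic behind that hundred-times claim, again under the illustrative log-utility assumption above and a hypothetical 100x income gap; the incomes below are made-up placeholders.

import math

def wellbeing(income):
    # Same crude log-utility model as in the earlier sketch.
    return math.log2(income)

my_income = 30_000            # hypothetical rich-country income
poor_income = 300             # hypothetical income roughly 100x lower
transfer = 0.01 * my_income   # give away 1% of my income (= 300)

# My loss in wellbeing from having a 1% lower income.
my_loss = wellbeing(my_income) - wellbeing(my_income - transfer)

# Split the transfer across 100 people; each gets a 1% income boost.
their_gain = 100 * (wellbeing(poor_income * 1.01) - wellbeing(poor_income))

print(their_gain / my_loss)   # roughly 100x the wellbeing benefit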
That's the kind of baseline. And so the thought, I mean, here's an analogy. It's like, okay, you can, you're out on an evening, you can buy a beer for yourself. You can buy a hundred beers for other people, one or the other, what you want to do. I don't know. I would feel like, okay, the next round's on me if it was really that cheap. But that's the situation we're in at the moment. It's a small, you know, hit to your own wellbeing, enormous benefit to others. It just seems like a really good deal to do that. The final point is just that all of that was assuming that giving money is like having an income hit.
like just having a lower salary. But it's not, because giving is enjoyable itself. So if I think about what's been the impact on my well-being from being involved in effective altruism, donating most of my income, well...
I have a much more meaningful life, and I'm now part of this, like, wide community of people who I just love and admire, a source of joy in my life. It means that, like, when I'm feeling down and, like, you know, not liking myself very much, I can think, well, at least I'm doing something, like, you know, I'm really helping other people. And there's just this literature as well on volunteering and on giving: giving gives this kind of warm glow.
And sometimes people give only for the warm glow, but you get it even if you're aiming just to help others as much as you can. My honest guess is just that it's probably just been good for me overall, rather than actually a well-being hit, as I expected it might have been. OK. Lots of questions on this front.
If I could buy 100 beers for other people versus one beer for myself, I would do that. But there is a difference in practice between the warm glow I get from being able to buy beers for everyone in the office and the warm glow from buying beers for people in a far-off country that I will never see in my life. This, I feel, strikes at one of the issues that I guess Peter Singer is very against, and also the effective altruism movement is, like,
whereby I know that intellectually I should value the life of, and I do value the life or I should value the life. I would like to value the life of someone who I don't know as much as I value the life of my brother. But in reality, that's just not true. And so how do you think about that thing of helping myself, helping my family and the people closest to me versus not?
Helping people that I will never see or meet somewhere in the world. Yeah, so it's really tough. I don't think moral philosophers have a great kind of answer to this. I think there's, you know, two different moral perspectives. One is the, you know, impartial utilitarian perspective, which just says, well,
You know, imagine if you were making a decision about how the world should be and you didn't know who you were going to be in society. It could be, randomly, anyone. What would you do? Well, you'd want everyone to care about everyone equally. You'd want to have much more concern for the very worst off in the world. There's another moral perspective which says, look, no, we're not behind this veil of ignorance; we are these engaged people, we have relationships, and these give special obligations.
I think in the way that the world is today, there's not that much tension between them because the stakes are simply so high. So for sure we can have these, so philosophers, you're asking what philosophers do. They come up with thought experiments. They love thought experiments. And they might be like, okay, you can save, there's a burning building. You can either only save kind of one group. There's your brother in one, or there's 10 people, strangers to you in another. A hundred people, a thousand, 10,000. Probably at some point you switch. Yeah.
It's not clear kind of where exactly. When I ask people, it tends to be like a hundred or a thousand. Yeah, I was thinking somewhere between that amount as well. For sure, you may at some point in your life face a very difficult situation where your brother's life is on the line and you can use a lot of money. And there it's just a very hard moral decision, like given the way, you know, the enormous amount of injustice in the world, yet also the special obligations you have to your family.
However, most of us, most of the time, are just not in those situations. Yes. The decision instead is, do I buy this nicer car or give the money away? Yeah. Or do I go on this nicer holiday or give the money away? Honestly, people just waste money most of the time. Do I have a slightly nicer house? Those are the decisions that I think we really want to focus on, at least in the first instance. Let's at least get to the point where we're not all frivolously spending money on luxuries that
don't have an enormous impact on our well-being. And then once we're at that point, then at least we can start having a conversation of, okay, when it's a really serious issue for our friends and family, like at what point does impartial concern for strangers make a difference? All right, a little quick interlude before we continue with the podcast. There is this big-ass book, Principles by Ray Dalio. People have recommended this book...
for absolutely years, but it's just really huge. And it's more like a reference book than it is a book that you read cover to cover. So I've read little segments of this book rather than the whole thing. And the way I found which segments to read was actually by reading the summary of the book first on Shortform, who are very kindly sponsoring this episode. Now, if you haven't heard by now, Shortform is the world's best service that summarizes books, but it's way more than just book summaries. They've got one-pagers that summarize the key ideas of the book on
a single page. And they also have detailed chapter-by-chapter summaries that you can then sort of dive into one at a time if you want. They also have these interactive exercise segments in between some of the chapters so that you can engage more with the ideas of the book. And even better, if the author says something that's particularly controversial or that another author has disagreed with, then on Shortform they'll include a Shortform note which says, hey, this thing that Ray Dalio said is actually not entirely legit, like Marie Kondo or someone disagrees with him, and here are more details about that.
So it just gives you a little bit more of a balanced viewpoint of the book that you're trying to read. Now for me, there's two main areas of my life in which I use Shortform. The first one is that if I have already read a book, like for example "Steal Like an Artist", then I will often look at the Shortform summary of the book just to revisit the ideas, if I wanna revisit the ideas in the book, rather than rereading the book I've already read. But the second one is what I did for "Principles" by Ray Dalio. And that is usually if a book has been recommended enough times, something like "Principles" by Ray Dalio, and I'm not 100% sold on whether I wanna read the book or if I find myself short on time,
then I'll often just look at the summary of the book on Shortform. And then sometimes I'll decide whether I want to read the book or not based on that. But in the case of Principles, I decided that, okay, I need to have this on my shelf. I need to get this on Kindle. This is actually a genuinely good book. But right now, most of the points are not relevant to me. So let me figure out on Shortform which point is most relevant to me. And then I can dive into the book and read that particular chapter in a focused fashion rather than trying to read the book cover to
cover. There's nothing wrong with this way of reading. You know, I often think we should treat books more like blog posts. So please don't cancel me for kind of besmirching the name of a book by not reading it cover to cover. But I think a service like Shortform is really useful in the sense that it actually builds on the ideas in a book and helps you engage with them in a way that you might not have done otherwise. Anyway, if any of that sounds up your street and you want to get access to the world's best book summaries, more than just book summaries, then head over to shortform.com forward slash deep dive. And that URL will give you 20% off the annual premium subscription. So thank you very much Shortform for sponsoring this episode.
Yeah, yeah. I think in my mind, and certainly in the minds of a lot of people I've spoken to about this, it's almost uncomfortable to think that...
I bought Thing X. Thing X cost me $3,000. That could have been the life of a child through the Against Malaria Foundation. Let me not think about that and instead think about the hypothetical scenario where I would still choose to save my brother rather than 100 strangers. And in a way, the extreme case is a very good, easy... It's easy to dismiss. Easy to dismiss. Be like, oh, obviously in that case, I'm not that utilitarian. Exactly. And something that people often get wrong about effective altruism is that in economist language, we're talking about what we do on the margin, or like...
Or like, say we take the world as it is, you make a small difference, like, or how you're spending your money, like, you know, comparatively small change. Does that make sense? Not, how should the entire world be different? Where, you know, people say, oh, if everyone was funding bed nets, no one would be funding the arts. Wouldn't that be a worse world? I'm like, let's have that conversation once we've increased kind of, you know, global funding for global health and development issues.
up from where it is. You know, I mean, it's a few hundred billion per year, but it could be much more than that. Once we've eradicated extreme poverty, and we could do that while still having enormous funding for the arts, let's then have that conversation. Similarly with some of the causes we prioritize. You know, maybe we say this particular area, I don't know, let's say it's pandemic prevention. Look, this is a really top priority area. We need to be funding it more. And if someone says, like, oh, well, what about other causes like climate change or education or something? Is that
You're saying that's less important? And the main thing is like, it could well be more important or as important. But if it gets like a thousandth of the current funding, then on the margin, given the way society is currently prioritizing, we want a bit more of this thing. And that's why it becomes the most important priority at the current time. So kind of, yeah. Yeah.
the way in which we should be nudging the world at the moment doesn't in and of itself have a lesson for how should the world be constructed in an ideal society. Yeah. Oh, yeah, that's good. I was actually giving a talk at the Oxford Union two weeks ago. One of the questions I got afterwards was I left medicine to pursue other things and in a video where I talked about why I left medicine, I
sort of, the fourth thing on my list, other than money and enjoyment and autonomy, was impact. And I talked about the impact of an individual doctor, cited the article on 80,000 Hours about this, talked about my different sort of impact from making a YouTube video that encourages X number of people to take the Giving What We Can pledge.
And, you know, how I convinced myself that that analysis is reasonable, and it is a reasonable decision, or at least not morally reprehensible, to leave medicine for the sake of making YouTube videos, which would at first glance otherwise seem to be a morally reprehensible decision. But someone asked me, like, well, if you encouraged all of the, if every doctor thought like you, we'd end up with no doctors, and then that would be bad. Yeah.
I was like, oh, I don't think we're there yet. We don't need to like, yeah. Yeah. So I've been asked the same question. My response is just, if only I were that persuasive. Yeah. I'm not like, obviously that would be a terrible situation, but we're very, very, very far away from that. And so again, it's just kind of what has happened
What is the world as a whole over-prioritizing and under-prioritizing? And at the moment, I'm claiming we're over-prioritizing philanthropic spending on the arts and opera or causes that are focused on issues affecting people in rich countries rather than issues affecting people in poor countries. And as we'll get onto later, under-prioritizing issues that impact the whole course of future civilization. Absolutely.
I would love to hear your philosopher's take on this. If we accept that the argument of "if everyone did X, then
that would be bad" is not necessarily valid against encouraging a small number of people to do X, can I use the same argument to say, I don't want to recycle my tin can of Coke? Because someone else would be like, yeah, but if everyone thought like you, then we'd end up with so much more whatever in the environment. And I would say, but they don't, and, like, unless China and India and the US change their industrial stuff, my contribution to climate change is very negligible. Yep.
It's a very unpopular view. I mentioned it in a video one time and I got lambasted in the comments for being a terrible person. But I wonder what would be the philosopher's argument? Or in my mind, I don't currently have a consistent reason as to why I as an individual should waste my time to vote or to recycle when I could theoretically be making a YouTube video in that time that theoretically reaches more people and has theoretically more impact. So I think in a lot of these cases where...
it seems like, oh, my action as an individual doesn't matter, I think actually it does. Oh, OK. At least in expectation. So I'm going to use some... Nice. Yeah, a little bit of math, that's very helpful. So let's take the example of voting. Yes. So you might actually think one vote doesn't make a difference. It's going to be the same outcome either way. And probably that's true.
But it's not certain. Yeah, sure. Your vote, like, sometimes votes are very close. Sometimes they can go to a recount. Who wins is just the totality of the individual votes that people have made. Yeah.
And so for sure, the chance of your vote swinging the election is very, very small. But if you do, that's a truly huge impact. I mean, the GDP of the UK is three trillion pounds per year, something like that. The governmental spending as a whole must be about a trillion. Government makes enormous decisions. Should we stay in the EU? Decisions affecting all of our lives. Once you do the maths...
I actually think voting comes out as quite a high impact activity. It depends on where exactly you live. So if you're voting in a very, very safe seat, perhaps it just is astronomically unlikely that your vote is going to make the difference.
But if it's competitive, and the analysis has normally been done in the US, if it's competitive, and if you're pretty confident that you've got a better guess about who's going to be the better leadership than the general public, or the kind of median voter, then once you take the very, very low probability of making a difference, multiplied by the enormous value if you do make a difference, multiplying those two things together, that's called the expected value. Mm-hmm.
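As a hedged illustration of that expected-value calculation, here's a tiny Python sketch; the probability and value figures are made-up placeholders just to show the shape of the maths, not numbers from the studies being referenced.

# Expected value = probability your vote is decisive x value if it is.
# Both inputs below are hypothetical, purely illustrative.
p_decisive = 1 / 10_000_000          # chance a single vote swings the election
value_if_decisive = 100_000_000_000  # difference in value between outcomes, in pounds

expected_value = p_decisive * value_if_decisive
print(expected_value)  # 10,000.0 -> comparable to a large charitable donation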
Then it comes out that voting is like donating tens of thousands of dollars to charity. Oh, okay. So it actually comes out really pretty well. Okay. And it varies from election to election. If you're in the UK, I mean, if you're a member of a political party and voting on who the leadership of that party is, I think that comes out very good. For other things, maybe much less. But it does mean that voting can often be actually, yeah, is quite a good thing to do. And I think similar analyses can apply elsewhere.
to other things where people claim, I don't make a difference. Like, oh, if I buy the factory-farmed chicken, the stores will stock it anyway, I don't make a difference. Well, probably you don't in a particular case, but occasionally the supermarket will order 11 crates of chicken rather than 10, and then you've made this, like, big difference. In the case of recycling, actually, I think it's clearer that all of your actions are making a difference, even if
they're very small. Oh, it's clearer. Okay. Well, I mean, yeah. So the best case for recycling is aluminium, actually, because that's so energy intensive to make. That's the thing where you've got the kind of biggest bang for buck within recycling. But there it's just, it's not like, oh, I have only some chance of making a difference. There you're literally just putting aluminium
into the recycling bin, it goes to the factory, it gets processed, and it just does mean that slightly less aluminium will be created. That's slightly less energy usage. That's slightly more efficient. Now, you might say the impact there is very small. And perhaps there are better things you could be doing with your time. So let's say you really don't enjoy recycling. You could instead be volunteering for high-impact organizations, or working and making money that you can donate to other places. Perhaps the impact there is much larger and...
My guess is that it probably is. But I guess if I seriously think about that, like, how many times in my life is the decision to recycle or not recycle actually swung by the fact that I am currently in the middle of volunteering for a high-impact organization? Exactly, yeah. So you don't want to kind of create stories that are just kind of self-justifying. True. But if you really were in that situation, you're just like, look, every single minute of mine is devoted to doing as much good as possible. I just, like, I don't have time to worry about the recycling thing.
It's fine. I guess very few of us are actually in that position. Going back slightly to a thing we've already talked about, this idea of you could be making easily six figures a year, if not more, because of all of the things. But you choose to only make 27k a year, for example, and donate the rest.
You are taking a sort of material, quote, hit on the sort of standard of living that you could have. And it sounds like your two reasons for this were: number one, it's not that much of a hit anyway, because whatevs; and number two, that money could be way better spent on helping other people; and therefore, sort of, number three, you in fact feel better about taking that hit than you do about having a six-figure salary, and...
Absolutely. And therefore it's not really a hit. In fact, a net benefit to you, even if you were thinking completely selfishly, to be able to donate 80K a year to stuff other than yourself, rather than buying a fancier car or a fancier house. Yeah. So there's kind of three aspects. Is it a financial cost? Of course. Yeah. Is it a hit to my wellbeing? And I'm like, I'm not sure, there's
maybe some negatives, but there's a lot of gains as well. Like I say, having a more meaningful life, being part of a community. But then the third thing is, if I'm thinking not about kind of narrow self-interest, but just given my values, given, like, how do I want to have lived my life? If I'm on my deathbed, when I look back at my life and the trajectory of my life,
do I want to be someone who kind of went along with the crowd and bought the, like, nice car and so on, because I thought it's, like, what I ought to do, or because I was concerned about my social status? Yes. Or do I want to be someone who's like, no, there are certain things I believe are morally important, and I'm going to live a life in accordance with those values? Nice. And then it's, like, pretty clear that it's the third, and, like, I feel really good about it, it feels empowering. Can we talk about feelings for a moment? So, what's on your mind?
Here's the thing that's on my mind. I often have conversations with my housemate, Lucia, about this. Lucia is the co-founder of an organization called LEEP, Lead Elimination Exposure... Lead Exposure Elimination Project is the one, which you have actually donated a large amount of money to this year. Yeah, I mean, it's...
small by the standards of their budget, or the giving that's going around. It was, like, large. Yeah, I split my donations into two and they got 50% of my donations. I'm a big fan. Nice. Yeah, so she was in my year at medical school. She also practiced as a doctor for two years and then left medicine, though instead of making YouTube videos, to found this high-impact nonprofit charity thing, which is having great impact in the world, like reducing, you know, exposure to lead paint, in particular in the developing world. Lucia has been strongly influenced
and driven by the Peter Singer stuff since, like, the age of 11, when she first read, I think, "The Life You Can Save", and really found that it struck at the core of her in terms of feelings: that I am sleeping in this fancy house and I know that there is a child dying in Africa somewhere. And she really feels that. Now, when I first heard about this stuff, and sort of, she introduced me to the effective altruism movement in like 2015, 2016, and
very much agreed with all of the things intellectually, but I didn't feel it internally. I didn't feel like... It doesn't actually keep me up at night that there are people dying in the world, but it seems to be a thing that genuinely does affect Lucia in terms of feelings. I wonder where do you stand on the I'm doing this because I know it's rationally the right thing to do versus I'm doing this because...
it feels like the right thing to do, if that makes sense. It's a great question. And I think I'm a mix, and I've also varied over time. So going back to age 22, a lot of feelings. Like, oh, I'm really motivated by this. It feels like, you know, intense tension. When I was deciding on how much I was going to plan to give, and thinking about this kind of everything above what was £20,000 at the time,
It was a tough decision. It was a very major decision. And I actually just, so I had visited Ethiopia before, and that had made things much more visceral in terms of getting to know some people who are living in extreme poverty. But what I did was just, I loaded up images on Google of children suffering from horrific neglected diseases.
And in the course of making the decision, I was just kind of looking at these photos. And I remember one in particular that's still kind of burned onto my retina, of a child who had lymphatic filariasis of the face. So also known as elephantiasis, I think, but it causes these huge swellings. Normally it's of the arms or legs, but this child's entire face, maybe an eight-year-old child, was kind of swollen and drooping down. So a very, very gruesome image.
I just thought, man, if my giving can just stop that once, like just that child, putting aside the fact that, you know, probably it's hundreds of thousands of times, that just seems worth it to me. And so the way it feels to me is kind of like the rational part decides what direction I'm going in, and then the emotions or the feelings are actually the engine that's pushing me forward. And I think that's, you know, been true over time as well.
Now it's the case that, like, I've structured my life, I'm really trying, in a sustainable way over the course of my life, to do as much good as I can. A lot of the time it just feels like, oh, I do my job, but, like, I'm like, oh, I'm getting tired, you know. I don't know, it's more like it becomes, as if, I mean, again, you must know from being a doctor, like, you're dealing with these incredibly, you know, horrific situations. You can't be endlessly empathetic all the time, because you wouldn't be able to do the job. Instead it
I'm sure you can speak to this, but becomes like a job. That's how it feels most of the time to me. But still often I'll have things that come up again in terms of suffering in the world. Think about animals in factory farms or think about the threats that are facing us. Think about how we really messed up in terms of our COVID response. Think about civil war that's happening in Ethiopia. And it's just like, yeah, it's...
still really moves me. Yeah. That being said, I do think we should always go with the arguments. Like, it's a mistake... like, it's a very noble, honorable thing to just want to be, you know, oh, this really moves me, it really engages me, I'm just going to help this person. Yeah. But again, take the, you know, case of being a doctor. Imagine being in a
Imagine being in a conflict zone. So there's someone who's potentially dying over there, someone's screaming in pain over there. What do you have to do? You have to triage, exactly. And that requires a little bit of emotional distance, but that's how you help the most people, rather than just immediately rushing and tending to the first person you see.
I've got another friend. His name is Joey Savoie. I think you might know him. Oh, I didn't know you knew Joey. Yeah, of course. He actually gave me a few questions to ask you about longtermism, which we'll definitely come to. I initially got to know Joey because he came on the podcast, the other podcast that I run with my brother, and we kind of talked about some of the EA ideas and the idea that he also pays himself sort of the median salary for the UK, which I think is around about like 20-something thousand a year. Okay.
One of the things about that kind of thinking that first seemed very appealing is that you decide once, you create a rule for yourself, and now you don't have to worry about money. Whereas I feel like for me,
I don't have a number in my mind of like how much money I actually want slash need slash care to make. And so almost every day, whenever I'm like looking at numbers on my YouTube channel and stuff or thinking, oh, we've got 20 people in the team, but like we're a bit, everyone's over capacity. Should we hire a few more? And then it's like, oh, but then there's this cost about money and margins. And then there's like,
oh, but over the long term, will this business crumble, and will I end up, I don't know, having to... It's just like I go into this loop of essentially thinking I need more and more money to have more and more of a financial safety net, so that I can then continue to do the thing which I actually enjoy, which is just sort of making videos and doing podcasts with cool people that help people live their best life. And one thing that Lucia was actually challenging me on the other day was:
If money, if you agree that money is instrumental, it has no value by itself, and its only value is in the effect that it has, whatever that effect might be to you. And you know the thing you actually want to do, even thinking completely selfishly. If I know the thing I actually want to do is just, honestly, if I won the lottery, if I had, I don't know, whatever, a hundred million in the bank or whatever that number is,
I would just continue doing this stuff. I'd do podcasts. I'd probably have a lower threshold for just like flying to the Bahamas to be able to have a chat with SBF in person or whatever. But I wouldn't be spending some amount of my brain space concerned about optimizing more and more of my business and my life to make more money for the sake of a safety net, which is all instrumental anyway. And it's like...
It kind of becomes this loop. And I like the idea of just being like, you know what? Anything over 25K, I'm just going to donate. So now I don't need to worry about it. I can just focus on the thing I actually care about. How would you advise me in this situation? It's huge. It's exactly what you should do. So yeah, early on in my thinking about this, I was treating it more like every day was a decision. You know, standing in Sainsbury's, I'd do a cost-benefit analysis of what sort of cereal to buy. Like, how many calories, or the cheaper cereal versus the more expensive cereal, is it worth it?
You're just, you're not having a good time. And so instead, just very occasionally, like in my own case, you know, it's just, I make a donation decision every year or two. In fact, it's not very, I make these kind of lump sums. Other people just do it. The money never even enters their bank account. It just goes to the...
But very occasionally, maybe even just once in your life, you decide, okay, this is how much I'm going to live on, this is how much I'm going to donate. Perhaps occasionally you make decisions about your career choice as well. And then after that, you think, okay, I've thought about this now, it's not something I need to worry about. Yeah. It's just very good. And in your own case, you know, perhaps you don't want to go for this cap of, you know, 27,000 or whatever. Perhaps you want to say,
well, you've already said 10%, maybe you want to go higher, 20, 50%. Yeah. Or you want to say, no, it's a cap, but it's above some higher amount, a thousand pounds. Figure that out for yourself. I would absolutely advise you to do that. Maybe you take a couple of days where you schedule conversations with people close to you to really figure out what's the right level. And then,
Two things. One, it means you can start, like, publicly talking about that, and it's easier to make a commitment. You're like, oh, it's now embarrassing if I go back on this. And then, yeah, secondly, it just takes a lot of the stress away. One thing I worry about is, like, let's say I want to have kids. And, you know, my mom always tells me that, oh, you're saying you don't care about making money right now, but just wait till you have kids, your views on this will change. Where I'm thinking, okay,
But let's say that's in maybe five, six, seven years or however long it is. Right now I'm in a great position where I can make a lot of money because the YouTube channel is doing well and so on and so on. So surely I should make hay while the sun shines and build even more of a financial safety net, because who knows, five years from now when I have kids,
will I end up broken, homeless and alone? Clearly, I think there's some flawed reasoning here, but how would you? Well, again, this is the sort of thing where take some time, figure out like how much is a reasonable amount to spend on your kids. I think the median expenditure per child in the UK is like six or seven thousand pounds per year per child. But, you know, maybe you want more than that. It's fine. Make those decisions. And perhaps that means you're like, yeah, you're not giving as much as you would otherwise be. Yeah. Perhaps or have a
higher cap, but make a kind of informed decision about it. Something you can think about now, something you can commit to now. It seems to me that there's this enormous marketing engine
designed to squeeze as much money as possible out of parents by making them feel guilty. Yes. So we don't have this in the UK, but in the US, like, medical advertising is mad. Medical advertising. Yeah, exactly. Because it's a private healthcare system. So you're on the subway and there are these big billboards that are like, have you heard of incredibly rare disease X?
It could kill your child, buy this medical test, buy these drugs. They don't need to say it's, like, a one in a million chance or something. Or just prams as well. You know, it's like, you've got to get the nicest possible pram, it's like a thousand pounds or something. And it's just like, oh, because I feel like I'm going to be a bad parent unless I do this. And that's just, like,
That's corporations, like, tricking people, I think. And so I think you can have, like, you can give your children, like, a very nice life. I agree you should carve out money, like, in order to be a good parent. But that's perfectly compatible with giving a lot as well. I mean, Toby, in fact, has a child. She just seems to have, like, a blessed life, like, a perfect life. And he and his wife are giving most of their income. So it's, like, it's perfectly compatible, I think. With decisions like this, actually, it is just about sitting down and taking the time to...
actually make an informed decision. And as long as that decision is, in a way, like, directionally correct, that's better than a general uninformed drift where the inertia tends towards more and more and more and more, for the sake of more and more and more. Exactly. So this is, yeah, there's a kind of motto that people in effective altruism often say, which is, don't let the best be the enemy of the good. So you might think, oh no, I can't do this because, like, I still wouldn't be giving enough, so I'm just not going to worry about it. Yeah. Or, like you were saying, like, oh, but I
I just would privilege my brother over this thing. Just like, okay, fine. But, like, you're giving 10% at the moment. Yeah. Can you give more? Yeah. And you're like, okay, 20%. And, oh, maybe I ought to be giving more. It's like, fine. Still, 20% is better than 10. Yeah. That's like, you know, we're all morally imperfect. We're never going to be saints.
We're going to have competing interests, but like relative to where we are now, like how much can you boost up the kind of helping others aspect? And generally, I think it's like quite a lot. On that note, one thing that you sort of alluded to is spending a few days to at least think about a big decision. And one of the things you said was the decision you make with your career. What's the deal with that?
So this is huge. So the decision you make over your career, for those of us who are lucky enough to have a choice over careers, it's arguably the biggest choice you'll make in your life. Certainly one of the biggest choices. And I set up this organization back in 2011 called 80,000 Hours. We chose that number because it's approximately how many hours people work over the course of their life. So if you think 40...
40 years, 40 hours a week, 50 weeks a year, it's 80,000. Choosing that number just to get across like, this is a big decision. 1% of that is 20 weeks. Do people spend 20 weeks thinking about their career? Probably not. Certainly I didn't. I had the most shallow, naive understanding when I wandered into graduate school in philosophy. So the first thing is just really think about this if you're a young person.
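As a rough check of the arithmetic behind the name, here is a minimal sketch using the figures just mentioned (a 40-year career, 40 hours a week, 50 weeks a year); the working is illustrative, not from the conversation itself.

```python
# Back-of-the-envelope arithmetic behind the "80,000 Hours" name.
years, hours_per_week, weeks_per_year = 40, 40, 50

career_hours = years * hours_per_week * weeks_per_year
print(career_hours)                    # 80,000 hours in a working life

one_percent = 0.01 * career_hours      # 800 hours
print(one_percent / hours_per_week)    # = 20 full-time weeks of reflection
```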
And then the second thing is thinking about it in terms of social impact. So very, very many people want to do good with their career, but they find themselves in a situation, and this is certainly how I felt, of just, like, overwhelm. What on earth could I even possibly do?
And so I was like, oh, maybe I should work in nonprofits, but aren't they just a band-aid? I could work for the UN, but I'd need an internship and six years of work experience. How do I even get that? I'm just completely lost. And so what 80,000 Hours tries to do is just provide very concrete, actionable advice for people who are trying to do as much good, or at least a significant amount of good, through their career. Mm-hmm.
And that's via a few mechanisms. So there is a podcast, the 80,000 Hours Podcast. There's a website that has enormous amounts of content on both how to think about this in general and specific cause areas that we think could particularly benefit from more people working on them. And then finally, one-on-one career advice as well, once you've read all of that content and engaged with it.
You know, careers are very different from donations, because a big part of it is your personal skills and talents, and how are those the best fit? Yeah. A good fit for different problems. So this one-on-one advice can help match people to the areas where they're actually going to be able to shine, and that are also very impactful. It can also perhaps set people up with mentorship and further connections. Yeah. I think, I think the thing with the career thing is that, like,
often the way we choose our careers is based on what we enjoyed at 15. I enjoyed maths, therefore let me do economics at A-level. Cool, that was fun. Let's do economics at university. Cool. All these careers companies are scouting me out, and now my options are investment banking, hedge fund or private equity. Exactly. In my own case, it was like, I was good at maths, so I was like, oh, maybe I should do maths. But my dad did maths, and I don't really want to be like him, so I'm going to do philosophy instead. That's, like, the reasoning of a child. And then,
now you're a philosopher, you don't get as many of these consulting firms after you, so you go to grad school in philosophy, and that's where you are. This is your life. I mean, this is just the entire course of your life, and it really requires kind of more reflection than that. And what people often do is they ask, you know, friends, family members, people they know. But what really helps is
having a good understanding of two things. One is just, like, the nature of the labor market. Yep. You know, when I was at university, no one said, hey, you know that you can do a coding bootcamp, it takes three months, and then you can basically go straight off into a six-figure salary. This wasn't a meme. Maybe it is more now. Yeah. So there's the state of the labor market. Yeah. And then secondly is just the state of the world, where there's this idea that, oh well, what you should do is follow your passion.
And there's an element of truth in that, which is that you want a good fit with a job you can really excel at, develop mastery at, and come to really enjoy. And that's really important. But this idea that I'm born with, like, oh, I just always wanted to be a management consultant, that was my dream, it's just never going to happen. You ask people's passions and they say sports, music, exactly those careers that are enormously oversubscribed, very hard to get into.
Whereas, I don't know, maybe nowadays the kids would say a YouTuber is the sort of thing they want to be. But through the course of setting up these nonprofits, there are tons of things I found enormously rewarding and interesting that I would have never thought of because it's not something I had experience with, like managing, fundraising. I hated giving talks. And so instead, the framing that we have with 80,000 Hours is start off having an understanding of what the world most needs.
Because if you're just focusing on your passion and not paying attention to the world, we're going to have another sportsman or musician or artist or something. And these things can be good. But if they are, you've been one of the very few lucky people to break through. Instead, think, OK, what does the world really need? Is there a pressing shortage of people working on the next pandemic or AI safety or climate change or ending factory farming? And then secondly, asking, within those top problems,
What are the skills that are most in need? And then thirdly, thinking, and what do I most provide? Where am I the best fit? So that's the kind of difference in mentality is like starting with the world's problems rather than starting with kind of myself and my particular interests. Yeah.
I think one of the tricky things I find about this is I think it always comes back down to this balance of how much do I value my own selfish desires to make money, have status, live in a nice flat in London, whatever, versus what does the world need? Yeah. And I think that's a really good question.
Some of the stuff that I see from my friends who are way more into EA than I am is that actually those things overlap. Like you actually can have, you can do good in the world while at the same time having a great time and loving life. Do you find that it tends to be that there is a little bit of sacrifice or is there like once you've found the fit, you're like kind of just kind of going full speed ahead on both fronts? Yeah, I mean, there's definitely, I mean, for my own case, the thing that I have most tension with is just how much do I work versus how much leisure time. The kind of financial sacrifice is just...
I don't know. I mean, I'm also just personally, it's like if I had loads more money, what would I spend it on? I'm like, okay, well, I live in this like slightly run down house with like housemates. I kind of like it. I mean, I certainly like living with housemates. Maybe like we could get the plumbing fixed more and it'd be less. I don't know. There's some things, but like it's not huge. The thing that feels more like an ongoing thing is just like,
you know, how often am I like working weekends, how much am I taking like time off and things. And that's this, but even there, it's like, how often is that doing something that's like hard in the moment, but like very rewarding overall, you know, I could have had a life that,
had like a lot of time off. I could have that life now. In fact, it's at least unclear to me, like, would I be happier with that life? Yeah. Maybe I'd have, like, higher utility over time, like I'm on the beach kind of half the time, but it would definitely be less rewarding. So again, coming back to this: maybe that life is better on the narrow kind of self-interest, but not in terms of the all-things-considered question of what's the rewarding life. But then also, just loads of people, you know, within effective altruism, there's a big spectrum of people.
There are some people who are just like, I'm in this all the time. I'm just like so absorbed. I'm finding it so rewarding. I'm just so committed to this. And then there's other people who are like, this is my job. And I've got loads of other interests too. And I give it my all just as I would like any other job. But then...
weekends, holidays, and just like have a great time. Yeah. And that's like a very valid and great way to live too. Yeah, fair. And I guess it's very, it's again too easy to say that like, oh, but if I think of the extreme scenario, I wouldn't want to spend all my time thinking about these good causes. Therefore, I'm going to spend none of my time thinking of the good causes because it's too hard to think about. Exactly. So we talked about 80,000 hours. Oh, one question on that front. 80,000 hours have recently been on a bit of a sponsoring spree, sponsoring the Tim Ferriss show, which you've been on like ages ago. I heard your episode and I'm like,
2015 or something. Yeah, it was years and years ago. I'll be on again later this year. Nice. But you know, his podcast is famously expensive. They've actually reached out to sponsor my YouTube channel as well, so they're sponsoring a video. How does, I think I know the answer, but like, why is an organization that is just, like, purely nonprofit and free materials and everything paying thousands, if not tens of thousands,
if not maybe hundreds of thousands, to Tim Ferriss and stuff to sponsor podcasts and things? Yeah. I mean, essentially you can think of it as like an investment. So if you look at the history of outreach that 80,000 Hours has done, on what for most of the time was an incredibly small budget, the return, like the altruistic return, how much impact you're having with the money you've put in, is extremely high.
So at its simplest. So there's many career paths we recommend. Most people who follow 80,000 Hours' advice are going to do direct work, but some choose this path called earning to give, where they deliberately pursue a career where they can earn a lot, take a larger salary, in order to donate most of their wealth away.
So let's even just put to the side the large majority of the impact, which is via direct paths, and just look at the earning to give. 80,000 Hours has spent, I don't know the exact figure, but some number of millions. It's over 10, but not that much more than that. Tens of millions on outreach. And there's a big lag in terms of when we have the impact, because we're talking to people while they're still at university most of the time. Yeah.
But let's just take one example of impact, which was in 2012. I was giving these talks about earning to give. One of the talks was at MIT. There was an audience member there who was called Sam, and he was like, oh, these ideas make sense. He went to earn to give in quantitative trading at Jane Street and then moved on to set up his own company. I believe he's now the richest person in the world under the age of 35, has already given
hundreds of millions of dollars through his foundation this year and has publicly said he will give away, you know, essentially all of his wealth, like 99% of his wealth. And so ignore all of the impact that we had. Yeah. Apart from that one example of kind of earning to give. Yeah. Well, the ROI, the amount of money we've raised for good causes by doing that. Yeah. You know, it's like 10 to 1, 100 to 1. Yeah.
Yeah. Now add in all of the other impact as well, which is greater than that. You get that just by doing this sort of outreach, talking to people, especially because no one's paying attention to young people. Well, I mean, Goldman Sachs is, and McKinsey is, and they spend like a million dollars per university per year. Oh, wow. On this sort of, yeah, that's about right. At least in the US. Yeah.
It's maybe a little lower than that. Because for them, getting really bright students to work for them is worth an awful lot. Way more than that. For us, altruistically considered, we should be willing to pay even more, because we're not just getting people for a few years. We're steering people for the entire course of their lives. It's like the LTV, the lifetime value of a convert to EA. Or just someone who...
takes their career seriously in terms of the good they can have in the world. Exactly. So someone who was just going to go do the investment banking and not really have an altruistic life, someone who switches to, like, okay, I'm really going to try and do good, and I'm going to do that over the course of my life: that's worth millions of dollars. And so if we're doing, you know, we're just experimenting with this sponsorship.
I mean, perhaps, you know, no one cares about what you say about effective altruism. We'll find out. Yeah. But probably I think the ROI is going to be pretty good. Yeah. It means we're just investing, compared to the value that we're creating, like, very small amounts of money, actually. And if it is the case that we're generating very significantly more resources for good outcomes, it's not just a nice thing to do. Like, we have a moral obligation to do it, because otherwise just less good is going to get done. Yeah.
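A minimal sketch of the altruistic-ROI framing just described. The spend and money-moved figures below are placeholder assumptions in the spirit of the rough numbers quoted in the conversation ("tens of millions" spent, returns of "10 to 1, 100 to 1"), not 80,000 Hours' actual accounts.

```python
# Illustrative only: the "altruistic return on investment" framing for outreach.
outreach_spend = 20e6        # assumed total outreach spend, in dollars
money_moved   = 500e6        # assumed donations attributable to that outreach

roi = money_moved / outreach_spend
print(f"altruistic ROI ~ {roi:.0f} : 1")   # ~25:1 on these made-up inputs
```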
Yeah. I want to talk about Giving What We Can, and then can we talk about long-termism? Absolutely. So Giving What We Can, what's the deal there? Right. So Giving What We Can launched in 2009. It's a pledge for people to give at least 10% of their income to whatever causes they believe will do the most to positively impact the world. So it's not limited by cause area.
You know, we recommend charities. The effective altruism movement, GiveWell in particular, recommends global health and development charities. Effective Altruism Funds has a set of funds you can give to. But it's up to you. The only requirement is that you honestly believe this is the best way of improving the world.
It started off with 23 people who had pledged to give at least 10%. Oh, I should say, if you're a student, you're not on an income as defined in the pledge, but we kind of ask people to give 1% of whatever they're living on. And then once you start taking a salary, the 10% kicks in. So we started off with 23 people. It was a pretty small thing. Now we've grown: over 7,000 people have taken the pledge, amounting to a low number of billions of dollars in pledged donations. There's a vibrant community that kind of
overlaps heavily with the wider effective altruism community. And yeah, lots of people just feel like, whatever their career path, perhaps again, let's not let the best be the enemy of the good. You might think, look, I want to pursue the career I want to do. I'm not one of those all-in people. But here's a way in which just about anyone can have an enormous impact, and that's by giving 10% of your income to the most effective causes. I guess we sort of alluded to this in passing, but for people who aren't familiar, like,
What does that look like in terms of impact? Like how much money do you need to give away to have what sort of impact if you're donating to effective charities? The baseline is about $5,000 will on average save a life of a child in extreme poverty. That's if you fund something like Against Malaria Foundation or other organizations working on malaria. Now, normally when you hear these numbers, they're kind of over-egged. So I used to work as a charity fundraiser and
I would say these absurd things like, oh, it costs 20 pence to provide a dose of polio vaccine that could save a life. This, in contrast, really is our best guess, and it comes from enormous amounts of research that's been done by GiveWell. It costs, you know, a few dollars to buy a bed net that protects two children for two years against malaria. Protect enough children and, after about $5,000 worth of expenditure, you'll on average have saved a life. Yeah.
So that's one of the things you can do, and it's among the most promising things in terms of direct impact in global health and development. Another thing you could do is within climate change: averting one ton of carbon dioxide equivalent, on the best guess, costs something like a dollar per ton. Again, that's really pretty good if you think that
in the UK, you and I, we emit about seven tons per year. Well, we probably emit more, because I suspect we travel more, consume more, but, you know, around 10 tons of CO2 per year. Well, entirely offsetting that
would cost about $10, but there's no reason why you should only offset your own emissions. Instead, you could go much, much further than that. Then on some of the other things we'll talk about, some of the long-term stuff, it just becomes much harder to ascertain quantitatively how much good you're doing; you have to get into much more speculative calculations, just in the nature of the thing. But it's plausible to me that you can do even more good again.
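For the offsetting claim above, the arithmetic is just the two quoted figures multiplied together; both are best-guess estimates rather than precise numbers.

```python
# Rough annual offsetting cost using the figures quoted above:
# ~10 tonnes of CO2 per person per year, ~$1 per tonne averted (best guess).
tonnes_per_year = 10
dollars_per_tonne = 1.0

annual_offset_cost = tonnes_per_year * dollars_per_tonne
print(annual_offset_cost)   # ~$10 a year to offset one person's emissions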
And this 10% number, where does that, where does the 10% number come from? Why not five or 15 or 20? A mix of things. So it's obviously there's some amount of arbitrariness there. It's not a coincidence that we have 10 fingers. We can't pretend. There's this long tradition of tithing where people for thousands of years, people much poorer than us were giving 10%. But there's also an important issue when we were kind of designing the pledge of getting the balancing act right, where
If we had been promoting a 1% pledge, you might think, oh, you'd have gotten so many more people, so you'd have done more good. But I think that would have been a mistake, because what we care about is how big a difference we are making compared to how much people are donating anyway. In the UK, the average donation is about 0.7% of income per year, and in the US it's about 2%. And so if we were taking that average UK person, we'd only be getting 0.3 percentage points of difference. In fact, the sort of person who'd take the pledge is probably already donating 1%, in which case you're doing no good at all. Yeah.
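The counterfactual point here comes down to simple subtraction; a sketch with the averages quoted just above (the baseline is the UK figure mentioned in the conversation).

```python
# Why a 1% pledge moves little extra money: compare pledge levels against what
# people already give on average (UK average ~0.7% of income, quoted above).
baseline = 0.007                 # fraction of income already donated
small_pledge, gwwc_pledge = 0.01, 0.10

print(small_pledge - baseline)   # ~0.003  -> only 0.3 percentage points extra
print(gwwc_pledge - baseline)    # ~0.093  -> ~9.3 percentage points extra
```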
So 10% is large enough that people aren't otherwise doing it. It's like, okay, you're really moving money that wouldn't have otherwise moved. Toby initially had the thought that Giving What We Can would be about people capping their income and giving everything above that. But the thought is that there you've gone too far: there's just going to be so few people doing it that people aren't really going to get on board. And so 10% is this, like,
slightly non-arbitrary number that's kind of in between these two ways of failing. Yeah. And again, I guess the question I always come to is, if I were to have heard this idea before I initially, kind of, drank the Kool-Aid and realized that this is a good thing and took the pledge, I would have thought, oh, but 10% is a lot of money. Like, you know, my mom, if she took a 10% hit to her salary, you know, fairly, somewhat middle-class family, et cetera, et cetera.
But money is one of those things where it would always be nice to have a little bit more. And 10%, you say that to parents and they'd be like, oh gosh, I couldn't possibly do that, because of the kids' private school or whatever. How do you think about that? I think there's a couple of things to think about there. So one thing is: people normally make more money over time, at least in the beginning parts of their career. And you could imagine just, oh, I took this one job that, like,
made 10% less than this other job. That's often kind of decisions that people make. And they often do that because it's like, oh, it's going to be a more satisfying job. People are rarely like, oh, if only I'd taken that job that paid 10% more. It's normally a small consideration compared to other things.
And often, in fact, if you're worried about, you mentioned your mom saying a 10% hit would feel really big: people tend to get used to the amount of money they have. And so one thing you can do early on is say, OK, well, this would have been my earnings trajectory. Instead, I'm never going to go backwards. I'm never going to have a year where I'm living on less than I did before.
But the amount by which my income increases each year is going to be a little bit less than it would otherwise have been, such that I can scale up to 10% and then have that as a sustainable amount. So that's one response: don't get yourself into the position where you've already made a lot of financial commitments.
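A minimal sketch of that "never go backwards" idea under assumed numbers; the starting salary, raise, and ramp speed below are my own illustrative choices, not figures from the conversation.

```python
# Sketch: ramp donations up to 10% by giving away part of each year's raise,
# so take-home pay still rises every year. All numbers are illustrative.
salary = 40_000          # assumed starting salary
giving_rate = 0.0
previous_take_home = 0.0

for year in range(1, 8):
    salary *= 1.05                                   # assumed 5% annual raise
    giving_rate = min(0.10, giving_rate + 0.02)      # add 2 points per year
    take_home = salary * (1 - giving_rate)
    assert take_home >= previous_take_home           # never living on less
    previous_take_home = take_home
    print(year, round(salary), f"{giving_rate:.0%}", round(take_home))
```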
And then the second thought is thinking about this in terms of the global distribution of income. I actually don't know how much you earn, but my best guess is that you're in the richest 1% of the world's population by income. Even after my pledge, taking that as my income, I'm in the richest 5% of the world's population.
And there's a kind of thought of, like, oh, boohoo me, oh, it's such a hardship. But, like, I'm still richer than 95% of people in the world. Yeah. Like, what's going on there? Yeah. That can't be right. Even if you drop off the bottom 50% of people, there's still 45% of the world who are in the richer half and yet live on less than I do. Yeah. Yeah.
If they're able to live their lives, then why am I not able to? So that was something that really moved me to begin with. I think people just do not appreciate the sheer scale of global inequality and how much, even if you're just on a kind of middle class income in a rich country, you're part of the global elite because rich countries are just so much wealthier than other countries. Yeah.
Nice. On this thing of, you know, people just not appreciating the scale of this: certainly for me, I don't think I have seen any of the documentaries about, like, why meat is bad or poverty or anything like that, other than if one happened to be on TV while I was flicking through the channels. But this is something I want to do, to get more of that thing you describe of genuinely having that feeling of, oh shit, these are actual problems that, you know, we need to solve fairly urgently. Yeah.
What would be, like, your, I guess, recommended reading slash watching list for someone like me who's like, I've drunk the Kool-Aid, I just don't feel anything, and I want to feel a little bit more? Such a good question. I mean, I have found, and many people have found, this is still more on the intellectual side, but the writings of Peter Singer are enormously influential. We've got a bunch of his books in the flat, some signed copies as well. So I should actually read those. Okay. So Peter Singer on the animal side: Animal Liberation. Yeah, and he's actually redoing it.
There'll be another edition, maybe out next year. "Famine, Affluence, and Morality" was his original article; I still find it very powerful. The Life You Can Save was his follow-up. My own book, Doing Good Better, that's, again, less on the feeling side and more just making the upbeat, positive case.
Again, on the kind of, you mentioned meat, the animal side of things, speciesism, the documentary has been, yeah, kind of rated very highly. There's also kind of undercover investigations, which is basically investigative journalists that go into factory farms and just record what happens there, where there's an enormous amount of brutality. I mean, it's this bizarre thing where if you ask people like, what do you think about animal cruelty? They will be like, I hate animal cruelty, it's disgusting.
Yet within these enormous silos hidden from the rest of the world, just animal cruelty, like on an unimaginable scale, is happening within factory farms. And that's the meat we consume. So yeah, these undercover documentaries can be incredibly powerful as well. I remember watching them again many years ago. I like the idea of...
kind of getting into philosophy, in the sense that when I have these sorts of conversations, I feel super energized by them. I love the hypothetical thought experiments; I have conversations with Lucia about this a lot. And whenever I read a blog post by a philosopher, I'm always like, whoa, that's so interesting. And I started listening to a podcast called History of Philosophy Without Any Gaps, which sort of traces the pre-Socratics and then the Socratics and how all of that kind of stuff happened,
but it's a bit long and it's a bit dense. How do I get like a kind of dummy's idea of what the field of philosophy is about so that then I can...
And maybe this isn't necessarily a prerequisite, but so that I can then dive into someone like Peter Singer and understand the context behind how he came up with his ideas. So the single best podcast, for the dummies' guide for sure, would be Philosophy Bites, which has been running for years and years, way before podcasts were cool. That's very short introductions and summaries of particular ideas. For something that's more in-depth and occasionally...
I mean, it's kind of remarkable that this is on the radio, but In Our Time with Melvyn Bragg, it's a Radio 4 program, and it's exceptional. They get really top academics on the show and have quite an intense intellectual conversation. It's very wide-ranging. In Our Time covers history, science and philosophy, and
on the podcast apps they separate it out if you want, so you can just listen to the philosophy side of things. Sick. If you want something fun, Very Bad Wizards is a pretty fun philosophy podcast. Yeah, I think those would be my top recommendations. But then, if I were you, I would just get people on the show and start grilling them. Then it's a win-win, I think. I could make a bunch of recommendations of fun philosophers. Oh, that would be amazing. That would be very, very handy. Yeah, because I love the idea of...
I think one of the glorious things about having built this platform is being able to have someone like you on the podcast and just have a conversation with you. I can just ask you all the stupid questions I've got in my mind. I'm jealous. Let's transition to long-termism. So as a general bloke who sort of knows about this stuff, I very much vibe with the idea of let's try and do the most good with our time and with our money.
Caveat: probably while having a nice standard of life ourselves, but there's no reason why those two things necessarily have to be in conflict. Which is why I've taken the 10% pledge for Giving What We Can and why I give 10% of my income to charity every year. And one thing we've been thinking about a lot as well is, in a way, for us, the point of our, I guess, business, and the thing that I really care about, is helping people live their best life. And I think there's sort of five pillars there: health, wealth, love, happiness and impact.
And I think on the impact front, we could, for example, get more people into EA. I went on 80,000 Hours and did that little survey of ranking, one by one, what are all the causes you care about? And honestly, of the 50 of them, the only one I cared about was getting more people to care about this stuff. But I'm hoping after watching and reading more of Peter Singer's stuff that I will actually start to
physically care about more of these. I think we can have some level of public health or public good impact through just encouraging more people to join this stuff. And so that's like my current level of knowledge with
And I guess if I'm thinking of what are good causes, I would go on GiveWell and I would see Against Malaria Foundation, or I would donate to Lucia's charity, LEEP, by the way, which is hiring, we've been told to plug the link down below if you would like to work for a high-impact nonprofit. Let's talk about "What We Owe the Future". Thank you for signing it. - Yeah, no worries. "What We Owe the Future". - The future starts with you. What's the deal with long-termism? - Cool, so the core idea is just, let's just zoom out for a moment. So we're normally concerned with the problems that affect us today, or, like, the current election cycle.
But let's take a much larger perspective on our history and our future. So, you know, the universe started 13 and a half billion years ago. First replicators about 4 billion years ago. Replicators? Oh, like, what was it that replicated? The first life. Yeah. Cool. On Earth. First animals about 600 million years ago. Homo sapiens about 300,000 years ago. Agriculture about 10,000 years ago. Industrial Revolution kind of 250 years ago. And now we are here. Yes. What's going on?
Okay, so that's the past. Spans an enormous amount of time. What about the future? Well, how long could the human species last for? We've had about 300,000 years in our past.
A typical mammal species lasts for about a million years. So if we lasted as long as a typical mammal species, we might live another 700,000 years. That's big already. That's a long time, yeah. But things get even bigger. We're not a typical mammal species in any way. We have 100 times the biomass of any animal that's ever walked the Earth. We have technology. That kind of means two things. One, it means
we could end the lifespan of our species much earlier: we could entirely wipe ourselves out, and we'll talk about that in a second. Or we could live much longer again. So if we succeeded in extending our lifespan past that of the typical mammal species, well, there could be hundreds of millions
of years left while the Earth is still habitable. The lifespan of the sun is something like 5 to 8 billion years. If we were able to ultimately take to the stars, that would be hundreds of trillions of years before the last conventional star formations. So when we look to the future and the potential scale of civilization, it's truly immense.
And why does this matter? Well, it matters because future people matter. I can think about, you know, harms and benefits that I could prevent or bestow on people alive today. And then I think, oh, I could also cause harms and benefits to people in a hundred or a thousand years' time. Yeah.
It just, the distance in time just doesn't really seem to matter in the same way that distance in space doesn't matter. So what do you mean by doesn't matter? Well, suppose I tell you, you can prevent a genocide. A million people will die. Hmm.
And now I say, oh, but it'll happen in 100 years' time, or it'll happen in 1,000 years' time, or 2,000 years' time. But the same number of people, and let's just assume this is for sure, will be saved either way. Are you like, oh, well, 2,000 years away doesn't matter, who cares about that? Like, I don't think so. I think it's a matter of common sense. The mere fact that someone will exist at a later time doesn't change their moral status. They have just the same level of moral worth.
Okay, yeah, fine. We can dig into that if you want. Yes, let's dig into that. Surely there's a discount function here. Well, what are we discounting for? So it could be that you're more unsure about impacts that you have later in the future. But I guess in this hypothetical world where I was certain to be able to prevent a genocide now versus in a year's time, I'd obviously be like, well, I mean, both...
Yeah, I'd like to prevent both. Exactly. But if you told me I could only prevent a genocide in a year's time, I'd be thinking, all right, let's get everything into gear and figure out how to prevent this genocide. For sure, yeah. So uncertainty can, of course, be a reason to care less about some future impact, if that impact is therefore more uncertain. Okay. But bear in mind, yeah, all of these future people, they have lives, they're just people like you and me, they have hopes and dreams, and we can affect them. You know, if I, like...
Take other aspects of common sense: people worry about radioactive waste, that, you know, it pollutes for thousands of years, even hundreds of thousands of years, or carbon dioxide that we emit into the atmosphere.
That will persist. A significant fraction of that persists for like 100,000 years. It's only after that time that it all gets sucked out of the atmosphere. And the geophysical impacts last that long as well. And again, should we just be ignoring that? It's like, oh, it's a future time, future people, whatever. It seems like... Yeah, there's something about that that feels weird to be completely ignoring impact. Because it's like, you know, obviously I care about the world that my kids live in. And their kids, and probably their kids too. Yeah.
And that's where my own personal feeling would probably stop. But, like, obviously, morally speaking, and yeah, morally speaking, I should really care about everyone else. Yeah. Not just mine. Yeah. And as I said, look, they're equally important. There might well be some,
you know, genuine reasons you have to care more about people alive today. So you mentioned your kids; there are people who are alive today that you have these special relationships with, like with your brother and so on. And that can give extra reasons. Or there might be reciprocity: reasons to benefit people who have benefited you. So there are some additional reasons for caring more about the present. But
Again, like in this genocide case, we should care somewhat about the future. Okay, I fully buy that I should prevent a genocide, whether it happens in 10 years, 100 years, or 1,000 years. Okay, fantastic. And then you combine that with just the sheer scale of what's at stake, where I was telling you this zoomed out view of just how large a future we could potentially have.
And if we should care about future people, well, even on the most conservative of these estimates, future people outnumber us like a thousand to one. So there have been a hundred billion people in the past, and there are about 8 billion people alive today. Yeah. And there's something like a thousand trillion people to come. On the more
ambitious or optimistic accounts, which I think we should take seriously, that number gets larger again: trillions of trillions of people outnumbering the present. So the stakes are just huge. Yeah, enormous. Imagine if we found a society that had, like, a trillion people, a trillion trillion people, alive today, and, you know, it was perhaps under the sea or something, currently inaccessible to us. We'd think, oh my God, how are we impacting them? This is huge. Yeah.
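A rough sense of the ratios being described, using the figures quoted above; all of them are highly uncertain and the expansive estimate varies a lot between accounts.

```python
# Scale comparison from the figures quoted above (all highly uncertain).
past_people         = 100e9   # ~100 billion humans so far
present_people      = 8e9     # ~8 billion alive today
conservative_future = 1_000 * present_people   # the "thousand to one" framing
expansive_future    = 1e15    # "something like a thousand trillion to come"

print(conservative_future / present_people)    # 1,000 future people per person
print(expansive_future / present_people)       # ~125,000 : 1 on that estimate
```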
And then the third idea is that we really can make a difference, not just to the present day, but actually to the entire course of future civilization. And I think that's because of, you know, I gave you this long history.
But we are at a time of exceptionally and unusually rapid change compared to either the past or the future. It's only been 250 years since the Industrial Revolution, when the scale of technological innovation massively increased. Yeah. And I think there are good reasons for thinking that can't continue indefinitely. Maybe it can continue for hundreds of years, maybe one or two thousand years, but not tens of thousands of years, let alone the millions of years that might still be to come. Yeah, and
because we're advancing so rapidly through new technology, that creates risks and opportunities that I think actually do shape the entire course of the future. The clearest are risks of extinction. So nuclear weapons had unprecedented destructive power: the atomic bomb had about a thousand times the destructive power of conventional weapons, and the hydrogen bomb had about a thousand times greater destructive power again than the atomic weapons.
That's just like the beginning of like a new era of destructive power. Yeah. I think most scary of all is engineered pandemics, engineered pathogens. Okay.
So pandemics in the past have already caused some of the largest death tolls of any catastrophes of all time. The Black Death killed about 10% of the world's population, though precise figures are uncertain. COVID is an utter, colossal tragedy, but in terms of global deaths I think it has killed about 0.2% of the world's population so far, which on the scale of the kind of catastrophes we've had is actually comparatively small. There have been many pandemics that have killed upwards of 1% of the world's population.
But then, when we look to the future, pandemics could get much worse again if they involve viruses that have been deliberately engineered to be destructive. So you could imagine a virus that has the fatality rate of Ebola and the infectiousness of measles. Well, now you've got something that, especially if deliberately targeted to do so, could kill a very large proportion of the world's population, maybe even, in the worst-case scenarios, literally everyone.
And how long does that effect last? Well, if the human race goes extinct, we're not coming back from that. That really is an event that could happen that would have an impact over the entire course of the future. And we've already said this future is huge, future people matter, there's this enormous amount at stake. And so that's one way in which there could be an event whose effects persist indefinitely: extinction, or the end of civilization otherwise.
The second way is what I call kind of value lock-in. Okay, sorry, can I just ask a question on that front? Absolutely, yeah, yeah. There's been lots there already. Okay, so this is all very scary. One somewhat weird question, something I've heard on a YouTube video, one of these Steven Crowder "Change My Mind" ones about abortion: against the idea that life begins at conception, or even before that, that life
begins with, like, the sperm itself, one argument is that every time you masturbate, you're committing mass genocide or something like that.
That's an argument against the idea that every sperm is sacred, presumably. Yes, exactly, because that would just be absurd. Now, if we are saying that, for example, future lives matter, what is the difference between that and saying basically sperm matter, and that those future lives have not been realized just yet? There's a difference between the people who think that...
even sperm have, like, moral status as persons, and then you're saying you're killing them. Well, I think there's two things. That's, you know, very different from saying that there will be people in the future who are fully grown persons. There will be. And they can be harmed or benefited. Okay. And if there is an extinction event, then it means that those people are prevented from being born. Yeah.
Yes. So there's two ways in which we can impact the very long-term future. There's, like, extinction, and that is an enormous tragedy insofar as it kills everyone. Yeah. Like, that's extremely bad. Yes. But then it has the second effect of preventing the existence of future lives. Yeah. And there we can get into some of the intense moral philosophy, which I now know you're going to love, about just how we should think about that loss. Okay.
And I devote two chapters of the book just to that question, because, yeah, you really do need to get into some moral philosophy to think about that. But then there's a second way of impacting the long-term future, which is just changing the value of the future in those worlds where we do survive. So I kind of think about it like extending humanity's lifespan versus increasing its quality of life.
And I think in particular, there are scenarios that, again, could occur essentially where society gets locked into a particular state with a particular ideology governing a particular set of values that could persist forever. And in particular, I think that the date, the point of time at which we have sufficiently advanced artificial intelligence that AIs can act as agents themselves. Yeah.
could be this point of lock-in. In the kind of extreme, you can imagine this kind of global totalitarian AI-enabled state. I think that is really something that could persist indefinitely, not just an extremely long time, but maybe forever. And again, we can go into that too. And so those are the kind of two things, the two pathways by which
events could occur in our lifetime that I think really do have these extremely long-lasting impacts. Okay. So two main reasons. Number one is something bad could happen in our lifetime that would cause the extinction of the human race. For example, engineered pandemics or, like, nuclear war. Yeah. The very, very, very worst case of nuclear war. Yeah. Or, or,
Okay, so those sorts of things. Or like an asteroid hitting the Earth and us not killing it before it gets here. And secondly, it's like, okay, let's say that even if we don't extinct our entire species...
There are all these people who will come after us, our kids, our grandkids, and so on. Yep. Ad infinitum, for potentially trillions of years. And if we screw up the world in such a way, I guess physically, but also, like, governmentally in terms of how society is structured. Yep. Or, like, ideologically in terms of what people think is good and bad. Yep. That actually has an enormous impact on the, sort of,
quality of the life of these people in the future. Exactly. Yeah. If we value the quality of life of people now, like we do with quality adjusted life years,
and the fact that we are keen on preventing illnesses that might not kill someone but might debilitate their quality of life, we should also have at least something close to that same level of moral caringness, to use that word, about people in the future. Exactly. So you can imagine a future that's wonderful, blissful. It's like everyone is free, they can do their own thing, we've figured out the laws of nature, we've got incredible technology and moral progress, and
everyone has these long, healthy, happy lives and can pursue whatever they want. I could also imagine a future that's awful, you know, where all the power in society is held by a single dictator and the entire rest of society are essentially slaves. I think developments over even the course of this century could be crucial to determining which of those civilizations we get. And that's a pretty scary thought. And for that, for thinking that the former is better than the latter, you don't need to get into the
kind of intense moral philosophy about how bad the loss of future lives is, morally. Yeah, it's just kind of obvious. We'd rather have a world where people are free and, like, okay. Exactly. Okay, great. What comes next, I guess. Okay, and then... If we accept all those as, like, yes. So then if you accept all of those, the question is just, okay, what should we do? And there are some standard examples. So again, extinction is maybe
the easiest thing to focus on. We are doing enormous amounts of work to try to ensure that we prevent the next pandemic, and in particular prevent worst-case pandemics. In particular, yeah, some of the things you can do: one thing I'm pretty excited about is far ultraviolet radiation. What's that? So basically just...
intense, like far ultraviolet, but certain spectrum of light, sterilizes microbes. So you could put it just like in light sockets and it would just sterilize all of the microbes that are flowing around. Yeah, very recent development. That's cool. Probably we're going to fund it quite a lot. It seems like it would be very good indeed. It's also something that could just be very universally applicable. Because here's the kind of worry with future pandemics is
you create a vaccine against flu, but it doesn't work against other sorts of pathogens; or some sort of drug, and it works against only some pathogens. If we're really worried about the class of pandemics as a whole, including very worst-case pandemics, including things that people are potentially designing for their destructive power as part of a biological weapons program or something, you really need very general-purpose technology. Yes. This is something that could be very general purpose. Okay, and theoretically someone would...
I suppose theoretically someone could design the virus that has a protein that shields it against far ultraviolet. Potentially, but I think these mechanical ways or physical mechanisms for destroying pathogens or protecting against pathogens are at least much more general and much more robust than a vaccine, even like a broad-spectrum vaccine.
So kind of, yeah, this would be one. Another is very advanced PPE. Oh, okay. Personal protective equipment. I was like, wait, what? Oh, yeah. Well, I mean, I am also pro people doing more philosophy, but maybe not so helpful on the pandemic side of things. Advanced personal protective equipment. Exactly.
Exactly. Yeah. So again, if you can imagine, we've got like masks or intense personal protective gear that is just so good that like no viruses can kind of pass through it. And perhaps you've got like kind of nanomaterials that you can breathe through, but viral particles can't pass through. Again, very, very broad protection. So that's kind of on the technology side, there's things we can do. And then on the policy side, we can just start to have people just taking pandemics like a lot more seriously. So
So, you know, we just had the worst pandemic in a hundred years. You would really think that society would be like, this was bad, we're going to stop this happening again. Yeah. It cost, you know, tens of trillions of dollars as well as tens of millions of lives. And there was this bill being proposed, which we were really pushing to be accepted, for about $70 billion in the US, to say, like, never again.
And what happened? It didn't get bipartisan support. No one was really there to champion it in the US government. And so it got whittled down and whittled down. It's now maybe $2 or $3 billion. There's just no organic appetite for saying, no, we want to have the policy in place to say never again to pandemics, even though there are things we can do. So, I mean, a very simple thing is just constant detection. So you're just...
in a hundred locations around the world, or maybe more, you're just going through wastewater, for example, and just scanning for like, is there something we don't know in this? And as soon as it is, you can maybe get on top of things much faster than otherwise. That's something we don't do. We could easily do it. It would be an ongoing thing. It would protect us dramatically against future pandemics because you'd be able to potentially contain things much faster. And there's more there too. How worried are you about...
How worried should we be about engineered viruses and stuff? I mean, I think extremely worried. I mean, it's already the case that we can genetically engineer viruses. So gain-of-function research is very common. This is done in order to protect against future viruses. So you're worried, perhaps, about how might the flu evolve so that it becomes more transmissible or more deadly. Yeah.
And so, yeah, you take a virus, change its DNA, it becomes more transmissible, then you can study it. So we can already do this to some extent, and the technology is just getting better and better. We're familiar with Moore's law from computing, which is a very fast-moving area of technology: Moore's law is, like, the cost
for a given amount of computation halves every 18 months. For sequencing DNA, in many areas it's faster than that. It's actually an extremely fast-moving area of technology. So we already can do this to some extent, and the ability to do this very well is, I think very predictably, technology we will have even within a decade or two.
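To get a feel for what a fixed halving time implies, here is a small sketch; the 18-month figure is the Moore's-law rate mentioned above, while the starting cost and the faster halving period are assumptions for illustration.

```python
# Illustrative: how fast a capability's cost falls if it halves on a fixed cadence.
def cost_after(years: float, start_cost: float, halving_months: float) -> float:
    """Cost after `years` if it halves every `halving_months` months."""
    return start_cost * 0.5 ** (years * 12 / halving_months)

print(cost_after(15, 1_000_000, 18))   # ~$1,000 after 15 years at Moore's-law pace
print(cost_after(15, 1_000_000, 12))   # ~$30 if it instead halves every year
```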
Secondly, that means you've got the ability to design very dangerous pathogens. So the genetic recipe for smallpox, it's online; you can just download it. People are not at the moment sufficiently concerned about the sorts of information we're putting out there, especially given future tech. But again, we could potentially make things that are much worse than smallpox.
And then you might think, well, no one really wants to do this. However, just look at biological weapons programs in the past, even though it doesn't seem to make sense to design biological weapons, because, like, how do you protect yourself? If you're infecting the other side, then isn't it going to come back to you?
Well, the USSR had 60,000 people at its height really trying to design some of the worst case, like some of the most deadly viruses they could and produce them en masse so that it could be used in war. You can imagine it being like a kind of a deterrence. So perhaps it's a country that doesn't have nuclear weapons. But they say like, look, if you nuke us, then we're going to just let this pathogen go and then just everyone's gone. That can be quite scary. Then the second aspect is that in terms of something with destructive power,
In some ways, we got lucky with nuclear weapons, where fissile material is very easy to contain. And it's easy to monitor, because it's just so hard to make fissile material. It's comparatively easy to identify a country that is trying to do that. In contrast, there are already these at-home, build-your-own-DNA kits, small amounts of DNA. The cost to do this is only going to get cheaper over time. That means, by default,
on the default trajectory, if we don't intervene, well, the number of actors who would have the means and ability to start playing around with such deadly viruses just keeps growing. So because technology is improving so ridiculously fast,
at some point in the very near future, I could theoretically go on smallpoxrecipe.org, download the recipe, and be able to manufacture the smallpox virus using a mail-order kit. Exactly. At least if we don't get our act together in terms of the policy and the regulatory response. And I know that I'm a decent person, but, like, you know. Exactly. And most people are. And this is the thing that mostly reassures me: how many people in the world really want to destroy the whole world? It's not that many. But it's also not zero. It's like, there's a lot of...
It's also not zero. Exactly. And so when I look over the course of the century, like, what's the risk I put on extinction via engineered pathogens? I'm like, somewhere between 0.1 and 1%. Maybe I'm at, like, 0.5% or something. Other people I know who are experts in the area put it at around 1%. Toby Ord, in his book, puts it at 3%. But here's the thought:
if you're getting on a plane and they said, hey, it's only a one in a thousand chance that this plane is going to crash and kill everyone on board, it's not like you're particularly reassured by that. And we take enormous amounts of effort to stop plane crashes, which have much lower risks of catastrophe than this. And yet, as a society, we are not paying attention to this, because it's like,
it's not yet on people's minds. And even after a pandemic that killed 10 million people and cost tens of trillions of dollars, people are still not paying attention to, like, actually, how bad could it get? We need to be really careful with what we're creating. Yeah.
OK. So that's engineered viruses and bioweapons and stuff, which we should try and intervene on. So other than these sort of broad-spectrum UV thingy bulbs or nanofiber PPE, what other things can we do?
Can we somehow stop the cat from getting out of the bag, as it were? Yeah. So again, there's also the policy side. So I said there could be constant monitoring; I think that's going to be really important. Another thing is just in terms of scientific culture and policy: choosing the sort of science we do and don't engage in. Yeah.
Where, you know, there are certain areas of science, certain things we don't do scientifically, on ethical grounds. Yeah. We don't experiment on humans, for example. We don't clone humans, for example. If we wanted to clone a human, like, we could. That is a technology, you know, maybe it would cost a certain amount of money, but it's within reach; we've cloned macaque monkeys, all sorts of things. We don't do that,
and have delayed that by like many decades. We could do the same with this. So we could say, look, we're not like, we're just not going to engage in research that is differentially advancing this
this kind of very dangerous technology. Instead, we're just going to focus our research efforts on the things that are, you know, provide... And importantly, like a lot of this, these are really well-motivated scientists. They're wanting to help people. There is genuine like benefit here as well. It's just that there's enormous like risks. And so instead we can just focus on, we can say like, look, this isn't the sort of thing we're going to pursue. Instead, we're going to focus on research that's just more genuinely beneficial. So one way you could do this is just
introduce third-party liability insurance. So in just the same way that, when you have a car, you have to take out third-party insurance so that if you hit someone, the cost is on you, in the same way, if there was a lab
and a virus escapes, I mean, this only helps with private work, not government weapons programs, but if you have a lab and a virus escapes, and I should say, lab escapes are extremely common. Yeah, this is, again, underreported. But we can go into the history of lab leaks. It's pretty striking. It's pretty shocking. But yeah, if it leaks and escapes from the lab, then you, the laboratory, are kind of on the hook for those costs. I think if you introduced that, and it was global, all of the most dangerous,
all of the most dangerous research would kind of disappear overnight. Because at the moment that's a cost the labs are imposing on society without other people accepting it, and it is not currently being borne by them entirely. Like, the labs are worried about their staff getting infected, but not about the world getting infected. But what about, like...
Insert government X or Y or Z that will not buy into a mass agreement of, hey, let's not do this research on biological weapons. You can imagine insert-government-X being like, hey, the whole world's not researching these weapons, let's do it in secret. It's tough. So, I mean, I don't think we're going to get a perfect solution unless
the governments that were bought in were also willing to exert significant political pressure. But yeah, consider a rogue state like North Korea. There's a limit to what you can do. I mean, if China were on board with this, then perhaps it could exert sufficient political pressure that North Korea could get on board as well. But you could at least slow things down. North Korea is not the technological leader in the world. And so if the major technological leaders were willing to say, look, we're going to just...
We're going to really put this on ice until our defensive biological capabilities are very strong. I think the world would be in a better place. How did this happen for nuclear weapons? And what are your concerns about nuclear weapons in the future? When you say, how did this happen? As in, how did we put the cat back in the bag?
You mean, or have we? I mean, we didn't really. So nuclear weapons were first developed by the US at the end of the Second World War, that was in 1945, and they were used immediately upon development. Four years later, the USSR had its first nuclear weapons, and then not so long after that, other nuclear countries. Yeah. The US, sorry, as well as the UK and France, China, India, Pakistan, some others. And there
have been some major successes. The peak of the nuclear arsenals was in the 80s, I think. It peaked at about 40,000 warheads. Now there's only about
9,000 warheads. Right, and that's almost entirely between the US and USSR. Right. So they were sort of competing to see who could have the biggest arsenal, exactly, in case of, like, nuclear war. Exactly. Then just kind of long, slow periods of diplomacy were able to get those numbers down. I really don't think we'll ever see a point where it's at zero. Yeah. And then again, because it was very hard to develop new nuclear
weapons, there can be intense political pressure, especially from the very powerful countries, you know, going back 60 years: the US, the UK, the USSR. They have sufficient power that they can ensure that other countries don't start. And so I guess if you're an individual interested in a high-impact career and you get into diplomacy, for example,
you could have outsized influence compared to if you had not decided you wanted to go into that, for example. Yeah. For example, like that guy in the Cuban Missile Crisis, the guy who didn't press the button, at least in the version of that story that I heard somewhere or other. Yes. Yeah. So this story was: Cuban Missile Crisis, very close call for the US and USSR going to war. JFK put the figure at somewhere between one in three and one in two that
nuclear war would be the result. There was a particular submarine that had lost radio contact with the rest of the fleet. It was nuclear-armed, and ships on the surface were trying to indicate to that submarine that it should rise to the surface by dropping kind of dummy depth charges. However, the submarine had no idea, the commander of the submarine had no idea, what was happening.
He thought war had broken out. He was very angry. And he was like, yeah, we should launch. The protocol, if I recall correctly, was that as long as two out of the three of the leadership said yes, it's right to launch the nuclear weapons, they should do so.
It was pure luck that a very high-up official, Vasily Arkhipov, who was a war veteran and actually a war hero, also happened to be on board. And he was like, no, we shouldn't launch the nuclear weapons, it's too risky; we should go up to the surface instead. And he managed to win the argument.
It was really a fluke -- it was a fluke that he was even on the submarine in the first place, right? Right. But in that case he single-handedly stopped all-out nuclear war. Yeah, that's a very close call that we had. And so if you can be in a position of such influence as Vasily Arkhipov was, that's an enormous impact to have. And it does feel weird, like,
as an altruist, being told, oh, you should go and work in defense policy or something like that. It's not the standard path for altruistic young people. Yeah, it's not like volunteering at a soup kitchen or whatever.
Exactly. But if you're thinking about the things that are really consequential in the world -- what weapons are being built, when countries go to war, how they handle these very tense and contingent moments in decisions about war -- those are just very high impact, and having more sensible, better-informed, more altruistic and
morally motivated people in the room just makes the world go better. Because it wasn't just Vasily Arkhipov as a story of a close call; there are many other examples where it seems like individual actors just saying no was the difference between war and not war. What's the risk with nuclear weapons, hydrogen bombs, etc., now and in the future? How worried are you? I mean, I'm very worried about the prospect of a nuclear war. So
what's the chance that we have a nuclear war in our lifetimes, like an all-out nuclear war? I'd say at least 20%. Really? Yeah, 20%. I mean, historically, the lives of ourselves and our parents are very weird, because there's been such a long period of peace, right? We think of the 20th century as the most deadly century ever, and certainly World War One and Two were
extreme events in terms of how deadly they were, but war between great powers is the norm throughout all of history. Right. And so the stranger thing is that we've had these 70 years of comparative peace. Yeah, and obviously it's not been a peaceful time -- there have been enormous numbers of wars -- but no war between great powers. Partly I think that's just luck that the USSR and US never went to war. But one argument is that it's just because the US has been such a hegemon. Yeah, so it's like,
other than possibly the USSR, who are you going to mess with exactly? That will change over the course of the century. China-- I mean, depending on how you count it, China has already overtaken the US as an economic power. It certainly will in the future, and will as a military power too. That could change the underlying dynamics.
And so when we look at forecasting platforms -- like, what's the chance of a World War III, a war between great powers, over the course of the century -- it looks to be something like one in three, one in two. Bloody hell. And then, yeah, some discount for the chance that turns into a nuclear war. OK. Yeah, so 20% would seem about right. Obviously you can quibble, but whether it's 10% or 40%, it doesn't really change the qualitative picture. Yeah, there's no way you're going to get on that airplane. Yeah.
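(As a rough illustration of how those numbers combine, and not Will's exact calculation: a roughly one-in-three chance of a great-power war this century, multiplied by an assumed 50 to 60 percent chance that such a war escalates to nuclear use, gives something in the region of 20 percent.)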
So that's very scary. I think that would be extremely bad from a long-term perspective. Not least, you know, it would wipe Europe and the US off the map. I think that like...
There are plausible arguments that liberal democracy is actually quite a fragile thing, and we're quite lucky to be living at a time when it's widespread globally rather than authoritarian government. Then there's this further question of, would it kill literally everybody? I think that's very unlikely. Carl Sagan and other early anti-nuclear advocates sometimes made claims like, oh, this would be the end of humanity, everyone would die. I think that was seriously overstated.
I mean, you kind of don't need to overstate it. You're already talking about hundreds of millions dead from the direct explosions. And if there is a nuclear winter of some form -- which, I don't know, I think there's a significant chance, maybe one in three, that occurs; it's still kind of unclear, there hasn't been that much modelling on it, surprisingly -- then that's billions dead.
Nonetheless, I think that unless things change radically -- so you could imagine a world where arsenals are a thousand times greater than they are now, because in terms of global GDP it's not that expensive to create new nuclear warheads, or where there's some next level of nuclear weaponry -- possibly that gets into extinction-level territory. But otherwise I think, yeah,
some large number of people would survive. But the devastation that would involve, in terms of shifting the trajectory of human civilization in a negative direction, would be absolutely huge, I think. Okay.
Okay, so we've talked... I'm sorry about where this conversation has gone -- we were so lighthearted in the first half. It's not supposed to make money anyway. Who cares? So we've talked about engineered bioweapons. We've talked about nukes. What else are you worried about? What else should we at large be worried about? Yeah, so those are the things I'm most worried about in terms of catastrophe, just utter destruction in the world. The other side of things is,
it's a little bit harder to get to grips with, but I think it's as important, or maybe even more important, and that's this idea of value lock-in. Value lock-in, yes. Or ideological lock-in. So the idea is that, you know, we have been making moral progress. It was only a few hundred years ago, around 1700, that something like three out of four people were in some form of forced labor. And I talk about this at length in the book. It's kind of,
we could go into it, but it takes a while. I think that the near elimination of slavery worldwide, or at least its legal abolition, was
plausibly at least a somewhat contingent event -- it was the result of moral arguments and moral campaigns. What do you mean by contingent event? Contingent in the sense that it could have gone otherwise. Oh, right. So could there be a world that had the same level of technology we have now, but where, you know, rather than employees you have slaves working for you, and that's just regarded as a morally normal thing? Yeah. Like, I'm not sure. Maybe it was just inevitable that it was going to...
Because it was a very morally normal thing for absolutely ages. For ages. I mean, it was regarded that way, at least by the slaveholders, and it was across all societies: almost every society after the hunter-gatherer era, every agricultural society, had some form of slaveholding. And we listen to the Stoics and stuff, but they were chill about slaves. Oh, yeah, exactly.
Yeah, all of the philosophers and thinkers that we look up to... They defended slavery as well, or at least accepted it as part of life. These were people who were dedicating their lives to moral reflection: Plato and Aristotle, Immanuel Kant, one of the great rationalists. Enlightenment philosophers defended slavery. Even the Enlightenment philosophers who are regarded as anti-slavery were very tepid about it. Montesquieu is like, oh yeah, well, people shouldn't really be slaves. Unless it's hot.
If it's a hot climate, then, yeah, slavery is okay then. So part of what I want to do in the book is really make people appreciate that other people can have very different moral points of view. Yeah, because right now for us -- just to stay on the slavery example -- it's one of those things where "well, what about slavery?" is the trump card in any kind of
argument you're having, exactly, because it's just such a no-brainer that obviously slavery is bad. For the record: obviously, it's so bad. It's basically axiomatic at this point. But
300 years ago, if you and I had been born into basically any other society and were part of the elites of that society, we'd be rationalizing it. Yeah, exactly. Probably we wouldn't even be talking about it, because it was just so normal -- in the same way we wouldn't have been talking about vegetarianism. There are lots of things like that. And so I think it really could go either way. And then in the future, I think it could be that the world settles on some set of moral values that then just persists indefinitely. And yeah,
yeah. Okay. As in, at the moment there's been a lot of moral change, and I think a lot of moral progress in fact as well, but it's not inevitable that that will continue. We could see the untimely end of moral progress, and that could happen in a couple of ways. One, if you end up with a single world culture or world government, it could be
that that's not a very morally exploratory culture -- a kind of one-world government where one ideology just gets locked in forever. This is clearest if you look at the totalitarian regimes that we saw in the 20th century. The Nazis aimed for a thousand-year Reich. They were aiming, at least aspirationally, for control over the world, forever, with a single ideology.
And part of the reason we see moral change over time is that we have competition, moral competition, either across societies -- so why did the Soviet Union fall? In part, at least, it's that people knew there were these other countries with liberal democracies and a better life. If instead you had one regime globally, you'd lose that source of moral change.
Yeah, as in you can see other people on TV and stuff. Exactly: oh, they're free, hang on, that seems good. Yeah, exactly. Or internally: we live in a liberal society, so people can pursue all sorts of different moral forms of life. And then you can say, oh, hey, actually, for example, there are people in same-sex relationships, and that just seems better for them. And over time the better moral arguments can win out.
But again, if you have this kind of totalitarian regime, with strict adherence to the ideology, you don't necessarily have that. And that's the sort of thing that will then reinforce itself. Exactly. And you get these The Final Empire by Brandon Sanderson type situations, where you have a government ruling for a thousand years and you've got these little resistance rebels, but they can't take on the might of the empire. Exactly. Yeah, exactly. And again,
we've had, in some sense, near misses. I don't think the Nazis ever had a real chance of achieving global control, but it's not such a different way history could have gone that they could have really won out, or the USSR could have really conquered the world.
But then the most scary thing of all, or the most worrying thing of all, is future technology. Okay. So one other reason for change over time is just that people die, and that creates this kind of churn, and
that, again, is something that may change. One way would be if we simply stopped the biological aging process, so people had natural lifespans of many thousands of years. But a second -- and I'll caveat that this is where things start feeling the most sci-fi, though I really want to defend them as not sci-fi, as things that
could completely happen in our lifetime -- is, again, once you start having AI systems that are agents in the relevant sense. The AI systems we have at the moment are narrow. They're very good at playing Go or playing StarCraft or predicting or writing text. We don't have general systems that can perform many different tasks in the way that humans can. But probably that will change.
And there are good arguments for thinking that will change in our lifetime. In particular, the latest machine learning models use an amount of computation that's roughly comparable to a honeybee brain. That's nothing, isn't it? All of that amazing stuff computers are doing is still less than a bee. It's very unclear, even to neuroscientists, how much computation the human brain does. But
we can give ranges, and they span many orders of magnitude. It's very likely that, unless there's some other catastrophe that preempts it, we will be creating machine learning models, AI models, that have the same amount of computational power as the human brain. And
again, it's pretty likely that by the end of the century we will have also done a comparable amount of training, as AI researchers call it.
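To make the shape of that reasoning concrete, here is a very rough sketch of the kind of calculation involved, written as a few lines of Python. To be clear, this is not the model from the book or Open Philanthropy's actual analysis; every number below is an illustrative placeholder, and the real analyses treat these inputs as wide probability distributions rather than point estimates.

import math

# Illustrative sketch only: placeholder assumptions, not figures from the book
# or from Open Philanthropy's report.
ASSUMED_CURRENT_TRAINING_FLOP = 1e24   # assumed compute used to train a current frontier model
ASSUMED_TARGET_TRAINING_FLOP = 1e29    # assumed "brain-scale" training compute target
ASSUMED_ANNUAL_GROWTH = 3.0            # assumed multiplier in training compute per year

def years_until_target(current_flop, target_flop, annual_growth):
    """Years until training compute reaches the target, assuming steady exponential growth."""
    return math.log(target_flop / current_flop) / math.log(annual_growth)

print(round(years_until_target(ASSUMED_CURRENT_TRAINING_FLOP,
                               ASSUMED_TARGET_TRAINING_FLOP,
                               ASSUMED_ANNUAL_GROWTH), 1))
# With these placeholder numbers this prints 10.5, i.e. roughly a decade;
# change any assumption and the answer shifts, which is exactly why the
# real analyses work with distributions rather than single figures.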
And so it really looks like, within our lifetimes and within the development of AI, we're moving over the crucial boundary of artificial intelligence rivalling or exceeding human intelligence. It's plausible we're moving to a world where the smartest beings on the planet are digital rather than human. One set of concerns there is, well, what if the
AIs take over and disempower humans. This is pretty familiar now from Nick Bostrom's Superintelligence and other discussions. I talk about this in the book and I think it's really important, and there's tons of good work on it. We can talk about that.
But even if we navigate that challenge, there's still this new thing: it creates a moment of lock-in, because as soon as you've got artificially intelligent agents, you have beings that are essentially immortal. Any piece of software is infinitely replicable. Any piece of hardware will wear out, but if you can just copy and paste the software, then if you have...
either an AI that rules directly or an AI enforcing a constitution. So you could imagine -- let's suppose you're the global dictator. Yes, which, come on, you've always wanted to be. Yeah, it's been on the bucket list for a while. You have your ideology; you're a fervent zealot in the way that Hitler or Stalin was. You want that ideology to persist forever. What can you do? Well,
one thing you could do is try to create an AI agent, an artificial general intelligence, in your own image as closely as possible, so that it's like your heir and just rules in your place. Or instead, you could say, look, there are certain constraints I want for the world, and enforcing them will be in the power of this very powerful artificial intelligence. That's the kind of AI constitution I was imagining. So if my ideology was that the only books anyone should ever read are books by Brandon Sanderson...
I could theoretically make that a law and enforce it through some kind of AI agent that eliminates anything that isn't a fantasy novel. Exactly, and it enforces that eternally, because it's essentially immortal. And so that's why, even if we avoid the first issue, the risk of misaligned takeover by AIs themselves,
we still have this possible choice moment, a choice moment for the whole future of civilization: do we use this power to create a liberal, flourishing world where people can pursue their own different moral conceptions, and try to get to a point where we've really figured out what's morally correct and pursue that? Or is it the case that someone is
taking power and entrenching their ideology, and then using these immortal beings to enforce that ideology for all time? That is crucially important because, well, I spoke earlier about these two possible worlds. One is this flourishing liberal world where everyone is free and
can do what they want; the other is where all of the power is in a single dictator and the rest of the beings in society are effectively their slaves. I think the point at which we get human-level or greater-than-human-level intelligence could really be a hinge point between those two futures, and everything in between as well. Okay. What do we do about this stuff?
So what do we do? I think there are a couple of things. What do we owe the future? What we owe the future. So on the misaligned AI takeover worry, there's this huge technical research program now on aligning AIs with human values --
basically ensuring that AI systems do what you want them to do, and ensuring that they still do what you want even if they're much smarter than you are. So imagine you're a child and you've come to rule a country, and you need to appoint some ruler in your place because, you know, you're just a child.
And you're trying to figure out, of these adults who are much smarter and better informed than you, which are lying to you, which are just sycophants or schemers, and which are the ones actively trying to make a flourishing society.
That's a big challenge. And so in terms of what you can do, you can go into machine learning; many people go and do PhDs and then start working on this technical research program. It can also actually benefit from philosophers -- there are a number of philosophers involved -- because it's still such an early-stage field. It benefits from philosophers because they sit around and figure out
what is the right moral thing to do here? Actually, it's more just conceptual clarity on the nature of the problem that we're facing, and therefore what the best paths are to addressing that problem. So that's one set of things. And then the other big thing, and it is messy at the moment.
There is this big spectrum when it comes to issues affecting the long term. There's a bit of a trade-off between what are the things that may be most important and like really future shaping and what are the things that are most tractable. There's something like, you know, climate change, clean energy, just robustly good, very tractable. We can put like billions of dollars into this. Everyone can be working on it. It's robustly making the world better.
And then the thing that I'm coming to is like AI governance, which is, okay, we're getting more and more powerful AI systems. Yep. Who has control over them? Who has power? How is that power being shared? Is there just like a race between the US and China? It's like a new arms race over artificial general intelligence, or is it some collaborative thing?
It's very messy and hard to know that you're making progress on it, but people can go into policy and government work, and there's a bunch of stuff happening there to try and make progress on this problem. It feels like there's all these potential problems that have a pretty high, by our standards, probability of causing significant damage in our lifetime. Yeah, and I
in a way, I think even the extinction argument works even if you don't buy longtermism for whatever reason: there's a significant risk of really bad stuff happening to you in your lifetime, and to your kids and your grandkids. Within effective altruism, this has actually been a line of criticism of the framing of the book, not the content of the book. There are a lot of people who are just like,
man, these things could just kill us in our lifetime -- there's a significant probability of that -- we don't need to go via this longtermist detour. And so on the one hand, I want to convey what I believe: I think the fact that this impacts the long-term future is really fundamental. Maybe we're confused about the pandemics, maybe we're confused about AI and change our minds in ten years; I want something robust to that. On the other hand, maybe I'm just being a philosopher, writing the stuff that a
philosopher finds important. But yeah, I agree that like,
you don't need in any way to be a longtermist, you don't need to think past hundreds of years, to think that the things we're focusing on are enormously important. Yeah, because they could have very major negative effects in our own lives. Yeah. So let's say someone's listening to this and they're like, right, I'm sold. This sounds scary as shit and I need to do something about it, maybe with my career, maybe with my money, maybe with my time.
What are the starting points? Okay. I think the single first thing to do, obviously, other than reading the book -- is it out yet, officially? By the time this podcast is out, it will have been: the 16th of August in the US and the 1st of September in the UK. Okay, sweet. Beyond that, 80,000 Hours, I would say, is the first stop. It's an organization that takes this
paradigm of worrying about existential risks and the long term very seriously indeed. It has recommended career paths in areas like pandemic preparedness, AI safety and AI governance, as well as other areas we've talked about, like global health and development and climate change. And it's got enormous amounts of content, these long career profiles as well.
So that's something, that's probably like the first place to go. Another book that's also excellent, like these two books are complementary, is The Precipice by Toby Ord. I mentioned Toby right at the start of this podcast. He was the person I co-founded Giving What We Can with, and he goes into a lot of detail on these catastrophic risks that we could be facing. And then for other sorts of information, places that you can go, Open Philanthropy is a foundation that has some kind of
really incredible kind of research reports and sometimes just blog posts on just like different issues that they faced when trying to make grants in this area and issues that come up. So like, I don't know, looking at AI, for example,
I said artificial general intelligence could plausibly come in our lifetimes. At what date and with what probability should we think that will happen? They've put enormous amounts of work into this. So one was just this question of how much computation does the human brain use? And they spent months and months and months interviewing all the top neuroscientists to try and get answers to that question.
What's the trend in the cost of a given amount of computation over time, and what should we expect investment to look like? Again, they put an enormous amount of research into that. And then they created a model to integrate all of that, to try and figure out, okay, this is approximately when we should expect artificial general intelligence to
appear. And you get significant probabilities on that within the next 20 years, and it happening more likely than not within our lifetimes. So yeah, they have an enormous amount of research as well. And then for learning more casually, there's the Effective Altruism Forum, which has discussion on these kind of longtermist topics and a lot of the other stuff that's going on too. Nice. And I guess if someone's like, oh, there's all these causes and they all seem good and you've made a compelling case for a lot of them,
how do I know which path to go down? Is that by going on 80,000 Hours and figuring out my sort of personality type and what I vibe with? Yeah. I mean, I think the first thing to do would be, if you can, to try to reduce that uncertainty. So read these books, read 80,000 Hours in depth, try and get to grips with it yourself.
If you do come to the conclusion of, oh, they all seem equally good, then yeah, get involved with 80,000 Hours -- if you can, try to get career advice from them -- and they can maybe guide you to somewhere you have a particularly good fit. And it could be that the thing you best fit is -- like for you, for example, if you're going to start
hosting more things related to effective altruism, you can do that across a wide variety of causes, so perhaps you can have your cake and eat it too. You can. Yeah. If you were me, what would you be doing? Let's say you've got this platform, you've got 3 million subscribers on the main channel, you've got the podcast, you can access a lot of people -- probably not everyone in the world, but a lot of people -- for an interview. What should I be doing to maximize, I guess, impact? I mean, the most obvious thing is to move your podcast and YouTube channel
in a direction such that you can both keep growing it -- you're providing the content that, you know, draws people in, and you get to make passive income and stuff -- exactly, yeah -- and at the same time host people who you think are expressing the most important ideas. My guess is that -- I mean, it could be that you just go all in on the most important ideas, but potentially,
yes, my guess would be that that would kill the goose that lays the golden eggs. If you did none of it, though, then -- I mean, I think the sort of advice helping people live their best lives, as you put it, is making the world better. It's probably just not making the world better by as much as getting people to stop the next pandemic. And so there'll be some ideal middle way. But you could track that. You could look at what sort of conversion you get from
"how to have a passive income" versus... That's a very good idea. Yeah. We could almost do that, because we do a bunch of ROI analyses in terms of how much money the YouTube channel makes and stuff, and sponsors are obviously super interested in that. But I guess we could make a model that's like,
what is our impact as a YouTube channel and as an organization as a whole? Because we got an email from Luke Freeman, who runs Giving What We Can at the moment. When I made a video about it, they got, I think, within a few-week window, an extra million dollars in pledges. And he was like, oh, that's however many dollars divided by $5,000 lives that a single video saved. And that was a really badly performing video in the grand scheme of things.
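(For what it's worth, at the roughly $5,000-per-life figure being used here, an extra million dollars in pledges works out to something like 200 lives from that one video, assuming the pledges are actually followed through on.)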
So if we could do more of that, if we could get more people to talk to 80,000 Hours and just be a little bit more deliberate with their careers -- if we spawn the next SBF... Exactly. You can expect that kind of impact. Yeah. I mean, it's not crazy at all that you would create someone who's generating billions of dollars worth of value for good causes. And this has happened in the past. I mean...
the other podcast interviews I've done, the most impactful was Sam Harris, in significant part because he, like you, took the Giving What We Can pledge and started giving $5,000 for every podcast episode he did -- he's now actually giving much more than that, but that was the start -- so that each one could save a life. That just meant an enormous number of people took the pledge and got involved in effective altruism. That's cool. I didn't know he did that.
That would be like every podcast episode saves a life. Yeah, exactly. That's sick. You could do the same. Yeah, I'm going to look at the financials. That would be fun. Yeah, exactly. And then there's just that idea -- I know on James Clear's website, at the bottom of it, he donates some percentage; I think he's taken the Founders Pledge or something, so 1% goes to the Against Malaria Foundation. And there's a little tracker in the footer of his blog about how to build better habits that's like, oh, perfect:
proceeds from my book sales have to date saved 24 lives. Yeah, which makes it very tangible. Exactly, very tangible. And yeah, it would make sense for you as well to be covering
a broad array of causes. I think if you were just doing pandemics and AI every week, it might get a bit stale. Yeah, then I'm going to become the tinfoil hat guy. Exactly. But there are just enormous numbers of good things that people in effective altruism are doing that you can highlight. And there's also just demand, I think, for positive, optimistic stories about doing good. Yeah -- how are you so cheerful about this stuff? It's weird. It's funny, isn't it? You know, I put the risk of an all-out nuclear war at 20%. It's like, what the...
It's so funny, because on the spectrum of people worried about these risks, I'm comparatively optimistic -- my risk numbers are quite low. There are some people, in particular in the Bay Area, who think there's a 90% chance AGI is coming in the next 20 years, say, and,
given that, a 99% chance that everyone dies. And some of them are just like, yeah, I'm just a cheery guy by nature, and I'm joking while talking about this. Yeah. So I think this is another good example of, you know, the doctor mindset, where it's now a job and you just revert back to what your natural personality is. Yeah. But the war stuff really gets me. Like, I really...
War is very common throughout history. You can read about it. Like millions of people have died. We were very close to nuclear war. Like...
Two cities in Japan were nuked. Yeah. This animosity between the US and China is already growing. India and Pakistan do not have a good relationship. There was a skirmish between India and China in June 2020 where people died -- a fatal skirmish between the two most populous countries in the world. Yeah.
With the AI stuff, at least, I can understand if people are like, "Ah, this feels like speculation." But war is just a very real thing, and it feels viscerally very scary to me. But you stay cheerful because of just personality vibes. I guess it's kind of nice to be able to talk about this stuff in a nice, well-lit room with natural light. You're like, "Yeah, risk of human extinction." Yeah. Well, there's someone, perhaps if you do get more guests on, a researcher at the Future of Humanity Institute called Anders Sandberg.
And he's one of the most broadly knowledgeable people I've ever met. He often works on thinking about worst-case scenarios -- so cobalt bombs, for example, which are possible designs of nuclear weapons that would be maximally destructive. He's also maybe the happiest person I know. He's just cheerful, joking about, oh yes, this is...
Yeah. And I think that's just his personality; Anders is just a joy. But yeah, I do need to be checked sometimes -- we are talking about morally horrific and very serious things -- so the cheerfulness is a balancing act. So what does your day-to-day look like? It's a great question. It's very weird at the moment, and the last two years during the pandemic were different again. I'm often very split. Like, I'm a,
you know, philosophy professor. I'm also on the boards, and a co-founder, of a number of nonprofits. I'm also doing a lot more outreach, like this sort of thing, and doing writing and research. So it's often very split. When the pandemic happened, I had this almost two-year period of just,
this is the best time I will ever have in my life to write a book, so this is going to be the biggie, this is what I'm really working on. That day-to-day was very different from how it is now. It was waking up at 7 or 8 a.m., making myself a nice flat white or something, and I'd have two,
two intense writing periods, like three hours in the morning and three in the afternoon. Maybe do some other stuff outside of that, work out every day, have a nice long lunch break. Because with writing, it's more about peak performance, so my hours weren't that high.
What does your writing process look like? Because I'm writing a book at the moment, so I'm trying to figure out if there's any magic secret sauce other than just sit down and write, which is what I actually need to do. I mean, that is by far the biggest thing. I'm laughing because I once got writing advice from a friend -- I won't name him, but I think he'll be fine with this. He's a bestselling author and he was giving me all of this advice. It was very standard advice, like,
you know, write every day, do it first thing in the morning, have blocked time, focus on input goals. And I was like, okay, cool, I'm not learning very much. And then he was like, oh, and I write drunk. I was like, what? He's like, oh yeah, just drink wine, like every day. He's not an alcoholic, but it just helps him. So for me, I got pretty intense about optimizing things. Definitely figure out when you do your best work. I think most of the value of my day is in the first three hours. Okay.
So I'd try not to check anything and protect those hours. Then the afternoon can be a little bit more chill -- maybe it's a bit more reading and research. Every evening I would check in with my chief of staff at the time -- so I had a whole team working on this -- and with her we would review the day.
We would set goals for the next day. So I'd set input goals and output goals. I would track the amount of time I was writing; I wouldn't track anything else. The only numbers I was keeping in terms of inputs were writing goals, because everything else was less important.
And I'd also say-- Input versus output, what do we mean? Input is how many hours I spent writing. I see. And output is how many words you create? Yes, or more often sections. So I'd say, tomorrow I'm going to do six hours of writing -- I count that as a full day; it wouldn't be very often I'd do more hours of writing than that -- and then I would say, I want to write these four sections or these three sections.
And so every day she would hold me to that, ask if I'd done it, and if not, why. Yeah, accountability. Exactly, and it's really huge. Deadlines are just very big too. It used to be that when I had a writing deadline, I would post on Facebook and say, if I don't submit this paper by this date, then I will give $50 to the first person to message me, or something like that.
And you need to spend it on something I don't approve of. And so a couple of people who eat meat were like, OK, yeah, I will take that and I'll spend it on boneless chicken thighs. And I'm like vegetarian.
And there was one week I worked like 100 hours, like worked through the night because I was like, I just cannot give this person money to buy chicken. So it's extremely effective. Some people judged me on Facebook for being a weirdo. But yeah, I find these kind of deadlines extremely effective because writing more than anything is like psychological hurdles. It's easy to procrastinate. If you have goals, you can really get through it. Yeah.
Then I'm also very heavy on just getting words on the page, even if it's shitty -- just write something and then revise it afterwards. Though looking back, I maybe wish I'd done a bit more figuring out of the logical structure, because,
this book was a real challenge in many ways, but one reason it's a challenge is that it's trying to do two things. One is to be something that's moving the knowledge frontier forwards -- there's a lot in the book that's novel work you can't find even in academic journals or elsewhere.
But at the same time it's trying to be a broadly accessible introduction to these ideas, and normally those two things really trade off. I did think about the plan of the book, and I decided, okay, if I'm going to fail in one direction, it's probably going to be by not being accessible and understandable enough. So I leaned really heavily on that.
But it meant that sometimes the structure of a chapter is like, oh, this works narratively, but people get a bit confused about the logical argument. You can actually notice it through the course of the book: chapters eight and nine, which are just philosophy and which I've put towards the end for the advanced reader who really wants to dig in -- there, the structure of the chapters is much clearer, because I was less worried about the narrative stuff.
And there was also a bit of an issue where sometimes I'd think, oh, this would make for a really good narrative, and it's based on certain studies. Then I'd get a research assistant to try and replicate the studies and it's like, oh, it doesn't check out. And I'm like, oh, that's annoying -- annoying for the narrative. It'd be so nice if this were true; there are so many things that would have been so nice if they'd been true. But yeah, as I mentioned before this podcast, we put in
at least a couple of years' worth of time just fact-checking.
But yeah, in terms of the process, the huge thing is just no other commitments. Some people are able to write in the morning and then do other stuff in the afternoon. Yeah, I've been trying that. For me, I just need long chunks of time. And so for both Doing Good Better and this, I just disappear from the world. For Doing Good Better, that was about six months. For this book, it was a year and a half. If I'm doing two things, I'm like...
Because part of the issue is like everything else is more salient than writing a book. So what are you thinking about in the shower? If there's something else going on, your brain's thinking about that. You're not thinking about the work. And then I got absurd amounts of feedback on it as well. So I had two big feedback rounds.
Well, no, actually, essentially three. To begin with, I prepared a talk with the core ideas and went on a speaking tour, because I wanted to follow the startup advice: talk to your users. I wanted to really get in the habit of presenting these ideas and getting feedback on what resonated and what didn't. That was before writing the book.
Then I wrote a draft of the first five chapters and got intensive feedback on that -- maybe 40 or 50 people provided in-depth comments. Then, six months after that feedback round, I'd written an entire first draft of the book. And then again, it was maybe even more --
I wouldn't be surprised if it was closing in on 100 people who provided comments and feedback on it. OK, so lots and lots of work went into the book. And then, what's your book going to be about? It's basically about how to do more of the things that really matter to you by harnessing energy from the stuff that you're doing anyway. OK, fantastic. So sustainable productivity, and how to make it fun and sustainable and meaningful and stuff. Then a question for you: I don't know if you're going to have a team, but something that's unusual about this book is I had a team of people.
I was obviously the primary author -- no one contributed more to the book than me -- but I had researchers that I'd employed, this large network of contractors and consultants. There's tons of history in there, and I'm not a historian. I did read a lot of history because I really wanted to get up to speed on it, but I also had a historian on contract. And the reason I focus on slavery for so much of chapter three is that I asked him,
what are some examples of historical events that seem very important but were not inevitable, that could have gone either way? And he came back and said, I think the abolition of slavery. And I was like, that seems totally mad; I thought that was just an inevitable development. He's like, no, actually, the mainstream historical view is that it really could have gone either way.
Again, it's hard to know, and that's just his view, exactly. But I got really obsessed by that and really dug into it. But then also climate change -- we haven't really talked about climate change. What's the deal with climate change, while we're here? Because I did have it written down as something to ask you about. Okay, for sure. I actually maybe spent more time on climate change than anything else, even though it doesn't make a huge appearance in the book, just because there's
this huge economics literature and huge science literature to really dive into. So the way I put it in the book is that decarbonization, in particular via funding innovation in clean energy, is what I consider the baseline longtermist activity. Okay, the baseline. Exactly. So it's like, if you're doing nothing else, you should at least do this. Exactly. This alone, I would say, proves longtermism as I define it, and then maybe we can do even more. Hmm.
And why do I think of it as a baseline? Well, in the book I describe clean energy innovation as a win-win-win-win-win. Okay. And I now actually think there's a sixth win that I should have put in. But why is that? So, even if you're just looking at the near term, at people alive today: burning fossil fuels kills three and a half million people a year. From what? Particulates. So air pollution is
this enormous health cost that people barely talk about. So even within Europe, very unpolluted part of the world, we lose about a year of life expectancy just due to the particulates from fossil fuel burning.
And when economists do the analysis, they tend to find that even if you're just looking at the near-term health costs, you've got an argument for very rapidly decarbonizing the world economy. Right, so that's one. Second, over the medium term, obviously, are the climate change impacts, which we all know and understand very well. Of course. Third is that if you're doing it via clean tech, you can decrease energy poverty. Energy poverty?
Just that people in poor countries have very limited access to electricity and energy. Oh, so if you build solar panels on their houses... Exactly, yeah. Also, you're just making energy cheaper. And you're also advancing technological progress -- we haven't talked about technological stagnation, but I have a chapter in the book about that, and I think that's a big deal and a big worry too. But we can...
So the other worries weren't enough -- I wasn't worrying you enough. No, I'm like, oh damn, this is now one more thing I need to worry about. Yeah, there's just more there. But then, in the more distinctively longtermist ways, I think clean energy tech helps with extending the lifespan of civilization, and then the sixth thing that I didn't really talk about is that I think it's good from a values perspective for the long run, too. So, on extending the lifespan of civilization:
imagine a catastrophe that doesn't result in human extinction but that is so bad it sends us back to pre-industrial levels of technology. Yeah, so perhaps it kills 99.9% of the world population. There's a question of whether we would recover from that -- as in, would we just stay as farmers forever, or would we redevelop science and industrial, then post-industrial, technology?
And again, I have a chapter in the book discussing that question. Okay. I actually come out very optimistic: I think it's actually very, very likely that we wouldn't just go back to the stone age forever. We would be able to reinvent... Exactly, we would be able to reinvent this microphone at some point. At some point, yeah. There are many, many considerations there, but one is just looking at the potential blockers: is there any resource we have used up now that
we wouldn't have again, that seems essential? And there's nothing remotely plausible as essential that we have used up or might plausibly use up, with one exception, which is fossil fuels. In terms of easily accessible oil, we've basically used that all up; easily accessible coal, we still have like 300 years to go. Nonetheless, if you told me that it's 300 years' time and we've burned through all the easily accessible coal,
I'm not totally shocked. I think it's unlikely -- I think we are just transitioning away from fossil fuels, and I'm
actually pretty optimistic about the progress we're making. But it still just provides extra reason to keep fossil fuels in the ground, in case we need them at a later date. Exactly, so we could dig them up fairly easily. Because if you're trying to build a fracking rig, how are you going to do that with 18th-century technology? Yeah. So this is an additional reason -- it's like a last resort. A last resort, exactly. It's like emergency civilizational insurance: it helps rebuild civilization.
Then the final thing, the one I didn't include, is the pathway via values, which the invasion of Ukraine has just made much more salient. Countries that rely very heavily for their GDP on oil and gas and coal often...
Actually, no, I'll flip it the other way. The countries that seem the scariest in terms of their governance are often nonetheless powerful and economically developed. How do they manage both of those things? Typically, it's because they have enormous fossil fuel reserves and are selling fossil fuels. Yeah, so they can basically print money. Exactly. Some people describe Russia as a gas station with a military base.
And that's kind of how you should understand it. Similarly with some of the more worrying Gulf states, or dictatorships that get set up in Africa as well. So there's this thing called the resource curse, where,
it's a phenomenon noted by economists that among very poor countries, especially in Africa, those with the best natural resource deposits tended to have the worst economic performance, because there's an incentive for a military coup: take power, become a dictatorship, extract the natural resource wealth. They can use that to prop themselves up; they don't need to listen to the people; there's no incentive to build technology and stuff, because...
Exactly, or have a democracy. And so the final thing is just that if you move off fossil fuels, you've decreased that pathway to... Yeah, six wins, exactly. So I think of clean energy tech as very unusual in this space in that it's very robustly good -- I've given you these six different perspectives and it looks good on all of them. And then secondly, it's just an area where
we really understand what's going on: the health effects of pollution, the fact that it advances technology, the impact on climate change. This is not a speculative area.
And then finally, why is it a baseline? We could just dump tens of billions of dollars into this. And it would be able to absorb that and do something useful with it. Exactly, yeah. Whereas some of these other areas, it's like, OK, they're very underfunded at the moment. They've got tens of millions of dollars. How much can we scale that up? It's unclear. Probably we can, but it's at least not as obvious. Nice. Yeah.
The idea of sustainable productivity, the vague structure for my book is as follows. The top level idea is that really the secret to sustainable productivity is to find a way to get energy from the things that you do so that you can give more energy to the things that you love to do.
Or something like that -- it needs some wordsmithing. So, three parts to the book, the classic three-part structure. Part one is called Energize, and it's about the three energizers, the three Ps of power, play, and people, which are the things that make anything we do -- whether it's work or projects or chores -- more energizing. Then part two is like, okay, you've got that energy, or rather,
it's all well and good that stuff energizes you once you're doing it, but a lot of people struggle to get started. So that part is about the three blockers that cause procrastination, which we're calling obscurity, anxiety, and inertia. Obscurity is basically the goal not being particularly clear, et cetera; anxiety is self-doubt and all that crap; and inertia is just, oh, I can't be arsed to get started. And then the final part of the book is Sustain.
So it's part one, Energize; part two, Unblock; part three, Sustain. How do we make this all sustainable? One chapter is about the idea of consistency and just seeing things through to the end. Another chapter is about recovery and rest and how that relates to burnout, and how we think about rest, recovery, and recharging. And the final chapter is about purpose and meaning as the ultimate thing that makes anything sustainable, if you're working for a cause that you really care about.
I'd love to hear any thoughts on that structure, that idea, anything that comes to mind, anything that sparks a follow-up like a book recommendation or anything like that. Okay, sure. I mean, obviously I would make a plug for EA being in the final chapter. Absolutely, yes. And I'm sure you know The Procrastination Equation. A second thing:
one thing is, as I'm sure you know, what's the value of a book? Almost no one reads books. What happens, though, is that it builds up your brand and cachet and ideas. And yeah, what did you call it? Lead magnet. Lead magnet, exactly. And so this is exactly right: people are like, oh, there's this book, and they've got this thing to hang the ideas on.
And so there just needs to be one idea that's very magnetic and memorable -- one story or something. I don't know if you have that yet, but think about that as like 50% of the value of the book, even if you spend half your time just getting that one story. Unfortunately, I don't actually have this for this book, and it's a major issue; I'm still trying to think about what can work there. But yeah,
yeah. Peter Singer's drowning child analogy, you know, takes up two sentences in his original article, but it's the thing everyone remembers. Everyone remembers it. So do you know Made to Stick, by Chip and Dan Heath? Yes -- I haven't read it. Extremely good book, one of my favorites. It's about how you write sticky ideas, and it's just very powerful. It's almost always stories. So if you can get, like,
two famous people, one of whom horrifically burned out and the other of whom had this long-running success over time -- I don't know, like Arnold Schwarzenegger or something, how did he keep going for so many decades? Yeah, exactly. That's ideal. You've got this 80-year-old who's still crushing it -- like, how?
Then that's really memorable. Nice. And it's much more memorable than, like -- you gave me these three Ps and I've forgotten them already. I've got to remember those. Yeah, but a story... It's one opening story and you will say it a thousand times, in all the podcasts and all the things. Exactly. And that's the thing that people will hear and they'll be like, okay, Ali Abdaal, this book.
It's that story, and that's, again, most of the value. So that's a big thing. And naturally, that's the thing you lead with as well, and it appears in any excerpt. In terms of the content of it, it's kind of interesting.
I'm just not sure how true it has been for me that I get energy from one set of things and that transfers over to being energetic about other things. Yeah. So, in a way, that causal link is not quite what we're trying to say. Okay. What we're trying to say is that it's really hard to be productive in a sustainable fashion if the thing that you are actually doing really drains your energy. Yeah. So let's figure out a way to make it more energizing. Okay.
Almost as a side effect of incorporating autonomy, a sense of play, people, accountability, and stuff like that into your work, you'll be more productive at the thing, but you'll also be more motivated at the end of the workday, even if, for example, you don't like your job. Okay, so it's kind of like a manual for, in practice, improving grit -- improving the ability to achieve large projects over a long time period, essentially. So that could be another story: if there's someone who, like,
achieved something very impressive, but it took like 20 years -- that would be a very good story. And then, how did they do that? Okay. That is something that resonates with me. I find it like, why have I been successful? I'm kind of confused by it. Obviously a big part of it is luck. But I think one thing I do do is have long-term goals that I really work towards where other people don't. Like writing a book, especially
this is a real effort of a book. Yeah. It does involve just plugging away at something over a very long time period, and there are lots of people who I think have the ability to write such a book, but lots of people just don't. So what's going on there? One thing is,
yeah, that feeling of purpose, like you say. I just feel morally motivated: if I'm slacking off, that matters, the world is getting worse. Obviously that's easier to feel when it's about people in sub-Saharan Africa -- existential threats can be a little bit harder to visualize -- but nonetheless, I still feel morally motivated and think I should push myself every day.
What does the goal look like? Is it like an input goal or is it like an outcome goal or is it like I want the book to hit X number of sales or like? Oh, I mean my goal is much bigger than the book. I have a vision for the world where everyone has this kind of shared goal of just making the world better and it's just agreed we've got this like impartial goal and everyone's just willing to reason about it and that
that's understood to be just like part of living a good model life in the same way as like not being a racist, you know, not like murdering and so on. And I'm just like, that is achievable. Like we could have that world rather than the world where it's like us versus China and like the, you know, the woke and the anti-woke and everyone's fighting. And I'm like, no, instead it's kind of like, like within EA, like,
it's just like, you've got all of these people that are like, okay, we want to do as much good as possible. We believe in reason and argument as the way of figuring things out. And people can have disagreements, but there's always this feeling of a shared project. And I'm like, okay, I want to just expand that to the world as a whole. Yeah. And I'm like, okay, how do we get there? And then it's like, well, one path on this — and like, obviously I'm just trying to move the world in that direction — but one path in that direction is, you know, having a book and convincing more people. And that's like,
that's motivating. And then there's a sub-goal of, okay — when I was writing it, I definitely wasn't thinking about this. Now it's like campaign mode or something. I'm like, okay, I want to be on the New York Times bestseller list, so what would that involve? When I was actually writing the book, the sub-goal was more just, I want to write something that I'm still kind of happy with when I die, or I want to write something where the obituary is like, yeah,
Will MacAskill, who wrote What We Owe the Future. Okay — I was like, this is the output I wanted. So I think, yeah, I was much more focused on just, how can I create the most important book I can? Yeah. So it's like, during the process of it... Because I think for me, with the book, if I think too much about the New York Times bestseller list, it demotivates me to write the book, because it just feels like pressure. Yeah, I really wasn't thinking about that. Yeah.
Also, it kind of doesn't matter if not that many people read it. I mean, again, go to chapter eight — it gets pretty technical. The endnotes — they got us to put a lot online, but it's still a big chunk of the book. I got to do all of the nerdy intellectual content I wanted to put in there, because not that many people will make it to chapter eight, and those that do will be interested. So it's actually kind of liberating. You can partly do it for you.
So yeah, why do I think I'm good at longer-term aims? I think a big thing is having a vision that I ultimately want to achieve, and then it's this backward induction of, okay, so we've got to do this thing and this thing and this thing, and that ties to now — like, therefore tomorrow I've got to do three hours of writing in the morning. What do you find helpful when you find yourself procrastinating? Normally the main thing I want to say is structuring my life so I don't procrastinate. There are three phones.
Yeah, what's the deal with the three phones? So I used to have one phone. I used to procrastinate a lot. I have a horrifically addictive personality — there's always got to be something that can be consumed: Facebook, Twitter, Reddit, BBC. That's kind of one set of things. Another is just like,
which is also very common: always wanting to check email or messages or some stressful work thing, and therefore not getting a work-life balance. Whereas I'm extremely in favor of just hard lines between: are you using time that's optimized for time off, or are you using time that's optimized for work? There should be nothing in between — I think that in-between is a really bad state to be in.
Because I guess in between you're sort of half doing like, "Oh, I should be doing work, but I'm not." Lots of people do this. Even since uni, I was like, "Yeah." People are like, "Oh, I'm in a library, but I'm also chatting." I'm just like, "What is this?" Or the worst is if it's like, "Oh, it's my day off, but I'll do a bit of work." It's like, "No, just do one thing or the other." Why do you find that helpful? Why do I find that helpful? I just think you get the worst of both worlds. You're neither most productive nor having the most enjoyment.
if you're doing something that's half work, half enjoyable. Whereas you want chunks of pure rejuvenation time. And then when you're working — I mean, in general, there's this underlying view, which also underlies a lot of effective altruism, which is that most value is in the tails.
So it's the same idea that goes into the 80/20 rule. Yeah, if there are like 10 things you could do, of all 10, how much value comes from the best one? How much value comes from the worst one? Maybe it's 50% from the best one and 1% from the worst one. That's true for leisure too: of all the good things you could do with your time off, if you're doing something that's kind of crappy, maybe that's like 10 times worse in terms of how rejuvenating it is than the thing that's best. Similarly, when you're working — if you're like, oh, I'm doing this half work — it's probably just, no, you should just focus on the best thing. Actually, that leads to another bit of a guiding motto or productivity technique for me, which I call "eat that elephant."
So do you know the book Eat That Frog? So yeah, the idea is you have some time, like three hours, and the hardest thing — you just do that first. Eat That Elephant is like that, but for much larger tasks. And so I'm normally like, there's just something that's the most important thing. Do nothing else until you have finished that most important thing. And maybe that takes like two weeks,
and you've got this enormous inbox and you've not responded to any of your emails, but now you've done that most important thing. Maybe it's an entire book. Maybe now I'm in a situation where, you know, I have this team around me, so I can still handle those other things and things don't clash. But prior to that, which was most of the time, it would instead just be, okay, dropping everything, completely zooming in on this most important thing.
And I think that's really... or, I guess the Future Fund was kind of like that as well. I had this whole plan, and then it's like, nope, there's this new foundation. I didn't quite drop everything — I still managed everything else — but I'm now focusing on this thing. And I think it's served me really well, because I think people aren't willing to switch enough. So a big thing is just structuring my environment so that, for the most part,
it's hard to procrastinate. So I have a work phone that has work email, Slack, and Signal, which I use for other messages. And then I have a personal phone, which doesn't have those things but has other things. And then I have this old, crappy phone that takes ages to load up. And that has the indulgences.
And actually, since I've had the phone, I've basically never used it, like almost never use it. But if I was like, oh, I really want to play this iPhone game, then it would be on that phone. And it lives in like a drawer. And so it's like, I can still play it anytime I want. I just have to load it up. It's got like this brutal back screen.
You just have to load it up and wait the five, ten minutes that it takes to load, because it's this old phone. Like, get it from the drawer. Yeah. So that's essentially equivalent to work phone, personal phone, and like a games console. Yeah, I guess so. Yeah. Like a Nintendo Switch or something. Exactly. Like, I've got one and I haven't used it in years because it's just in the drawer. And I know I always could use it if I wanted to, but I just cannot do it. Yeah. So I think the, like, how do you overcome procrastination? The main thing is like,
don't be put in a position such that you can procrastinate. So like similarly, like how do I avoid eating sugary food or something is like don't have sugary food around. The other thing, but then, yeah, it's impossible to avoid all forms of procrastination.
Yeah. The accountability stuff has been by far and away the biggest thing, and tracking time when I've been writing. Yep. Otherwise it's so easy to procrastinate. So like, input time and output time? So input, literally just number of hours. Yeah. So do you mean like output goals — like, oh, from nine to twelve I was writing? Or do you mean like, from 9:01 to 9:18 I was using Google Docs, therefore I was writing? Yeah. As in, I've got Toggl Track on my computer. Yeah. So it's like —
I press it. If I go to the bathroom, it goes off. So that three hours is like three full hours. Like, I take a break, it goes off, and I start again. Oh. And I'm normally like... So it's like actually three whole hours? Yeah, yeah — actually three whole hours, which is probably four hours of actual clock time. Exactly, yeah. It'd be about four hours. So like, yeah, typically... Oh, damn. I'd be so productive if I did that. Yeah. If it was like... And you literally notice...
People take a lot longer breaks than they would realize otherwise. So yeah, typically during the pandemic when I was writing, it would be something like eight to twelve — three hours of writing — and then two till six — another three hours of writing — and then a few other things in the evening after that, ending at seven.
Final question, I guess, on this front: power, play, and people, essentially. Basically, power being a combination of autonomy and mastery — taking pride, taking control, taking responsibility for the thing that you're doing. Yeah. Because weirdly, somewhat counterintuitively, putting more energy in actually gets you more out of it. Whereas if you're disengaged with your work and you just put in the bare minimum, you're actually just sucking the life out of yourself.
Play basically being just like tracking progress with a little progress bar, like they do in video games, but also treating things as sincere rather than serious — approaching things with lightness and ease and in the spirit of play. And finally, people: finding a way to ask for help from others, finding a way to help others, and finding a way to work with others through accountability and things like that. Anything come to mind, like stories from your life, where you're like, oh, I did this thing and this really helped?
Yeah, it's interesting with the people one. It was a little different from how I would describe it. For me, it's just: if you're around high-performing people, you perform better. Humans are imitators, and we imitate — it's actually really remarkable even compared to other primates. We imitate even in cases where we don't understand what's going on. Whereas monkeys, or apes —
they need to know why another ape is doing something before they will imitate it. Yeah. But we just imitate anything that's happening. And that just means, like, if you're surrounded by just like,
really hardworking, high-performing people who are really focused, you just start doing the same. And if other people are like lazing around, you just start doing the same. That's just true for all of us. And so when you said people, I thought it was going to be more via this mechanism that's much less kind of formal or much less like defined. And it's more like...
picking up the vibe. It's huge, though. Culture is just huge. So that's the thing I was going to say. Other things. Oh, one meta thought: do you know Ben Todd? Yeah — you co-founded 80,000 Hours with him. I think, firstly, I just think you'd get on really, really well. He's also just the person who's thought most about this stuff of anyone I know. Nice. Would he be up for coming on the pod?
He would love that. Sick. Is he based in the UK? Yes. Oh, perfect. His office is a 10-minute walk away. Incredible. That would be so good. And he's in fact writing a book now, because he was running 80,000 Hours and has now
moved into a president role so he can focus more on writing, and he wants to write a book. But he's really deep on the productivity stuff, and also how that fits into your whole life, and he's really on top of the evidence on it. Yeah, I think you'd get on really well. Yeah, can you do an intro? Absolutely, yeah. In terms of things that have really made a big difference: medication. So —
Antidepressant medication? Like, because you're depressed or because of the migraines? No, I mean, I started on antidepressants because I was depressed, from like 2011, and then switched to this particular type because of migraines. Right. But yeah, just like, you know — procrastination is a psychological phenomenon, but that doesn't mean you need a psychological response to it or something. And loads of people I know have just enormously benefited, and I honestly think that people shouldn't
necessarily be in the mindset of, am I depressed or not? Rather, they should be like: if I take this pill, will it have a net positive impact on my life?
Again, that's not necessarily the mindset doctors always have, but I know people who were maybe mildly depressed and then started on one in particular: Wellbutrin. Wellbutrin — so it's an antidepressant. In the UK it's prescribed only for smoking cessation; in the US it's prescribed as an antidepressant, and it's one of the leading ones, actually. Right. It's a very different mechanism from SSRIs, and
people find it just particularly good — in particular, it's high energy. When it was first developed in the US — again, US medical advertising — it was marketed as the happy, horny, skinny pill, because it doesn't have the normal anti-libido effects and it doesn't cause the usual weight gain. But actually, it can be good for just high energy and focus.
Similarly, a lot of people I know who, yeah, struggled with this stuff for ages and then got diagnosed with ADHD — that's just huge. Now that they're on medication, it's just like, oh yeah, that's a problem of the past. In terms of other things I've done, yeah, another one is just being on top of my mood. Like you described me as, yeah. Do you questionnaire yourself or something? Or like, how do you? I now — because I was working so hard, I was, yeah.
There was a point maybe a month ago, or a month and a half ago, where I was like, okay, I'm clearly on this burnout trajectory. So I started monitoring my mood and a bunch of other indicators, and over time I'm going to see how it all correlates. What were those indicators that you were monitoring? How did you monitor your mood? Yeah, so I just have a spreadsheet. It's got dates. And then what do I put in?
I put in my mood, my productivity, my number of hours worked. I put in whether I've been traveling. I put in my location. I put in whether I'm in the same location as my partner. I put in whether my partner's feeling sad. Because again, I just predict that that has a big impact on me. I put in whether like, did I have a nap? Did I exercise? Did I meditate?
How much caffeine did I drink? And there might be some other things. And then I also put in qualitative comments as well. And so I'm going to see what patterns emerge over time. You don't do this all the time? Just when you sense yourself on a burnout trajectory? Yeah — this is a new thing that I'm trying as of a few weeks ago, but I think it's actually already helpful, even just for the, hey, this is now something I'm monitoring and thinking about.
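A minimal sketch of what a daily tracking sheet like this could look like, just to make the idea concrete — the column names, the example rows, and the correlation step below are illustrative guesses, not Will's actual spreadsheet:

```python
# Illustrative sketch only: a simple daily log along the lines of the spreadsheet described above.
# The columns and example values are guesses for illustration, not the actual setup.
import pandas as pd

log = pd.DataFrame([
    {"date": "2022-08-01", "mood": 7, "productivity": 8, "hours_worked": 9, "travelling": 0,
     "nap": 1, "exercise": 1, "meditate": 0, "caffeine_mg": 150, "notes": "good writing day"},
    {"date": "2022-08-02", "mood": 5, "productivity": 6, "hours_worked": 11, "travelling": 1,
     "nap": 0, "exercise": 0, "meditate": 0, "caffeine_mg": 300, "notes": "long travel day"},
])

# Over time, check how mood moves with the other (numeric) indicators.
numeric = log.drop(columns=["date", "notes"])
print(numeric.corr()["mood"].sort_values())
```

With only a few weeks of rows the correlations won't mean much on their own, but even a rough version like this makes the "monitor it and look for patterns" habit concrete.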
So, finally, what I want to ask you about is completely different and is, again, asking selfishly: how do you think about the balance between your personal income versus the money that stays in the business — whatever your business looks like — to hire people and researchers and all that? Oh, okay. I mean, maybe I'm not the person to advise on that or something, because on my personal side I've just got this cap, and then everything else is, in a sense, either in the business or going to charity. Okay.
As in, so like in our case, we have a bunch of cash in the business basically, which is currently not being put to very much use. And even though I take money out of the business, there's still like way more in the business than I need to live on. It's like, what I'm thinking is,
what do we do with that? Yeah. If we donated 100% to charity, it's like, well, then there's no operating capital. Oh, for sure. Yeah. Okay. Yeah. Um, I think in the first instance — yeah, no, it's cool, this can be bonus material or something. I mean, yeah, if I were you, I would think of it as kind of a failing that you
have this money in the bank and you haven't thought of a way of like turning it into more success. Like, um, I'd be like, okay, if I'm not doing this, I'm like screwing up in some way where, yeah, I don't know. Like I would think something like,
can you be getting, let's say, a 15% return on this — like a 10, 15% return on this? If not, then maybe donate it. But so there's this hard question of, at what point do you keep it in? I mean, the bar actually might be different for you — like, even lower — because you're not going to be —
like your path to impact, assuming you get on board with like bringing people on the show and so on. But yeah, your path to impact is not gonna be via making money. It's instead like you wanna build up this thing. We could like start putting some numbers on it. But anyway,
There's going to be some rate of return that you can get from... So you've got the cash. If you put it in a bank at the moment, you get 0.1% interest or whatever. Yeah, the S&P on average is like 5% per year. It seems like you're onto a good thing — I think you should be able to get better returns than that. So at what rate of return should you switch, in order to maximize your impact, from investing it to donating it? That's a good point, yeah. Because we probably have way more...
It's just that I haven't thought about the question of what the goal here is. In my mind, it's vaguely: help people live their best life. But there's no... Yeah, yeah. I mean, my best guess is that if you're genuinely going to be trying to use this platform to move people into effective altruism stuff, then to a first approximation, just try and grow this as much as possible. And... Try and what as much as possible? Try and grow the business and your profile as much as possible, and invest —
Oh yeah, here's a way of thinking about it. We spend money that is a loss — just a straightforward loss — in order to get more people engaged with effective altruism. There's no financial return. Insofar as you're doing something that has... yeah, okay, so the answer might well be that you should be spending all of the cash and more. You should be asking for grants — like, oh, can I grow it even further? — going to a foundation and being like, hey, this is the impact that we're having. Yeah, exactly. And we need more money to be able to do this thing.
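A rough, back-of-the-envelope sketch of the invest-versus-donate comparison being gestured at here — every number below is a made-up placeholder, not advice:

```python
# Back-of-the-envelope sketch of the trade-off discussed above; all numbers are placeholders.
def future_value(amount: float, annual_return: float, years: int) -> float:
    """Compound `amount` at `annual_return` per year for `years` years."""
    return amount * (1 + annual_return) ** years

cash = 1_000_000        # spare cash sitting in the business (hypothetical)
benchmark = 0.05        # rough S&P-style return mentioned in the conversation
reinvest_return = 0.15  # guessed return from reinvesting in growing the business
years = 10

# If reinvesting beats the benchmark, the money arguably does more good compounding in the
# business and being donated later; if it can't beat the benchmark, donating sooner looks better.
print(f"Donate after benchmark growth: {future_value(cash, benchmark, years):,.0f}")
print(f"Donate after business growth:  {future_value(cash, reinvest_return, years):,.0f}")
```

The crossover point the conversation is circling is simply whether the return from reinvesting beats whatever you could otherwise earn on the money before donating it.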
As for what's the best way of spending that — is it more researchers? Is it more marketing? I don't know. Because in my mind, I'm kind of thinking, oh, I want to run a 60% profit
margin. And I'm like, why the fuck do I want to run a 60% margin? No, wait, that doesn't make any sense. Yeah. It's an arbitrary number plucked out of thin air because it gives me the sense of psychological safety that, oh, right, there's a couple of million in the bank and I don't need to touch it. Okay. Yeah. Oh, right. Yeah. I mean, look at Amazon — what was its profit for years and years? It was zero. And why? It makes sense: which companies are going to get biggest? It's those that just ruthlessly reinvest what they've made. Okay. Yeah. And if you sit down and think about this —
We've got some people coming over for dinner tonight. And this is also, okay, yeah, this also is a difference between if you're trying to optimize for impact versus income. So you might think like, okay, I've got a couple of million in the bank now. I'm just going to be happy with that. Like I can just seek that out. Like additional money is not worth that much more. And because you've got like,
Is it 3 million YouTube subscribers? About that, yeah. Okay. So you're like, if I had 6 million, I'd have a bit more money, but it's not going to be a huge difference in my well-being. I'm not particularly motivated to grow the numbers. Exactly. Because I don't have an impact goal. Exactly. But now, if you're having impact, how much better are 6 million subscribers than 3 million? Yeah, way better. Probably about twice as good — maybe not exactly, but to a first approximation. And so...
Having altruistic impact in mind gives much stronger arguments for scaling, yes, than merely financial ones. Yeah, because that's the thing — the more we scale, the more headache I have, because it's more people. Oh yeah, yeah. And even though we've now got management and stuff, it's still not the same level of speed as me just doing stuff myself with a team of three people. Right, yeah. But obviously the scale helps. The thing that I've always been struggling with —
Currently, it's been a battle between scaling the business, but what's the point? Because I don't need more money. But if the point is actually impact, now there's a reason to do the thing. And in a way, just even entertaining that thought makes it feel like way more fun. Yeah, exactly. Let's get a team of 50 people if we can sustain it. Yeah, I mean, because also, I don't know, do you really want to have a lifestyle business? Which is just like, come on, it's not as cool as like...
I don't know. Like, what's your long-run aim? Like, I don't know. I'm not sure how to think about that, because part of me is like — I mean, I just would love to spend my time... If, again, I won the lottery, I would just spend my time reading, writing, teaching, learning about cool shit, making videos about it, making podcasts, writing books — which is all the stuff I do anyway. I just wouldn't bother making any more courses; I'd just put them online for free, because the only reason we make courses is to make money. Yeah. I don't know how to figure out this question of, what is my aim? Yeah. Okay. How should I go about that? Um,
But yeah, I mean, it's kind of interesting with YouTubers. Like, what's the real upside? I guess the real upside is you have a media company. It's like Participant — do you know Participant Media? No. So it's a fund, basically a fund that
funds movies that have social impact. Oh, okay. So do you know the film Contagion? Yes. That was, yeah, a social-enterprise-funded thing. It's by Jeff Skoll, one of the eBay co-founders. So similarly, the iPhone game Plague, the board game Pandemic. Oh, yeah. Why was there so much stuff about pandemics beforehand? Yep. Yeah. If only people paid more attention.
But yeah, you could do that at a much larger scale. Do you know Philipp Dettmer, Kurzgesagt? Oh, yeah. Yes, yeah, yeah. He's really into EA as well. We've been working with him a bunch recently. Oh, yeah, he's done a bunch of videos about longtermism-type stuff. Yeah, yeah, and it's going to increase — he's got more in the pipeline as well.
Yeah, you should definitely meet as well. But yeah, so it's worth thinking about like, yeah, what's the real upside? And I think probably it is just like, oh yeah, you're doing YouTube. That's one aspect of it. Hollywood is another aspect. Like, yeah, you've just got like this like huge kind of like media platform. This has been sick. Thank you.
Thank you so much. Absolutely wonderful. And we should definitely keep in touch. Absolutely. Yeah. Yeah. Thank you very much for coming on. Any, any final ask to the audience when they hear this? Appreciate the fact that like, if you're listening to this, you're probably in this immensely influential position. You live at this very unusual time in history when we have unprecedented opportunity to do good.
If you're listening, you may have your whole career ahead of you as well. This is a huge opportunity to make a contribution to some of the biggest problems in the world. So check out 80,000 Hours, read What We Owe the Future or my previous book, Doing Good Better. And yeah, let's together try and do as much good as we can. Amazing. Thank you so much. Cool. Thank you.