Wondery Plus subscribers can listen to Armchair Expert early and ad-free right now. Join Wondery Plus in the Wondery app or on Apple Podcasts. Or you can listen for free wherever you get your podcasts. Welcome, welcome, welcome to Armchair Expert. I'm Dan Shepard and I'm joined by Lily Padman. Hi. Let's start at the beginning. Angela Duckworth. Yes.
Gave us a few book recommendations. Yes. And one of them was A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. And I read the book and I absolutely loved it. And I've actually now read it two and a half times and been talking about it forever. And this is the author, Max Bennett is here. Max is an entrepreneur, a researcher. He started a company, Alby, an AI company. And
He's very tall and very, very charming. Very. And shockingly and impossibly smart because this isn't really his field and yet he's really done a master class on the evolution of intelligence. It's so fascinating and it's also broken down in a very like...
a linear way, which is nice to follow. The five breakthroughs. So fun. I had so much anxiety walking into this one because it's a very complex topic and it's a very dense book. And then he was masterful and wonderful at just walking us through very lay person style. It was great. Yeah, I love him. Please enjoy Max Bennett.
We are supported by Audible. We know you love audio content. Thanks for listening to the show. But if your ears are craving more audio, Audible is the place to go. I probably, in truth, spend more time on Audible than any other place. Any other app? Yeah, I'm listening every night for an hour before bed.
There's more to imagine when you listen. Whether you're searching for the latest bestsellers and new releases, or you want to catch up on a classic title, you can find it all in the Audible app. And as an Audible member, you choose one title a month to keep from their entire catalog. What are you listening to now? Well, I'm just finishing The Worlds I See
by Fei-Fei Li. It's so good and moving and I love it so much. I'm sad it's ending. Now, listen, new members can try Audible for free for 30 days. Visit audible.com slash DAX or text DAX to 500-500. That's audible.com slash DAX or text DAX to 500-500.
We are supported by Echo Dot Kids. Echo Dot Kids is a cute, smart speaker for Alexa made just for kids. Echo Dot Kids automatically filters explicit songs, so kids are always ready for dance parties and singing. And parents can rest easy knowing everything is kid-friendly.
By using voice commands, children can interact with the devices without a parent's help, fostering a sense of responsibility and independence. Alexa can help kids develop by establishing healthy morning and bedtime routines, complete with reminders for tasks like brushing teeth, packing school bags, or going to bed on time. I need those ASAP. Calvin listens to his Echo Dot before bed every night. He does. He does.
He'll ask to play Fleet Foxes or Local Natives and just listen to music all night. We have one of these in our house, and my family loves it. We've got the super cute little owl design. It's so sweet. Plus, purchase of the device comes with one year of Amazon Kids Plus, a digital subscription designed for kids age 3 to 12 to safely learn and explore. Endless fun for kids. Peace of mind for parents. Shop the device now at Amazon.com slash Echo Kids.
He's an armchair expert. He's an armchair expert. He's an armchair expert.
Here's the reason that halves are kind of worth it is inevitably, for me, I'm 6'2 and a half. Okay. I'll say I'm 6'2 and then I'm with another person that's 6'2 and I'm taller and they go, no, no, I'm 6'2 and a half. You're right. Or I say 6'3 and someone's 6'3, I'm a little shorter. Yeah, yeah. So it's the humbler one to... It's just like you kind of got to own it because inevitably someone will disagree with you. Yeah, that's right. How are you, my friend? I'm good. We already got called out on having a fake book.
I'm sorry. What book? Well, I was so excited because Stranger in a Strange Land is an incredible book. It's one of his favorites and we just put it there. It's just color coordinated. It's not faking that there's no text inside, right? I haven't looked. It's faking that we didn't read it. Stranger in a Strange Time is your favorite? Yes, Strange Land. Where is it? The left, the left, the left. The pretty red one. What is that, sci-fi? It's sci-fi. Okay. I would believe that for you. Okay.
Why? What does that mean? Well, you profiled me correctly. Let's start there. By the way, this is terrible. Oh, no. I know. I know. I know. I'm going to own my total anxiety and terror right now, which is I've read your book twice. It's so comprehensive and dense that I was like, yeah, I could go for another. And then in anticipation of this, I'm like, well, I don't really have to research much. I've read this book twice. And then I was like, no, this book's impossible to remember. Yeah.
I got to go back through it. I have to be honest, because this is what we do here. You are not what I was expecting. What were you expecting? Yeah. There's even more than what you're about to say. But continue. He's AI. Wow. That makes more sense. No, I knew we were interviewing you. I knew your name. I didn't look you up. And I knew what you wrote. The subject matter. Exactly. And I did know it was really comprehensive. So I did expect an older man.
a very professorial type. Yeah, Sapolsky type. But not as playful. I was like, oh, I hope this is...
animated. And then I walked in, I was like, oh my God, this is going to be fun. He's a six foot four babe. Exactly. Yeah, unexpected. I love Robert Sapolsky, so I think he's got a great look. Well, your book's often described as a mix between Sapiens and Behave, which is fair. Although Behave is such its own masterpiece. 100%. Incredible. And one thing I think is so inspiring about his work there is he integrated so many different fields.
That was absolutely an inspiration for me. He didn't only come from the space of neuroscience. He integrated a ton of psychology, evolutionary psychology into one comprehensive story, which I thought was really beautiful. Yeah, just the notion that he himself is an armchair primatologist that is the foremost expert on baboon behavior. Yeah. Preposterous. That's not even his field. He lived with baboons for a long time. Like he really put his money where his mouth was. Yeah, okay, so-
Similarly, your book does that. But even knowing a little bit more about this, I still expected to see that you would have had a PhD in neuroscience or perhaps computer science. Not the case. No. Let's start at the beginning. Ooh, beginning. And let's go through your pretty unimpressive academic background, if I can be honest with you. Oh, I love that.
Versus how profound your book is. Yeah, I like this. Makes me like you a lot. Where do we want to start? Where did you grow up? I grew up Upper West Side, New York with my mother. Single mom. Single mom. Great. A lot of time alone reading books by myself. Self-learning was a very standard part of my upbringing. Siblings? I have two half-brothers who I adore. And they're younger? Younger.
So mom remarried. Mom never remarried. I don't know if I should say this on the podcast. I think she's mostly sworn off men. That's fair. But very happy. She goes on world trips with her cadre of 70-year-old single women and they live the best life. My dad remarried, so I have a stepmom. There we go. I feel like I should introduce my mom, Laura, to your mom because she's in that phase of her life too. She's dating a lot.
My mom loves dudes. Yeah. She has not sworn them off. Yeah. She'll probably be on a date in hospice at some point. Yeah. Yeah. Yeah. Interesting date. Yeah. Yeah. They were a match made in heaven. Okay. So Upper West Side, any stepdads in the mix? No stepdads. Whew.
Okay, you dodged a big one. No, no drama there. Now you go to Washington University. It's probably an incredible school. I hadn't known that there was a Washington University in St. Louis, Missouri. Yep, yep. So how do you find that school? What is that school known for? How do you end up there? There were a bunch of schools that I was really interested in, but what I loved about WashU is how interdisciplinary it is.
Most schools, when you go into an undergraduate program, they force you into a single program. But WashU lets you take classes across the board. So I started studying physics and then I did some finance, economics, math. So it really lets you try everything, which I loved. But my first job out of school was actually in finance. Because your degree was economics and mathematics. Yep. I grew up in New York where you get this bug where you're supposed to be like in finance, be a lawyer, a doctor. I worked in finance for a year and did not like it. Yeah.
At Goldman Sachs? Goldman Sachs. Don't hate me. I left. I can share some funny stories. I'll need to check if I can share this publicly, but I think it's fine. Yeah. Well, you didn't kill anyone there. I didn't kill anyone. No, there's no deaths. But I should have known that I wasn't a good fit when after the 12-week training program, so they just put you through class for 12 weeks. The first day Friday was casual Friday. So I was like, great, casual Friday. And so I showed up in my shorts and a t-shirt and I was like, I'm so excited to have casual Friday. Casual Friday meant no tie. No tie.
Right. You went all the way. You thought it was beach day Friday. I thought it was beach day. I thought it was going to be casual. I'm going to meet my new friends. And when I showed up on the desk, they had security, just to mess with me, escort me out, put me in an Uber, drop me off at Brooks Brothers, and say, come back when you have a suit. Oh, wow. That was day one. Yeah, yeah, yeah. I love that they took you to Brooks Brothers. Yeah. Did they give you a credit card? No, no, no. Oh, shit.
Okay, so you're one year there. What were you actually doing? So I was a trader and I traded Latin American interest rate derivatives. Okay, great. Latin American interest rate derivatives. Explain that further if you're interested, but I'm going to guess no. I kind of just want to guess, which is it's not even based on playing the interest rate game. You're not buying and selling their money.
There's a product based on when that moves. Exactly. Another complicated product. Oh, man. Yes. The only thing I really know about derivatives is I got weirdly interested in credit default swaps post-2008. There you go. And I was quite shocked to learn, and I think most people would be, that you can basically get an insurance policy on
securities that you don't own. That is the premise of a credit default swap. That's preposterous. Yep. And the notion that you'd be heavily incentivized to make a company go out of business is also really crazy to me. Yep. Okay.
It's a problem. And we saw the manifestation of those problems. So the people that they brought in to work on derivatives were a lot of you mathematicians? A lot of them have math backgrounds. Yeah, yeah. Okay, so you did one year of that. And was it hard to quit because you've kind of gone to where ideally you'd want? It wasn't for me. There's nothing wrong with people who love that at all. I don't want to disparage that. But I learned a lot about myself, which is I...
I'm much more of a collaborative person. I like working with the person sitting next to me. I don't want to compete with the person sitting next to me. I'm much more interested in long-term projects where we're working together towards a common goal. And in retrospect, it's silly that I thought I would like being a trader, which is the most cutthroat competitive thing ever. So it just wasn't who I was. And so it was actually quite easy to leave. How about the bro scene? Were you getting invited out to like rock? Yeah, yeah, bros.
Yeah, yeah, yeah. And that's also so not me. I grew up with a single mom. I'm trained to be very averse to that. Cocaine and strip clubs wasn't a natural fit. It wasn't a natural fit for me, no. So when you leave, do you know immediately what you're going to do or do you have a period of trying to figure out what you're going to do? I didn't know this was going to be like a whole life story. This is great. I think we care about them.
Well, I can tell you why. Okay. And I think Sapolsky is a great person. My other favorite one is Sam Harris, which is, I think a lot of people know a ton about a lot of things and they're not really curious why that was even appealing to them or they were driven in that direction.
And I find that element to be really important because I think there are certain concepts that make us feel safe given our own background and childhoods. And for whatever reason, those are very comforting to us. And then we just pursue them. And so I don't know. I just feel like that should always be a part of the recipe of why you would land there. Makes sense. Okay. So I got a job.
I was an unpaid intern at this company called Techstars, which is a startup accelerator. And so I was like the free business help for all these startups going through the program, which was hilarious because I had no business experience. But for whatever reason, they thought I did. And I met these two very close friends of mine now who added me as sort of the third co-founder of this business idea. That's when I got into AI and that's where I spent nine years of my career. This is BlueCore? This is BlueCore. BlueCore, as I understand it, it uses AI, but in the column of marketing. Yep.
Walk us through how that actually functions. Sort of my whole career in business has been about empowering companies to compete with Amazon. Uh-oh. That's our new home. That is our boss. Okay. But let it rip. Let's see if we can get in trouble with the boss. You're honest here. Yeah. Daddy trouble. Ooh. Thank you. Our corporate daddy.
Amazon has obviously an incredible business and they have an incredible amount of technology to help with their marketing. So they do an incredible job personalizing marketing. The emails they send are based on all of the things that you've done on Amazon. Their recommendations are incredibly intelligent. And so most other brands and retailers did not have that technology. And is that a two-prong issue? One, you don't have the actual tech.
Two, you don't have the database of all their info. Both of those are problems, which makes the AI problem harder because you need to make an AI system work with less data. And so what BlueCore was all about was helping brands like Nike, Reebok compete with an Amazon. Isn't it funny to think of Nike as like the underdog, the David in the Goliath story? Yeah.
And so that was where my original interest in the brain came from, because when working with machine learning systems, you run into this thing called Moravec's paradox. So Hans Moravec was a computer scientist who made the following observation. Why is it the case that humans are really good at certain things, like playing basketball or playing guitar or doing the dishes, that are so hard for machines to do?
And yet things that are really hard for humans, like counting numbers really fast and doing arithmetic, are so easy for computers. And this is classically called Moravec's paradox. Like, why is that the case?
And so that was the beginnings of my interest in the brain. The book starts with the most intriguing premise for me, which is what we're trying to do now in this phase of our technology is to get these computers to have intelligence that either matches ours or exceeds ours or is similar to ours.
But what's really funny is we don't know how our intelligence works. So we're trying to replicate a system that actually we don't really truly know why and how we're intelligent. And I think that's really fascinating. How are you going to recreate something that you don't actually understand in the first place? So it's just a very intriguing jumping off point.
And then you break this story. These are co-evolving narratives. One is the birth of AI and where we're at today. And then one is the birth of the first brain and the first intelligence and the first neurons.
And you break it up into five really helpful sections of evolution where these big leaps happened. Was that an intuitive way to frame this? So my original intent was actually never to write a book. I started just by trying to understand how the brain works. The whole book wouldn't exist without a huge dose of being naive because I thought that I was going to buy a neuroscience textbook, take a summer and read it, and I'd understand how the brain works. Yeah.
Yeah, yeah. I would have thought that too. Yeah, yeah, yeah. And after reading a textbook, I just realized, wow, we have no idea how the brain works. So then I bought another textbook.
And this process continued for about a year and a half until I had unintentionally sort of self-taught myself neuroscience. And yet I still felt like I had not satisfied the itch of understanding how the brain worked. So that led me to start reaching out to various neuroscientists because I wanted to collaborate with them. And no one responded to my emails, to your point, lack of impressive academic background. No, I think you should apologize for that at this point. No, not at all. It's true. No, I was expecting Harvard and Stanford and PhD.
It's a very good school. I thought you were going to say like Arizona State. Hey, you're on a real run about bagging on Arizona State. Sun Devil Stadium, Flaky Jakes, go-karts. No, listen, they have a floaty thing around the whole campus. A lazy river. I think it's okay for us to say that. Anyway, sorry. Sorry to drag you through the mud on her. So no one would respond to me. So what I decided to do, because I started coming up with these ideas, which were not based on evolution. Actually, my original idea is,
We're trying to reverse engineer this part of the brain called the neocortex, which I can get into because a lot of fascinating things with that. And so I had an idea for a theory of how that might work. So I decided the only way I was going to get people to respond to me was I'm going to submit it to a scientific journal because they'll reject it, obviously, who am I?
but they'll have to at least read it to tell me what was wrong with it. And this is where this whole unlikely journey began because to my surprise, it actually got accepted. He wrote a theory and it was peer reviewed and it was published. And I won what's called the reviewer lottery. Just by luck, one of the reviewers is a very famous neuroscientist named Karl Friston. He became sort of an informal mentor of mine.
And then sort of this started cascading into this self-directed academic pursuit. How many years ago was that? That was four years ago. Oh my God, this is so accelerated. Yeah. How old are you? I'm 34. Oh no! Was I supposed to be younger or older? Oh, way older. Stanford, old. Younger than me is upsetting. Yeah, you're a loser now. You're a big boy. I know.
Oh, that's rough. 34. Okay. Wow, so you were 30. But you've had a few articles published now. What was your novel proprietary theory? So with the neocortex, and in large part, I don't think this approach is actually going to work that well, which is why I pivoted to the evolutionary one. But the broad theory, it's a little nuanced, but the neocortex is so fascinating because if you look at a human brain, all the folds that you see, if you remember an image of a brain, that whole thing is neocortex. And what it actually is, is this sort of
sheet that's bunched together. What we've always thought is different regions of the neocortex do different things. So the back of your brain is your visual neocortex. If that gets damaged, you become blind. There's a region of neocortex that if it gets damaged, you can't speak. There's a part of neocortex that if it gets damaged, you can't move.
And so one would think that this isn't really one structure. It's actually a lot of different structures, each with a unique structure that would facilitate the seeing or the hearing. Exactly. Yeah. And what is mind blowing is if you look under the microscope, the neocortex looks identical everywhere.
What? And somehow this one structure does all of these different things. And this was when I started becoming really fascinated with this topic. And can be relocated and move, right? You can co-opt areas of the neo. It's interchangeable? Well, if one area is destroyed, you can relocate into another section of neocortex to perform that task. You clearly remember the book. I'm impressed. Oh, right. Well, we'll get into the columns at some point. So the best study that demonstrates this, it's a very famous study where they took ferrets and...
and they rerouted their visual input from the visual cortex into their auditory cortex. So if it is the case that these regions are different, then they shouldn't be able to see appropriately. But what they found is they could see pretty much just fine. And these areas of auditory cortex were just repurposed to process vision. And this is also why after strokes, you can regain abilities. The region of neocortex that's damaged doesn't actually grow back. There's no repair, it's just relocation. Correct.
Take a second. What a fucking thing, the neocortex. What is it? Me and you, I mean. Max has known for four years. I know I'm going to say how long I've known, but we'll just say, yeah, I'm just learning now. That was a full humble. Um...
Okay, so that really sparked your curiosity. And what paper do you write based on that? So I wrote a paper that was trying to understand how the neocortical column, so what we now think what the neocortical sheet actually is, is a bunch of repeating columns called a microcircuit. And there's a big mystery to figure out what does this microcircuit do that makes it so repurposable? And there's a few theories that we can go into. But my question in particular was, how does it remember sequences?
So how is it the case that you can hear a sequence of music one time and you can hum it back to someone? There's a sort of mathematical framework where I theorize how it might do that. And that part is still a great mystery, right? We're getting into memory too. Where is that at? We can't observe it. We can't see anything etched into a neuron. Presumably there's new connections and a new pattern of connections. That pattern of connections somehow represents that sound we heard. There's pretty strong consensus in the neuroscience community that the physical instantiation of memory,
is in changing the synaptic weights. So the connections between neurons either delete or more get formed or the strength changes. But the big question is for a specific memory, like a memory of your childhood or the memory of how to do something, where that lives is more of an open question because it doesn't seem to live in any one place. It seems to be distributed across many different regions of the brain.
And there's components to a memory, right? There's an emotional component. There might be a smell, there might be a color, and those are all drawing from all areas of your brain to come together for this one memory. And one way we feel there's a lot of confidence that the brain is different than computers. In computers, there's something that could be called register addressable memory. The way you get a memory is you need to know the code for its location on a computer chip. You lose the code, you lose the memory.
In the brain, there's something called content addressable memory, which is the way you remember something is you give it a tiny piece of the overall memory and the rest is filled in. Oh, wow.
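To make that pattern-completion idea concrete, here is a minimal sketch of content-addressable memory using a classic Hopfield-style network. This is only an illustration, not anything from the episode or the book; the patterns, sizes, and function names are all invented.

```python
import numpy as np

# Content-addressable memory, Hopfield-style: memories are stored in
# the connection weights, and a partial cue settles into the closest
# full memory. Purely illustrative.

def store(patterns):
    """Hebbian rule: strengthen connections between units that are
    active together across the stored patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Start from a partial cue and let the network fill in the rest."""
    s = cue.astype(float)
    for _ in range(steps):
        new = np.sign(w @ s)
        s = np.where(new == 0, s, new)  # keep the old value on ties
    return s

# Two "memories" stored as +1/-1 patterns.
memories = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1,  1, 1,  1, -1, -1, -1, -1],
], dtype=float)
w = store(memories)

# Hand it a tiny piece of the first memory (unknowns set to 0)...
cue = np.array([1, -1, 1, 0, 0, 0, 0, 0])
print(recall(w, cue))  # ...and the rest gets filled in.
```

Note that nothing is ever looked up at an address; the cue itself pulls the full pattern back out of the weights.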
And this is why when you smell something, a memory pops back into your head or you forget how to play a song, but you start playing the first chord on a guitar and then the rest starts flowing to you. Okay, so now I'm going to have to consult my notes, which I try not to do, but here we are. Let's first talk about the complexity of the brain, which is 86 billion neurons and over 100 trillion connections.
I mean, that's hard to really comprehend. Within a cubic millimeter, so if you look at a penny, like one of the letters, there are over one billion connections in the brain. No. That's boggling, right? Okay, so the way you decide to lay out the book is instead of trying to reverse engineer how the brain works so that we can apply it to how the AI should mechanistically work...
Let's start at the very beginning and then just take the ride up the ladder because everything that's already happened in evolutionary terms to previous iterations, animals, amoebas, it's all still here, right? I think that's the first concept that might shock people: virtually everything that evolved 600 million years ago, we still have it. And then we've added things on top of that. Evolution is very constrained, right?
So every iteration of an evolutionary step, in other words, another generation, it can't redesign things from scratch. It has to tinker with the available building blocks. And that's why you see that our brains aren't that different than fish, even though our common ancestor with fish is 500 million years ago. Because once a lot of these structures are in place, it's very hard for evolution to find solutions that reverse engineer them. So usually it just tinkers with what's already there.
Even bird wings are a repurposing of arms. It didn't just reinvent wings. It repurposed an arm over time. You even go back and say fins become the feet and the feet become the wings. It's all one steady line we can follow back. So what is the first version of what we might think of as a brain? What's the first animal that has neurons? Okay, so interestingly, there were neurons before there was a brain.
So if you look at a sea anemone, which side note, because most of my learning was self-taught, one of the most funny parts of me entering the neuroscience community is I pronounce words wrong all the time. Oh, good, good. I do too. Yes, because you've only seen it in print. Only seen it in print. But this one's actually, I don't have a good excuse for. I used to say anemone until my friend was like, did you ever watch Finding Nemo? It's not anemone. Anemone, a sea anemone, jellyfish. These are creatures that have neurons, but no brain. And they have what's called a nerve net.
Their skin has a web of neurons that are implementing a reflex, but there's no central location where decisions are made. Each arm or tentacle is independent largely from the other tentacles. The first question is why did neurons evolve? And there's a really interesting diversion between fungi and animals that I find fascinating. Yeah, I love this. Because we are actually not that different than fungi, even though they look very different.
Because at the core, the way we get energy is through respiration. In other words, we eat sugar from plants or other animals and we breathe oxygen and release carbon dioxide. We combine sugar and oxygen to make fuel. Exactly. So we can't make our own fuel. Plants can make their own fuel.
They just need sunlight and water and they're fine. We can't survive without plants creating sugar for us. So fungi and animals very early in evolutionary time took diverging strategies to get sugar. Fungi took the strategy of waiting for things to die.
I just want to say one thing on this because it's really fascinating. It's the birth of predation. So prior to this, and the earth is four and a half billion years old. This is only hundreds of millions of years ago. Organisms that live, they just lived and they got energy from the sun and they didn't eat anything. Fun.
Fungi are like the first things that need other organisms to exist. They're going to consume other organisms for their fuel source. That's really crazy that that even happened. Totally. And so even though there might've been some minor forms of predation before oxygen was around, what's called anaerobic respiration, trying to do respiration without oxygen, is much less energy efficient. But when oxygen came around,
You can get way more energy by consuming the sugar of others. And that's really when you got the birth of predation. So fungi, they survive by having fungal spores all over the place. They're around us all the time. And when something dies, the fungus then flourishes and grows. And this is why when you leave bread out, there's fungal spores everywhere and they'll just start eating it. The gross filaments emerge. Animals took a very different route.
Their route was to actively kill other things. And so this is where neurons are really important because even the very first animals, and we don't know what the first animals actually looked like. There's theories, but the best model organism is a sea anemone. They probably sat in place. They had tentacles and they waited for things to fly by and then they would just capture them and bring them into their stomachs.
They just waited. They can't see, they can't hear, they can't smell, nothing. There's no data coming in. They're just existing and hopefully things will bump into them and they'll eat. They have data, which is touch. So like if something touches their tentacle, then they pull it in. So that's one of the main reasons we have neurons. And with that, there's a bunch of commonalities to your point about...
how evolution is constrained, our neurons are not that different than jellyfish neurons, which is kind of mind blowing, which is the unit of intelligence between us and a jellyfish is not really much different. What's different is just how it's wired together. Some of the foundational aspects of neurons, which are, they come in excitatory and inhibitory versions. So some neurons inhibit other neurons from getting excited and other neurons excite other neurons. You see that in a sea anemone. Why does that exist in a sea anemone? Because
Because even in a basic reflex, you need to have an if this, not that sort of rule. If I'm going to open my mouth to grab food into it, I need to relax the closed mouth muscles. If I'm gonna close my mouth, I need to relax the open mouth muscles. So you need this sort of logic of one set of neurons inhibiting another set of neurons.
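As a toy illustration of that "if this, not that" wiring, here is a tiny sketch of two cross-inhibiting neuron pools. The numbers and names are invented, and this is nowhere near a real biophysical model.

```python
# Two neuron pools where each drives its own muscle and inhibits
# the opposing pool -- the "if this, not that" reflex logic.
# Illustrative values only.

def mouth_reflex(food_touch: bool, threat_touch: bool) -> str:
    open_drive = 1.0 if food_touch else 0.0
    close_drive = 1.0 if threat_touch else 0.0
    # Cross-inhibition: each pool suppresses the opposing one.
    open_net = open_drive - 0.8 * close_drive
    close_net = close_drive - 0.8 * open_drive
    if open_net > close_net:
        return "open mouth (close-muscles relaxed)"
    elif close_net > open_net:
        return "close mouth (open-muscles relaxed)"
    return "do nothing"

print(mouth_reflex(food_touch=True, threat_touch=False))
```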
It's very binary, right? It's like on or off, exciting or calming. And that's a fun parallel with just how computers work, up until whatever quantum happens, I guess. But that is also the basis of computing, right? It's ones and zeros. It's on and off. It's binary. There's a really rich history in computational theory and computational neuroscience about the degree to which neurons and computers encode things similarly. There is a big debate around that. Some neurons we know encode things in their rates, right?
So it's not really ones or zero, it's how fast it's firing. So pressure is like this. So the reason you can tell the difference in pressure is because your brain is picking up the rate that a specific sensory neuron in your finger is firing. But we also know other parts of the brain are clearly encoding things in a different way than just rates. Other parts of the brain seem to be encoding things more like a computer. And so it's a big mystery sort of how it all comes together, but the brain's probably doing both. Okay, fascinating. So-
Now, as organisms are going to consume other organisms and there are versions just waiting to get lucky and have food bounce into them and then eat it. Somehow some new organisms develop or they decide, no, we're going to go in search of the food. And this is
The first brain. The first brain. And so the first brain is all about locomotion and steering. Bingo. Yeah, break that down for us. One thing that's fascinating about this is this is an algorithm that the academic community would call taxis navigation. I think steering is a simpler word for it. And this algorithm exists in single-celled organisms. So how does a bacteria find food? Bacteria has no eyes or ears. A bacteria doesn't have complex sensory organs.
All it has is a very simple sort of protein machine on its skin. And if it's going in the direction of something that it likes, it keeps going forward. And if it detects a decrease in something that it likes, it turns randomly. And this takes advantage of something in the physical world, which is if I place a little food in a Petri dish,
the chemicals from the food make a plume around it. And the concentration of those smells are higher, closer to the food. So if I have any sort of machine that detects an increasing concentration of this food, if I just keep going in that direction, eventually I find it. Right. And if you turn and it decreases very quickly, you'll know, wrong direction. You turn again, it decreases again, I turn again.
Really quickly, you're going to run through the trial and error of it. Exactly. It's like playing a hot-cold game. So that works on a single cellular level, but on a large, not large to us, but from a single cell, very large scale of even a small nematode, which is very akin to the very first animals with brains 600 million years ago, you can't use the same machinery that a single cell uses because they have these tiny little protein propellers that can't move an organism with a million cells. So this algorithm was recapitulated or recreated in a web of neurons.
So the same thing happens, but it doesn't happen in protein cascades within a cell. It happens with neurons around the head of a nematode or worm-like creature that detect when the concentration of a food smell increases and drive it to go forward. And another set of neurons that detect a decrease and drive it to turn randomly.
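Here is a minimal sketch of that steering (taxis) algorithm as a run-and-tumble loop. The food position, gradient shape, and step sizes are invented for illustration.

```python
import math
import random

# Run-and-tumble steering: keep going while the smell gets stronger,
# turn randomly when it gets weaker. All constants are made up.

FOOD = (0.0, 0.0)

def concentration(x, y):
    """Smell is strongest at the food and falls off with distance."""
    return 1.0 / (1.0 + math.hypot(x - FOOD[0], y - FOOD[1]))

x, y = 4.0, 3.0
heading = random.uniform(0, 2 * math.pi)
last = concentration(x, y)

for step in range(5000):
    x += 0.1 * math.cos(heading)
    y += 0.1 * math.sin(heading)
    now = concentration(x, y)
    if now < last:  # smell weakened: tumble (turn randomly)
        heading = random.uniform(0, 2 * math.pi)
    last = now      # smell strengthened: keep running forward
    if math.hypot(x - FOOD[0], y - FOOD[1]) < 0.2:
        print(f"found the food after {step} steps")
        break
```

Nothing in the loop knows where the food is; going straight when the signal improves and turning randomly when it worsens is enough to find it.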
And what's so cool is the point of the very first brain is you have to integrate all this input to make a single choice. And they've tested nematodes, which are good organisms for experimenting with what early brains were like, because they're so simple. And you can create a little Petri dish and you put a bunch of nematodes on one side and you put a copper line in the middle. And nematodes hate copper for an esoteric reason: it messes with their skin. And put on the opposite side some food. Does it decide to cross the copper barrier? Yeah.
And so the very first brain had to make these choices. And it depends on two things, which would make a lot of sense. One is the relative concentration of each; the more food, the more of them are willing to cross. And the other is how hungry they are. Mm.
The hungrier they are, the more willing to cross. Yeah, if the other option's death at that point. If you're full, I don't need to cross. Exactly. If I'm about to die, there's nothing at risk. Are they going to die anyway from the copper? No, the copper is just uncomfortable. Oh, I see. Yeah, yeah, yeah. Wow. Makes them cranky. Yeah. Is the medical and scientific term. Stay tuned for more Armchair Expert, if you dare.
Ryan Reynolds here from Mint Mobile. With the price of just about everything going up during inflation, we thought we'd bring our prices down. So to help us, we brought in a reverse auctioneer, which is apparently a thing. Mint Mobile Unlimited Premium Wireless. Give it a try at mintmobile.com slash switch.
$45 upfront payment equivalent to $15 per month. New customers on first three-month plan only. Taxes and fees extra. Speeds slower above 40 gigabytes. See details. This Halloween, ghoul all out with Instacart. Whether you're hunting for the perfect costume, eyeing that giant bag of candy, or casting spells with eerie decor, we've got it all in one place.
Download the Instacart app and get delivery in as fast as 30 minutes. Plus, enjoy $0 delivery fees on your first three orders. Offer valid for a limited time. Minimum $10 per order. Service fees, other fees, and additional terms apply. Instacart. Bringing the store to your door this Halloween.
With Credit Karma, finding the right credit card for you is easy. Our app analyzes user profiles to suggest personalized recommendations. Visit creditkarma.com today to explore cards tailored to your needs. Credit Karma, simplifying your financial choices. ♪
Okay, so that sounds so simple. I want to talk about Roomba because that too is very interesting. I think it's in this realm. But just right out of the gates, I would say there's a much bigger implication to what you just laid out, which is
I'm detecting more of that food, I move towards it. I'm detecting less, I turn. Inadvertently, that's created good and bad. The very foundation of good and bad, and the way that I'm constantly lamenting that we're so drawn to binary, it's so appealing, and we follow people who seem to know which of the two binary options is the correct one. It's like our Achilles, our binary wiring. It's from the very jump. Good and bad. There's less food or there's more food. That's how simple the world is.
And we're inheriting all of those neurons in that evolution. And yeah, that's a hard thing to transcend. Yep. We'll see as we go forward in evolutionary time, there are aspects of humanity that are not binary in that way. But evolution does seem to have started in a binary way like this. And one interesting thing about the brain of a nematode. So a nematode, the most famous one is called C. elegans, where people study this nematode a lot. I have a poster of him on my wall. Okay, yeah, he's a great guy. He's very famous. Oh. No. No, that's it. Oh.
Don't trick me. My brain is already working really hard right now. He was setting it up though like he was the Brad Pitt of the nematodes. You got him, you C. elegans. He, oh my God, he. C. elegans have neurons. So what's interesting is when we see things in the world, our neurons in our eyes encode information in the world. It goes into our brain and then elsewhere in the brain, we decide if it's quote unquote good or bad. The human brain is much more complicated.
But a nematode brain, whether or not something is good or bad, is directly signaled by the sensory neurons themselves. So the neuron that detects smell directly connects to the motor neuron for going forward and is directly sensitive to how hungry the nematode is. So in these early brains, there was only good or bad. There was no measuring the world in the absence of putting it through the lens of whether it's good or bad. Right.
So how does emotion originate in this phase? So this is one of the most fun parts of doing this research. I found this to be one of the most elegant things that I had not seen discussed in the sort of comparative psychology world of studying other animals. In humans, there's two neuromodulators we're probably all familiar with, dopamine and serotonin. And we hear a lot about these two neuromodulators. They do very complicated things in human brains.
But by and large, we know that dopamine tends to be the seeking, pursuit, want more chemical. And serotonin is more of the satiation, relaxation, things are okay chemical. So let's go back to a nematode and see if we can learn anything about the origin of these neuromodulators by seeing those two chemicals in their brain. And what you see is something completely beautiful.
Their dopamine neurons directly come out of their head and detect the presence of food outside of the worm. So it detects food outside, floods the brain with dopamine and drives a behavioral state of turning quickly to find food nearby. Serotonin lives in its throat.
and it detects the consumption of food and it drives satiation and stopping and sleep. Digestion. Digestion. So this dichotomy between these two neuromodulators, one for there's something good nearby, quickly get it, and another for everything is okay, you can relax. You see it in the very first brains. Whoa, that's-
That is crazy. Yeah, it's wild, right? Wow. Good time to talk about Roomba. Let's talk about Roomba. Because Roomba is so impressive. The first time you see a Roomba, it's vacuuming someone's home. Who's in charge? How does it know? And there were really complicated attempts, right, to create this self-vacuuming device that didn't work. Give me the history of Roomba a little bit. So, Roomba. Tell me about Roomba. Oh.
So Roomba was created by this guy, Rodney Brooks, who is actually an MIT computer science roboticist. He
did a lot of writing about the challenges with trying to implement AI systems by reverse engineering the brain for exactly the problems we were talking about, which is the brain is so complicated. He gives this great allegory. I'm going to botch it a little bit, but the general idea is suppose a bunch of aerospace engineers from the year 1900 went through a little portal and woke up in the modern world and were allowed to sit in an airplane for five minutes and then they were sent back.
would they correctly identify the features of an airplane that enables flight? And he argues no, because you would look at the plastics, you would look at the material on the edge of the plane, and you'd be confused by the actual features that make flight possible. And so he thinks that's the same problem with peering into the brain as it is now. We're at risk of thinking certain things are important when they're in fact not. Right.
Right, right, right, right, right. So he started, interestingly, I don't think he thought about it this way, but the serendipity is interesting. His robot, the Roomba, works not that differently than the very first brain because he decided to simplify the problem dramatically. And he realized you don't really need to do many complicated things to create a great vacuum cleaner. All it has to do is turn around randomly. And when it hits a wall, it turns randomly. It'll keep going. And it'll just keep doing that. Right.
And then eventually it'll get around the full room. I mean, there's a slightly more complicated algorithm it uses, but by and large, it's the same thing. And most interestingly, the more modern Roombas have something called dirt detect, where if it detects dirt, it actually changes its behavior for a minute or so, and it turns around randomly. Why? Because the world is clumpy. If you detect dirt in one area, it's likely there's dirt nearby. This is exactly what nematodes do. Right.
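Here is a minimal sketch of that random-bounce strategy, with a "dirt detect" mode that mirrors the nematode's dopamine-driven local search. The grid, probabilities, and step counts are invented for illustration, not iRobot's actual algorithm.

```python
import random

# Random-bounce coverage with a "dirt detect" mode: dirt is clumpy,
# so finding some triggers a burst of tight random turning nearby.

W, H = 12, 8
dirt = {(x, y) for x in range(W) for y in range(H) if random.random() < 0.2}
x, y = 0, 0
dx, dy = 1, 0
cleaned = set()
dirt_mode = 0  # steps remaining of tight local search

for _ in range(2000):
    cleaned.add((x, y))
    if (x, y) in dirt:
        dirt.discard((x, y))
        dirt_mode = 15           # dirt is clumpy: search nearby
    if dirt_mode > 0:
        dirt_mode -= 1
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    nx, ny = x + dx, y + dy
    if 0 <= nx < W and 0 <= ny < H:
        x, y = nx, ny
    else:                        # hit a wall: turn randomly
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])

print(f"covered {len(cleaned)} of {W * H} cells; dirt left: {len(dirt)}")
```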
They flood the brain with dopamine and they change their behavior to turn quickly in search of the local area. Now I need to watch more docs on Roombas, but I would imagine the first people trying to crack this were thinking the thing has to map the room first and then it has to come up with a very efficient protocol for going back and forth and not overlapping. Maybe the goal also was too steep. The Roomba is clearly going over stuff it's vacuumed a bunch. Is that the case? Yes. But who gives a fuck? It'll eventually get it all. Exactly.
Because you could also start with maybe too high of a goal. Yeah, it's like it needs to be so efficient. Yeah, like the way this must work is it must know every inch and then it'll design the most efficient route through here. And that's how it'll work. But that's not how the organisms were working. Okay, now breakthrough number two, reinforcement learning in the first vertebrates. And you were going to tell me when those came along.
And maybe tell us about the Cambrian explosion. Yeah, so in the aftermath of the first brains emerging, the descendants of this nematode-like creature went down many, many different evolutionary trajectories. And eventually they get caught in this predator-prey feedback loop. And so this is where evolution really accelerates.
because every new adaptation that a predator gets to be a little bit better at catching prey creates more pressure for the prey to get another adaptation to be better at getting away from the predator, which puts more pressure on the predator, and you get this back and forth feedback loop. So the Cambrian explosion is around 500 to 550 million years ago. We see an explosion of animals in the ocean. Before that, the Ediacaran period, very sparse fossils from that period. Afterwards, you see animals everywhere. That era is mostly run by arthropods.
So if we went back in time and we were in a little submarine around the Cambrian, our ancestors would be hard to spot. It was mostly these huge insect- and crustacean-like looking creatures. And our ancestor was a small, probably four-inch-long, fish-looking creature, which was the first vertebrate. And there's a bunch of brain structures that emerged there. And maybe we set up
Marvin Minsky at this moment too, because as we were saying, the book so elegantly parallels the different evolutions. So really 1951 is the very beginning
First time we hear artificial intelligence. So what's Marvin Minsky up to? Marvin Minsky is one of the sort of founding fathers of AI. In the 50s, there was a lot of excitement around using computers to do intelligent tasks. And of course, there was a winter after this, what's called an AI winter, because a lot of the promises from the 50s didn't come to fruition. But in relation to vertebrates, one of the interesting things...
that Marvin Minsky tried was training something through reinforcement. And so this had been discussed in psychology since the 1900s. We know that you can train a chicken to do lots of really complicated things by giving it a treat when it does what you want. Like Pavlov's. Harnessing the reward system. The reward system.
So Pavlov was about sort of learned reflexes. Thorndike was the one who saw that you could teach animals to do really interesting things, not just have a reflex. Oh, I see. Okay. So Thorndike's famous experiments were these puzzle boxes. You put like a cat in a box and you put a yummy food treat, like some salmon, outside of it. And the only way it can get out of the box, it's fully see-through, is it has to do some behavior. Let's say it has to pull a lever or has to poke its nose in something. Thorndike theorized
that these animals would learn through imitation learning. So once one animal figured it out, he let the other animal watch it.
And this did not happen. They never learned through imitation. That's a primate thing. Right. But he found something else. He found, oddly, that over time, the speed to get out of the puzzle box just slowly went down. And what he realized was happening is it was just trial and error. When they got out, they just became slightly more likely to do the behavior that happened right before that got them out. They didn't really understand how they were getting out, but through trial and error, they just slowly were getting more likely to do these behaviors.
So that had been an idea from the early 1900s. And we would call that reinforcement learning. Reinforcement learning. He never used that word, but we would call that reinforcement learning now. And Marvin Minsky had this idea of what happens if we try to train a computer to learn through reinforcement? So he had a sort of toy example of this where he tried to train an AI system to navigate out of a maze through reinforcement.
And although it did okay, he quickly ran into problems with this. And he identified this very astutely as the problem of temporal credit assignment. And so, although those are a bunch of annoying words, it's actually a very simple concept. If we played a game of chess together, and at the end of the game you won, how do you know which moves to give credit for winning the game?
You make a bunch of bad ones, you make a bunch of good ones, and then the net result is you win. But how the fuck would a computer know which of those- Was the key. Exactly. Yes. Yeah.
It's so complicated and so long and so multifaceted, you can't point to anything. Exactly. Well, yeah, because it wouldn't just be one, right? It wouldn't just be one. And now is where is a fun time, I think, to introduce just how many possibilities there are in checkers. So in the game of checkers, there are 500 quintillion possible games of checkers. That's not even a word. Yeah.
That's not a real word. Which is nothing compared to how many possible games of chess there are that's 10 to the power of 120, which is more than there are atoms in the universe. So the computer would have to have a data set much vaster than the total amount of atoms in the universe to have played every single scenario and know. Yeah. So it's not possible to what you would be describing as like brute force search every possible game to decide the best move.
And it doesn't work to simply reinforce the recent behaviors right before you win, because the move that won you the game might have been very early. And so this was a problem in the field of AI for a very long time until this guy, Richard Sutton, I've met him now, he's an amazing guy, invented a solution to this in AI, which turns out is also how vertebrate brains solve the problem.
And that's what's so cool. This is the actor and the critic. Yes. This one to me is one of the concepts I struggled with a little bit more. So lay out the actor and the critic. Richard Sutton's idea comes from a little bit of intuition. So let's imagine we're playing a game of chess. You could imagine partway through the game, I make a move.
that makes you realize all of a sudden, holy crap, I'm in a way worse position right now. The points didn't change. Yeah. But all of a sudden you go, holy crap, I made a mistake. And he asked what just happened there. In that moment when you have an intuition that the state of the board just got worse for you, maybe that's something that the brain is using to teach itself. So his idea of an actor and critic is the following. Maybe the way reinforcement learning happens in the brain is there's actually two systems.
There's a critic in your brain when you're playing a game of chess that's constantly every state predicting your likelihood of winning. And then there's an actor that given a state of the world predicts the next best move. And the way that learning happens is not when you win the game.
It's when the critic thinks your position just got better. So when the critic says, oh, I think your probability of winning just went up from 65% to 80%, that's what reinforces the actor. Interesting. And what's so weird about this is the logic is kind of circular because if you start training a system from scratch,
a critic's probability of winning depends on what the actor is gonna do. And so the critic should be wrong a lot. And yet there's this magical bootstrapping that happens when you train a system like this, that the critic gets better over time, the actor gets better over time. They make each other better. Exactly. Wow. It's complicated though, right?
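For the curious, here is a minimal tabular actor-critic sketch in the spirit of Sutton's solution, on an invented toy corridor game. All the state counts, parameters, and names are illustrative, not from the episode or the book.

```python
import math
import random

# Tabular actor-critic on a toy corridor: start at state 0, reaching
# state 5 wins. The critic learns a value ("chance of winning") per
# state; the actor is reinforced not by the final win but by the
# critic's moment-to-moment change in outlook (the TD error).

N = 6                                    # states 0..5; state 5 pays reward 1
V = [0.0] * N                            # critic: value estimate per state
prefs = [[0.0, 0.0] for _ in range(N)]   # actor: preference for left/right

def pick_action(s):
    """Softmax over the actor's action preferences."""
    weights = [math.exp(p) for p in prefs[s]]
    return random.choices([0, 1], weights=weights)[0]

alpha, beta, gamma = 0.1, 0.1, 0.95      # learning rates and discount

for episode in range(2000):
    s = 0
    while s != N - 1:
        a = pick_action(s)               # 0 = step left, 1 = step right
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # TD error: did my prospects just get better than expected?
        target = r if s2 == N - 1 else r + gamma * V[s2]
        td = target - V[s]
        V[s] += alpha * td               # critic refines its forecast
        prefs[s][a] += beta * td         # actor reinforced by the critic
        s = s2

print([round(v, 2) for v in V])          # values climb toward the goal
```

The actor never sees the final win directly; it learns from the critic's change in expectation at every step, which is exactly how this sidesteps the temporal credit assignment problem.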
Or you got that really easy. It's so easy. Yeah. That was easy peasy for you. But you're saying that that's equivalent to the predator prey evolution? Well, we're going to find out now how vertebrates basically created this system. What happened in their brains?
So there was a whole history in the 90s where there was a big mystery about what dopamine does in the brain. Because we used to think that dopamine was the pleasure signal. And there was a lot of really great evidence by this guy, Kent Berridge, that discovered that dopamine actually is not the pleasure chemical. And there's two reasons why we know this is the case. So he came up with this experimental paradigm to measure pleasure in a rat. And you can do this in babies too. A rat, when it's satisfied, will like smack its lips.
And when it's not satisfied, it like gapes its mouth. So it's like a facial expression. He asked the question, okay, when you give a rat's brain a bunch of dopamine, it eats a lot. We can see it consume dramatically more food, but is it enjoying it more? And what he found is actually, if anything, it makes less pleasurable lip smacks. It's just consuming because it can't resist. And so dopamine doesn't actually cause pleasure. It just creates cravings. This is where Buddhists enter the chat. Exactly. Exactly.
So dopamine is clearly not the pleasure signal. When we record dopamine neurons, we also see that it doesn't get activated when you actually get a reward. It gets activated when some cue occurs that tells you you're soon going to get a reward. And so what's interesting is in a game of checkers, that's exactly the same thing as the signal when you make a move and all of a sudden you realize, holy crap, my position just improved. That's when you get a dopamine burst. Uh-huh.
because your likelihood of winning goes up. And that's the learning signal that drives you to be more likely to make that move in the future. So there's a part of the brain called the basal ganglia. You love this. This is your favorite. I love the basal ganglia. You're horny for the basal ganglia. It's a great brain structure. And the basal ganglia in a fish is pretty much identical to the basal ganglia in a human, which is kind of crazy. I mean, there's some minor differences, but the broad macro structure is the same. And there's a lot of good evidence that the basal ganglia implements an actor-critic-like system.
And so if you go into the brain of fish, you see these same signals where when a cue comes up that makes it look like the world's about to get better, the regions around dopamine neurons get excited the same way that happens in mammals. And so there's good evidence that vertebrates, in order to learn through trial and error, which we know fish can do,
implemented something akin to Richard Sutton's actor-critic system. And this enabled them to learn arbitrary behaviors. This is why you can train fish to jump through hoops with treats. This is why fish will remember how to escape from a maze a year later. They can learn through trial and error. I didn't know fish were doing that. People don't appreciate fish enough, man. Not at all. You don't see them at SeaWorld doing any cool tricks. Yeah, yeah. That's why. Good for them, though. Not a sponsor. They dodged a bullet.
Fish are the sponsor. If you go to YouTube and you look up fun fish tricks, you can find, I'm sure you'll do that in your spare time. I'm pretty busy watching crow tricks. I see. I love crow tricks. Crow tricks, yeah. They can do like an eight step problem solving. Yeah, they're incredible. Okay, so within this new...
development in the brain, we also get relief, disappointment, and timing and pattern recognition. This all happens in this section of evolution, the reinforcement learning section. So how does relief and disappointment enter the equation? So when you have a running expectation of some future good thing, so if you're playing a game of chess and you have now a system that's predicting your likelihood of winning,
Then when you lose, when you have an expectation that doesn't come true, this is the emergence of disappointment. Relief is just the inverse. When you expect something really bad to happen and it doesn't happen, that's relieving. So when you have an expectation of a future reward, that's what begets the emotions of relief and disappointment. We discovered quickly that AI systems can't really be taught sequentially.
Like what they first started doing when they were just teaching the computer to add, they started with ones and they taught it how to add by ones. And then once it mastered that and they taught it how to add by twos,
It would have forgotten how it added by ones, which is weird to me. I'm not fully sure why it would just ditch what it just learned. But that was a big issue in teaching these computers. And still is. So it's called the continual learning problem. It's still a big area of outstanding research in AI. The way we train these neural networks is we have like a web of neurons. We give it a bunch of inputs, it has a bunch of outputs. We show it a data sample. So let's say we have a picture of a cat.
And then at the end, we say, this is a cat. We look at how accurate the output was and we nudge the weights to go in the direction of the output we want. The problem is, when we're updating the weights, we haven't figured out how the brain knows not to overwrite the weights of previous memories. So the way we do this with AI systems is we give it a bunch of data at once. There's a bunch of techniques for doing this, but when we send it out into the world, we don't let it continuously update weights, because it degrades over time. So this is a fascinating part of AI, and I learned it from this book.
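Here is a toy demonstration of that overwriting problem, often called catastrophic forgetting: a bare-bones classifier is trained on one invented task, then another, and its accuracy on the first collapses. The data, names, and numbers are all made up for illustration.

```python
import numpy as np

# Train a tiny logistic unit on task A, then on task B, and watch
# task A get overwritten. Synthetic data, illustrative only.

rng = np.random.default_rng(0)

def make_task(center):
    """Classify points as belonging to a cluster or not."""
    pos = rng.normal(center, 0.5, size=(100, 2))
    neg = rng.normal(-np.array(center), 0.5, size=(100, 2))
    X = np.vstack([pos, neg])
    y = np.array([1] * 100 + [0] * 100)
    return X, y

def train(w, b, X, y, epochs=200, lr=0.1):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # forward pass
        w -= lr * X.T @ (p - y) / len(y)     # nudge the weights...
        b -= lr * np.mean(p - y)             # ...toward the right output
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

task_a = make_task([2.0, 2.0])
task_b = make_task([2.0, -2.0])

w, b = np.zeros(2), 0.0
w, b = train(w, b, *task_a)
print("task A after training on A:", accuracy(w, b, *task_a))
w, b = train(w, b, *task_b)
print("task A after training on B:", accuracy(w, b, *task_a))  # degraded
```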
"Chat GPT 1, 2, 3." When those things are released, they're at peak efficiency. They're never gonna change. They were educated with all the data at once so it couldn't overwrite things and get rid of things.
and then it can't take on anything new. So it itself can't evolve. It has to stay put or it'll start ditching everything it learned. - I don't wanna overstate the claim. The AI research community has lots of techniques to try and overcome continual learning, but by and large, they're not nearly as good as what the human brain does. And the best proof of principle of this is we don't let ChatGPT learn from consumers talking to it. So when you talk to it, it's not updating its weights. It only updates its weights when researchers at OpenAI
retrain it and meticulously make sure that it didn't degrade on the key features they want it to perform. And then they send a new version out into the world. But the AI systems we want are like humans: constantly learning as we're interacting with each other. We're constantly updating the weights in our brains, and we don't have AI systems that do that yet.
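To make the overwriting concrete, here's a minimal sketch in Python, using a tiny linear model rather than anything like a real network; the tasks and numbers are invented for illustration. Train it on task A, then on task B, and the second round of weight nudges wipes out the first:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, xs, ys, steps=5000, lr=0.01):
    """Show one sample at a time and nudge the weights toward the target."""
    for _ in range(steps):
        i = rng.integers(len(xs))
        err = xs[i] @ w - ys[i]   # how wrong the output was
        w = w - lr * err * xs[i]  # nudge weights in the right direction
    return w

def loss(w, xs, ys):
    return float(np.mean((xs @ w - ys) ** 2))

xs = rng.normal(size=(200, 3))
task_a = xs @ np.array([1.0, 2.0, 3.0])    # task A: one target function
task_b = xs @ np.array([-3.0, 0.5, 1.0])   # task B: a different one

w = train(np.zeros(3), xs, task_a)
print(loss(w, xs, task_a))  # near zero: it knows task A
w = train(w, xs, task_b)    # now teach it task B...
print(loss(w, xs, task_a))  # ...large again: task A was overwritten
```

A one-weight-vector model makes the forgetting total, which overstates the case; in big networks the effect is softer, but it's the same mechanism the continual learning research is fighting.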
Well, the relief and disappointment is kind of interesting because it was reminding me of when I won state. Monica is the state champion cheerleader. Yeah. Hell yeah. And not like, go Bengals; like flying in the air. Damn. Yeah, yeah. So the second, we won twice. Sorry, two times. Wow, wow. Yeah. But the second time, we won by one point and it was very close. And the expectation was that we would win because we had won last time. And the feeling of winning was wild,
not happiness. It was just pure relief. And I always thought that was weird for all of us. We were just so relieved. Where does happiness fit into any of this? It doesn't really. Well, happiness is a big mystery. I think it's also a problem of language because I think
When we use the word happiness, we are referencing many different concepts, but there's a loose connection to reward. And so we definitely know that if you have a high expectation of a reward and you still are given a reward, but it's lower than your expectation, you get a decrease in dopamine. And so we know that there is a running expectation of rewards that affects happiness: how much dopamine you get when you receive something.
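The arithmetic underneath that is just a prediction error, the actual outcome minus the expected one; a toy sketch with made-up numbers:

```python
def prediction_error(expected: float, actual: float) -> float:
    """A dopamine-like signal: what you got minus what you expected."""
    return actual - expected

print(prediction_error(expected=10.0, actual=0.0))   # -10.0: disappointment
print(prediction_error(expected=-10.0, actual=0.0))  # +10.0: relief
print(prediction_error(expected=10.0, actual=5.0))   # -5.0: rewarded, yet it stings
```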
- How did curiosity evolve? That was the last one. - Curiosity is so interesting because when we were building AI systems to play games, DeepMind, which is now a subsidiary of Google, did some of the best research. - We interviewed Mustafa. - Oh, great. There are certain games that they still couldn't figure out how to play, even with all of the smart actor-critic stuff,
and most famously, "Montezuma's Revenge," which was like an old game. - Well, really quick, it had conquered all the Atari games. - Yes. - DeepMind. It could win every game except for this one game. - They couldn't figure it out. The problem with this game is there is no reward for a long time. The first reward is like five rooms over when you escape from this first level. And so the system just never could find it. And they realized it was because
It never had a desire to explore. So there's classically this thing called the exploration-exploitation dilemma; it's in AI and it exists in animals too. Once you know something works, you don't know for sure that it's the best way to solve the problem. So when do you actually explore new approaches to figure something out? Even though you've got a reward, there might be a better way. There might be more rewards. So imagine that you enter a maze, you go in, you turn right and you get a cool whatever. When you go in the maze the next time, are you going to do that same move or are you going to try somewhere else? Original AI systems solved this problem by just saying 5% of the time you're going to do something random.
But that's an issue because if the solution's far away, doing one move locally in this area isn't going to get you to the full new room. So what they came up with was effectively imbuing curiosity into the system. What they do is they have another perception module that's trying to predict what the next state of the screen is going to be. And whenever something surprising happens, that is a reward signal. So the system actually tries to surprise itself. Once it's explored a room, it gets bored.
and it likes finding the way out of the room and it gets excited to see a new room. Armed with curiosity, it beat level one. And so we see curiosity also emerge in vertebrates. What a task: to diagnose, oh, it's lacking curiosity, and then to try to think of what the solution in code for curiosity is. Incredible. Yeah, it's really mind-boggling. Yeah.
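Here's a hedged sketch of those two exploration strategies in Python; the action names, state encoding, and constants are invented for illustration:

```python
import random

ACTIONS = ["up", "down", "left", "right"]

# Strategy 1: epsilon-greedy. Some small fraction of the time, act randomly.
def epsilon_greedy(q_values, epsilon=0.05):
    if random.random() < epsilon:
        return random.choice(ACTIONS)       # explore: try anything
    return max(q_values, key=q_values.get)  # exploit: do what has worked

# Strategy 2: curiosity. A second module predicts the next screen; when it's
# wrong, that surprise is itself treated as reward, so novelty gets sought out.
def curiosity_bonus(predicted_screen, actual_screen, scale=0.1):
    surprise = sum((p - a) ** 2 for p, a in zip(predicted_screen, actual_screen))
    return scale * surprise  # added on top of the game's own score

q = {"up": 0.2, "down": 0.0, "left": 0.9, "right": 0.1}
print(epsilon_greedy(q))  # usually "left", occasionally a random move
```

With only the first strategy, a random 5% wiggle never strings together the long, unrewarded path out of Montezuma's first room; with the second, an unexplored room is worth points on its own.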
Okay. Breakthrough number three is simulating, in the first mammals. Now we're getting into the zone that endlessly fascinates me. So what happens? 150. Oh my God. This whole thing has been so insane. Well, it's going to peak at primates, as you would imagine. Chimp crazy. Yeah. Theory of mind. That's so crazy that we can imagine what someone else's motivation is and what they're trying to do. But we're a couple steps away from that. But simulating is the first step en route to that. So what is it that mammals do uniquely?
When people study the neocortex, people usually think about the neocortex as doing the things that it seems to do in human brains. So movement, perception. Without a neocortex, humans can't see, hear, etc. But what's odd about the argument that the neocortex does those things is if you go into a fish brain...
and a lizard brain, they don't have a neocortex. And yet they are pretty much just as good at perception as we are. I mean, you can train an archer fish. They're called archer fish because they spit jets of water out of the water to catch insects. And you can train an archer fish to see two pictures of human faces and spit on one of the faces to get a reward. No. No.
No. What? It will recognize the human face, and you can rotate the face in 3D and it will still recognize the face. No. Oh my Lord. So fish are just as good at perception. So then why did the neocortex emerge in these early mammals? What was the point? So this is one of the big things that motivated me to go really deep on what's called the comparative psychology literature,
to see what really are the differences between mammals and other vertebrates. I need to make one caveat here. Evolution converges independently all the time. So just because fish can't do something that mammals do, it doesn't mean all vertebrates can't do it. A great example is your love of crows. Birds seem to have independently evolved a lot of the things that mammals did. That's just an important caveat. There are at least three things that seem to have uniquely emerged in mammals. And they seem different at first, but through this sort of framework it became clear to me that I think they're actually different applications of the same thing. One is something called vicarious trial and error. So this guy, Edward Tolman, in the 30s and 40s, noticed that rats, when put in mazes, would stop at choice points. So a point where there was a fork in the road.
And it would look back and forth and then move in one direction. And he theorized, I wonder if what the rat is doing is imagining possible paths before deciding. And he was largely ridiculed for this idea because there was no evidence that the rat was actually doing that. And it wasn't until this guy, David Redish, and his lab,
I think it was the early 2000s, went into the brain of a rat and confirmed that they are doing exactly this. And the experiment's very cool. So in a part of the brain called the hippocampus, there are something called place cells. So you can see that as a rat is moving around a maze, there are specific cells that activate whenever they are at that location. So neuron A activates only when they're at location A, neuron B, location B. By recording them, you can see a map of the space.
Usually these place cells are only active when the animal is actually in that location. But when they reach these choice points, the place cells no longer represent the current location. You can see them playing out possible paths. Oh, wow. You'll see the neuron fire for location C or D. Not only the location C or D, the path to location C or D. So you can watch a rat imagine possible futures. Whoa, whoa, whoa. Oh my gosh.
And it's this weird paradox where we're so high on ourselves: we're really reluctant to acknowledge that other animals are doing the modeling, which is what it is. You really think about the ability to model a scenario in your head and predict the future and create the whole thing and see the whole thing. What a step forward. Mind-blowing. And you can imagine, I mean, the ecosystem that this ancestor of ours grew up in was one ruled by dinosaurs.
Our ancestors were very small, four-inch-long squirrel-like creatures that hid underground and in trees in a world that was ruled by huge predatory dinosaurs. And so it's speculative, but it's likely they used this ability to come out of the burrow at night and try to figure out, how do I get to this food and back without getting eaten? Because they have the gift of this sort of first move: I can see where the predators are and decide and plan, will I make it back and survive?
And that's sort of the gift of mammals. And we can see birds probably have this too; that evolved independently, by the way. It's even pointed out in the book that we would intuitively know there's something different, because if you watch reptiles move around, they're kind of clumsy. They're still kind of moving in the like, oh, turn this way, turn this way. Whereas the mammals...
are moving much quicker. They can predict where they should land on a branch. They can predict which part of their body should be used. So fine motor skills emerge. There are two other abilities that I think are good to go through. So David Redish did a follow-up study. He wanted to ask, okay, if I can imagine possible futures, might I also be able to imagine possible pasts?
And so he came up with this experiment that he called Restaurant Row. So you put a little rat in this sort of circle and there are four different choice points. And every time it reaches one of these doors, a sound goes on. And the sound signals whether, if it goes to the right, it will get food in one second, or if it has to wait 45 seconds. Ooh, that's a big, that's a big, big choice. For a rat it is. And it learns which treat is at each location. So one is a cherry, one's a banana, et cetera. And once it passes the threshold, it can't go back.
So it's a series of irreversible choices. So here's the experiment. We know that rats, he verified this, rats prefer certain treats over others. So what happens when it's next to a cherry and it gets the sound for one second and the next one is a banana, which it likes way more than cherry? It has to make a gamble.
I can go to the next one and hope for the banana, but if it gets 45 seconds, I'm gonna regret it. I should have just had the cherry. Bird in the hand. Exactly. So what happens? When the rat makes the choice and the 45 seconds goes on, he goes into the brain of the rat and he sees two fascinating things. One, you can go into the part of the brain called the orbitofrontal cortex, and you can see the rat imagining itself eating the cherry.
Oh my Lord. It regrets the choice and is imagining the alternative one. And you can see it look back. Oh my God. And the next time around, it becomes much more likely to say, screw it, I'm just going for the cherry. So you can see rats imagining alternative past choices, which is really useful for what's called counterfactual learning, which is imagining if I had done something different, the outcome would have been better. We do a lot of that. That's an ability that evidence suggests emerged in mammals. Counterfactual learning. Yeah, give some more human examples of counterfactual, how we do this all the time.
We play a game and at the end we lose and I go, man, we would have won had I done this one single thing. That's actually an incredible feat because reinforcement learning alone doesn't solve that problem. All you know is you lost at the end. But we're capable of imagining what move would have won us the game. And that's a really powerful ability. You're right. You're doing both things at the same time. You're playing through the whole event from your history and then you're modeling future or different events.
options and then seeing how that would have played out. You're juggling a bunch of different timelines. Right. One really cool study of this too is you can play rock, paper, scissors with a chimpanzee. You can train them to play rock, paper, scissors. And they've found that let's say a chimpanzee loses when it plays scissors and you play rock. What is it more likely to do next? Eat you.
Probably eat your face. Maybe. Play rock. If you played rock and it played scissors, the next move is likely to be paper because it's imagining what would have won the prior move. Right, right.
If it was just reinforcement learning, where it wasn't able to model these things, it would maybe be less likely to play scissors, but it wouldn't be more likely to play paper. But because it's imagining what would have won, it does. Well, I'd even argue if it was strictly reinforcement, it would just play rock because rock won. Rock's a winner. Sorry, I was imagining the chimpanzee lost. It played scissors and you played rock. Right. So then the chimp next time, if it was strictly reinforcement learning and it lost,
would probably play rock, no? If it's capable of imagining itself in your shoes. But all it did was take the action of scissors and lose. So a chimpanzee probably could do that, you're right. But standard reinforcement learning is: I take an action, I get an outcome, and then that's it. And I just don't play scissors again. Right, right. That's all it can really think.
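A toy version of that difference in Python; the preference table is invented, and real model-free learners are fancier than this:

```python
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # what beats what

prefs = {"rock": 0.0, "paper": 0.0, "scissors": 0.0}

def model_free_update(my_move, won):
    if not won:
        prefs[my_move] -= 1.0  # all it can do: devalue the move that lost

def counterfactual_update(opponent_move, won):
    if not won:
        prefs[COUNTER[opponent_move]] += 1.0  # boost the move that WOULD have won

# The chimp plays scissors, the human plays rock, the chimp loses:
model_free_update("scissors", won=False)  # scissors goes down; paper untouched
counterfactual_update("rock", won=False)  # paper goes up, since paper beats rock
print(prefs)  # {'rock': 0.0, 'paper': 1.0, 'scissors': -1.0}
```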
Now, Henry Molaison is an interesting character. He's kind of like Phineas Gage. He's the Phineas Gage of the hippocampus. Phineas Gage, of course, lost his frontal lobe, and we learned a lot from that. So what happened to Henry? Famously, patient H.M. He had to get all of his hippocampus removed because he had epilepsy. Ding, ding, ding. Epilepsy. Yeah, yeah, yeah. He had very bad epilepsy. Monica has very mild epilepsy.
Excuse me. Well, you don't have to have your hippocampus removed. I know. You're right. You're right. So I guess it's very mild. Call me when you've had your hippocampus removed. Yeah, yeah. So patient H.M. lost the ability to create new memories when he woke up from the surgery. Everything else is the same, Monica. His personality hasn't altered. He's competent, but he cannot make a memory going forward from that moment, which again would really...
lead you to think that memory's in the hippocampus, but that would be incomplete. Well, the fascinating thing about this, this is one reason why, if you take like an intro psychology course, they try to divide memory into different buckets. This is one of the canonical studies that differentiated what's called procedural memory from episodic or semantic memory, because he could learn new things, but it wouldn't be an episodic memory of an event. He could learn to play piano and he would be like, I don't know how to play piano, but then you put him in front of a piano and he's playing. Oh,
And he doesn't know how he knows it. Memory broadly lives in more places, but a certain aspect of memory was lost. We see this with Alzheimer's a lot. Yes. Also, something that comes out of this new ability in mammals to model is goals versus habits. This one really hit me this morning because we can all relate to this so much. Let's talk about the mice
and what they learned in goals versus habits. The famous study around this is what's called a devaluation study. So the way it works is you train a mouse, or usually a rat, to push a lever a few times and then get a treat. Now, what happens if, separately from this whole experiment, you give it the same treat, but you lace it with something that makes it sick?
You verify that it doesn't like the treat anymore because the next time you give it the treat, it eats way less. When you bring it back and show it the lever, does it push the lever? And what they found is, if it's only experienced it a few times, the rat will remember that the lever leads to the treat: I don't want it. It simulates the goal and says, I don't want to push the lever. But if it had pushed the lever 500 times before, it will push the lever. It'll ignore the food, but it can't stop pushing the lever, because it's built this habit that I see lever and I push it. Whoa.
Yeah, a hundred times or less, it'll stop pushing the lever. But if it's gotten to 500, it'll just keep doing it. And I was just thinking of humans and like smoking, you know, it's just something you've done a hundred thousand times by the time you're five years into it. And really nothing's going to combat that. There's good evidence that when behaviors become automated, they actually change the location in your brain. So they shift to different parts of the brain that are more automated.
And this is obviously useful for us. It's the reason why we can walk down the street and not think about walking. Or drive a car. Or drive a car. It's all automated. Daniel Kahneman famously talked about system one, system two. Habits are also the cause of lots of human bad decision-making, because we are devoid of a goal when we're acting habitually.
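A hedged sketch of the two modes in code; the world model and the value table are hypothetical stand-ins for what the rat's brain is doing:

```python
def goal_directed(state, simulate, desirability):
    """Imagine each action's outcome; act only if some outcome is still wanted."""
    best_action, best_value = None, 0.0
    for action in ["press_lever", "walk_away"]:
        outcome = simulate(state, action)  # e.g. "treat" or "nothing"
        if desirability[outcome] > best_value:
            best_action, best_value = action, desirability[outcome]
    return best_action  # None means: nothing here is worth doing

def habitual(state, cached_policy):
    """No simulation at all: see lever, press lever."""
    return cached_policy.get(state)

simulate = lambda state, action: "treat" if action == "press_lever" else "nothing"
# After devaluation the treat is worth less than nothing...
print(goal_directed("lever_in_view", simulate, {"treat": -1.0, "nothing": 0.0}))
# ...so the goal-directed system prints None. The habit, stamped in by 500
# presses, fires anyway:
print(habitual("lever_in_view", {"lever_in_view": "press_lever"}))
```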
Stay tuned for more Armchair Expert, if you dare.
Are you struggling to close deals? Cold outreach is wasting the time of both the buyer and seller at every stage, especially when sellers are using shallow and outdated data. Your organization can overcome these challenges with technology that translates comprehensive, high-quality buyer data into real-time insights.
These deeper insights empower sales reps and teams to adopt the habits of top performers, which leads to better outcomes, like more pipeline, higher win rates, and larger deals. We call this deep sales.
And we've built the first deep sales platform with the next generation of LinkedIn Sales Navigator. Right now, you can try LinkedIn Sales Navigator and get a 60-day free trial at linkedin.com slash trial. That's linkedin.com slash trial for a 60-day free trial. Let LinkedIn Sales Navigator help you sell like a superstar today. Just go to linkedin.com slash trial and get started.
Bombas makes the most comfortable socks, underwear, and t-shirts. Warning, Bombas are so absurdly comfortable you may throw out all your other clothes. Sorry, do we legally have to say that? No, this is just how I talk and I really love my Bombas. They do feel that good. And they do good too. One item purchased equals one item donated. To feel good and do good, go to bombas.com slash wondry and use code wondry for 20% off your first purchase. That's B-O-M-B-A-S dot com slash wondry and use code wondry at checkout.
This episode is brought to you by Huggies Little Movers. Huggies knows that babies come in all shapes and sizes, and their tushies do too. Huggies has more curves and outstanding active fit. Parents know that there's nothing worse than an ill-fitting diaper, especially for active wiggly babies. Huggies Little Movers are curved to fit all curves, so babies feel comfy no matter how much they're moving around. And we all know they're moving around a lot.
They also offer 12-hour protection against leaks, which is a game changer. Get your baby's butt into the best-fitting diaper. Huggies Little Movers. We got you, baby. ♪
Okay, so somehow AlphaZero and Go are involved in this in some way. And I'll just add, we laid out the number of possible moves in a chess match being 10 to the 120. Go's even more complicated. So this game Go. So AlphaZero was the first AI.
And that was DeepMind? DeepMind as well, yep. They beat Go, but it can't do this in the real world. It can do it in a framework where there are limited moves to be made. But once it's in a world with a mushy background and lots of variables, it can't handle that.
How are we handling that and they aren't? There are classically two types of reinforcement learning. There's what's called model-free, which is what we talked about in early vertebrates. It means I get a stimulus and I just choose an action in response to it. Model-based is what we've been saying comes in mammals. That's when you pause to imagine possible futures before making a choice.
And model-based has always been a harder thing in AI, because you need to build a model of the world and you need to teach a system to imagine possible futures. And as we said, you literally can't imagine every possible future in a game of even checkers, let alone chess or Go. So the brilliance of what they did in AlphaZero is it did imagine possible futures, but not all of them. What they did is, in simple terms, they repurposed the actor-critic system. And instead of just predicting the next best move, they said, you know what,
why don't we play out a full game, assuming you made this next move, and then let's see what happens. Then let's go back and choose your second favorite move and play a full game. And let's choose your third. And let's just do a few of those, and do that maybe a few thousand times in a few seconds. And so that was a clever balancing act, because it wasn't choosing a random move. It was choosing a move that the actor was already predicting, but it was checking to make sure that it was in fact a good move.
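In code, the flavor of it looks something like this self-contained toy; it's closer to plain Monte Carlo rollouts than to AlphaZero's real search, which rolls out using the actor network's own move predictions plus a learned value estimate rather than random play:

```python
import random

class ToyGame:
    """Stand-in for Go: players alternate adding 1 or 2; reaching 10 wins."""
    def __init__(self, total=0, player=0):
        self.total, self.player, self.winner = total, player, None
    def copy(self):
        g = ToyGame(self.total, self.player)
        g.winner = self.winner
        return g
    def legal_moves(self):
        return [1, 2]
    def play(self, move):
        self.total += move
        if self.total >= 10:
            self.winner = self.player
        self.player = 1 - self.player
    def is_over(self):
        return self.winner is not None

def choose_move(game, n_rollouts=200):
    """For each candidate move, play whole games out and keep the best scorer."""
    me, wins = game.player, {}
    for move in game.legal_moves():
        wins[move] = 0
        for _ in range(n_rollouts):
            g = game.copy()
            g.play(move)
            while not g.is_over():
                g.play(random.choice(g.legal_moves()))  # roll out to the end
            if g.winner == me:
                wins[move] += 1
    return max(wins, key=wins.get)

print(choose_move(ToyGame(total=8)))  # prints 2: the move that wins outright
```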
But to your original question, still, the game of Go has certain features that are way simpler than the real world's. On any given board, there's only a finite number of next moves you can make. Even though the total spectrum of games is astronomical, from a given position there's only really a handful, maybe 100 or so. I forget the exact number. In the real world right now, the variables of where I put my hand and how I speak are infinite. Yeah, yeah.
These are continuous variables. And so that's a much harder problem of how do you train an AI system to make a choice when there's literally an infinite number of possible next things to do? Yeah. Okay, now we're getting to primates, my favorite. And this is mentalizing. So what happens in the brain and what is the outcome?
One thing that I found fascinating going into the brain of non-human primates is how incredibly similar they are to other mammals. I mean, there's not much different in the brain of a chimpanzee from a rat other than just being bigger. But there are two regions that are pretty substantially different. One is this part of the front of the brain called the granular prefrontal cortex. And there's another part in the back of the brain with complicated names. And so there's a big mystery of what these brain regions did. Phineas Gage is an example of this odd thing where we've known from the 40s
that people with damage to the older mammalian parts of the brain have huge impairments. I mean, if you damage the old part, the original mammalian part of the frontal cortex, you become mute. The older mammalian part of visual cortex, you don't see. But if you damage these primate areas, people seem oddly fine. In fact, some people's IQs don't even degrade that much.
But the more you inquire, you realize there's all these oddities in their behavior. They make social faux pas. If you start testing them on their ability to understand other people's intents, you realize all of a sudden they struggle to understand what's going on in other people's minds. Sometimes they struggle to identify themselves in a mirror. One of my favorite studies to differentiate what were these new primate areas doing is they looked at three groups of people. People with no brain issues, people with hippocampal damage, and people with damage to the primate area of prefrontal cortex.
And they asked them to do something very simple. They just asked them to write out a scene in the future. And what they found is people with hippocampal damage would describe a scene with themselves in it. So they could articulate themselves in that scene, but the scene was devoid of details about the rest of the world. The people with prefrontal damage were the exact opposite. They could describe the features of the external world, but they couldn't project themselves into their own imaginations. They were absent from it.
Yeah, like Phineas couldn't imagine what would happen when he left whatever room he was in. But he could imagine the room. You could say, what does a kitchen look like, or show a picture of the kitchen, but he couldn't see himself traveling forward in time, as I remember. When we now look at primates, we say, cool, so what would that suggest? That would suggest some sort of theory of mind, sense of self, emerging in primates. And there are some really cool studies showing that chimpanzees and monkeys are very good at inferring the intent in other people,
much more so than it seems like most other mammals are. There are, again, exceptions: dolphins, dogs. There are some animals that seem to have independently gotten this. So one experiment I like about this is you teach a chimpanzee that when it's shown two boxes, the one with the little marker on it is the one that has food. So then you have an experimenter come in, and they bend over and they mark one, they stand up, and they accidentally drop the marker on the other one.
So the cue is identical. A monkey always knows to go to the one that they intentionally marked because it can tell what you intended to do. Oh, because they saw the accidental. They saw one was a mistake. Right. And one was intentional. Wow.
And then they also have this wild thing within that same experiment, right, where there's two people administering. And one experimenter is incapable of giving them the treat because he simply can't see it or she can't see it. Their ineptitude is preventing them from giving the chimp what they want. And then the next experimenter clearly knows where the thing is that they want and is choosing to not give it to them. Oh.
So the result is the same: they don't get the thing. But when given the option of who they want to be in the room with, they'll always pick the person that was just oblivious to where it was, because there's some chance they will discover it. How fucking complicated is that? There's like an intent-
That's when people go, like, not to get political, but it's like, intentions don't matter, outcomes matter. Well, no, they really matter. They're really what's driving almost all of our good faith and goodwill towards each other. We are acutely aware of intention. The idea that we're going to ignore intention is bonkers. Yep. Also, let's talk about the one I really liked from it. And we can talk about how this evolved; what probably incentivized it is this political arms race where you're living in a multi-member group.
The hierarchy is kind of flexible. There are ways to outwit other people. Knowing their intentions is going to be hugely rewarded. And so, Belle and Rock. I loved this story too when I stumbled upon it. So this researcher, I think his name was Emil Menzel. Just so you know, on this podcast, I've told this story. When I was reading the book, I was so excited about this example, and on a fact check I was like, Monica, this fucking chimp figured out that, you know. Sorry, I'm still kind of stuck on the other thing. Do you think that people...
are more likely to choose a partner or a friend who they think is stupid over mean? Well, sure, if you've dialed their intention to be harmful to you. Yeah, you would definitely pick a dum-dum that might accidentally hurt you. I guess so. But there's this whole thing on TikTok Liz taught me about. You always have to qualify when it's TikTok, because I don't know it well enough.
She just relays it. But there's a grid of partnership and it's like stupid and cute, mean and cute. Like you pick. And I mean, I picked hot and mean. I picked smart and hot first, obviously. Sure, duh. But I did pick hot and mean. You don't like Hufflepuffs. This is all in keeping. So to be fair with the monkey experiment, it wasn't that one of them was just so dumb they couldn't help out. No.
It was that it was hidden from their view, for example. Oh. So they just were unaware of the treat or it was in a box they couldn't get into. So it wasn't like the monkey just liked the dumb human. That they were inept. It was that you maybe would have the intention to help me. You just weren't capable of it. But the other one could have and I knew they didn't want to. That's the difference. Interesting.
- Interesting. - Not that it, you know. - Not that it changes my grid, but yeah. - Also, that cost-benefit analysis is quite simple. One could result in a treat and one definitely will not. - Right, for sure. - Exactly. - So you're way better off rolling the dice. - Exactly. - Even if they had little confidence that the dum-dum was gonna figure it out, it was possible. - Okay, so Emil Menzel. - Yes. - He was doing studies in the vein of the Edward Tolman work I was telling you about, putting rats in mazes. And he wanted to experiment with the degree to which chimpanzees could remember locations in space. That was all he was trying to study.
And so he had this little one-acre patch of forest that he had a few chimpanzees living in. And all he did is he would hide a little treat under a tree or in a specific bush and show it to one of the chimpanzees. Belle. She very quickly learned where to find the treat. And she was also, as many chimpanzees are, a good sharer. So she would share the treat with Rock, who was a high-ranking male. And very quickly he became a jerk and he would take the treat from her. Wouldn't share. He'd eat it all himself. Of course. Yes. Typical. Yeah.
And what happened from there, he wasn't expecting, which is where it got really interesting. So then Belle decided, okay, I'm going to wait for Rock not to be looking, and then I'm going to go get the treat. Okay, that's kind of smart, but maybe not that crazy. And she would sit on the treat. I think that's a part of it. They'd show Belle, she'd see it, she'd sit on it, and then she'd just wait for Rock to not be around so she could gobble it up herself. Because she too is a selfish little piggy. No, she's hungry and this mean guy keeps eating all her shit. I agree, I'm on Belle's side. Yeah, we all are.
And so then he started pushing her off. He would know what she was doing. Then he started pretending not to pay attention until she would go to the treat. Then she decided to start leading him in the wrong direction. She pretended the treat was over here. Then he would go. She'd sit on something for a while. He'd shove her off and start rooting around. Then she'd go grab it. This is how these romantic games started. Right, right.
And then Belle and Rock fell in love. Yeah, exactly. So this cycle of deception and counter-deception demonstrates an incredible theory of mind, because they're both trying to reason about what's in the mind of the other and how do I change their behavior by manipulating their knowledge.
Yes. In these multi-member chimp groups, there is a hierarchy, but some are clever enough that they will call out leopard, predator, to the whole group. And the alpha, of course, has to respond to that and rally the troops. So he's, oh fuck, I've got to go deal with this. And he goes over there, and then the subordinate gets humping on the female and passes on his genes. So right there, the intelligence and the deception were rewarded. And you can see quite easily how that would ratchet up over time, where cleverness would be rewarded. And that's why we're just innately deceptive. Interesting.
That's the brown lining of this. We can overcome it. We can transcend it. Yeah. Okay, so this obviously has not been completed. We're not there yet, but maybe we introduce the paperclip conundrum, as this theory of mind would be a solution to one of the problems AI deals with, right? So Nick Bostrom, who's a famous philosopher who wrote the book "Superintelligence,"
has this famous allegory. And in the allegory, he imagines we have this superintelligent AI system that's fully benign. So it's not evil. It's not trying to hurt anyone. And we give it a simple instruction. We just say, hey, can you make as many paperclips as possible? Because you are running the paperclip factory. And it goes, okay, cool. And it
quickly starts enslaving humans in the nearby neighborhoods to make paperclips, and then it converts huge chunks of Earth into paperclips, and then it decides, I'm going to turn all of Earth into paperclips, and it starts taking over the solar system. And his point, as silly as that sounds, is it demonstrates that you don't need an intentionally nefarious AI. It just needs to misunderstand our requests.
And I will point out, I do think this is where most people who are fearful of AI are actually on the wrong path. I think Steven Pinker points this out. We would have to program it to have that strongest-will-survive, I-must-dominate drive, you know. But a confusion, a well-intentioned accident, is actually what you should be afraid of. Exactly.
And we don't realize this, and we'll get into this in Breakthrough 5 with language, but when we're speaking to each other, we're always using mentalizing. We're inferring what the other person means by what they say, because our language is a huge information compression of what we mean. If you asked a human, can you maximize paperclip production, they would very easily be able to infer a set of outcomes that you clearly don't want,
like converting all of Earth into paperclips, or making more than needed. Exactly. But an AI system might not. Well, the great little exchange that could illustrate this for you is Bob saying, I'm leaving you. Alice says, who is she?
We would all know immediately what that conversation means. Bob has not been faithful. Yeah, she wants answers. But an AI couldn't understand that. Well, it's really interesting how all of this has unfolded: if you ask ChatGPT these theory of mind word problems, it does pretty well. But the question emerges,
how is it solving the problem? And is it solving the problem in the same way that the human mind does? Because the way it solves a problem is, we've written millions and millions of theory of mind word problems and just trained it on those word problems. And so it's a big open research question, the degree to which it's actually reasoning about our minds. And that's important when we think about embedding them into our lives. If we want an AI system that's going to help the elderly or help with mental health applications.
We really want to know how is it thinking about what's going on in people's heads? Oh, great point because the data set will be based on, quote, sane people and now you're dealing with some level of insanity. We're pretty good at intuiting what is really going on here with the person who's saying gibberish, but
if there's no comp for this gibberish, how on earth are they to? So what you're describing is the problem of generalization, which is how well a system generalizes outside of the data it was trained on. And humans are very good at that. And it's an open question, the degree to which ChatGPT is actually good at it. What we really do is we just give it more data to try and solve these problems. Right. Okay. So then that brings us to the last breakthrough, which is speaking. And I think it would be
interesting to just initially set up how the way we communicate does differ greatly from other animals who clearly have communication. Animals have calls. There's some dictionary of over a hundred words that chimps use, or maybe words isn't the right word, but they have calls. Yeah. Is this where we break off
from those other primates? Yes. The argument in the book, which is not novel, lots of people have identified this, is the key thing that really makes us different is language. Now, there is a small subset of the community that it's important for me to nod to that thinks there's many more things that make humans unique. For example, there are still some comparative psychologists that think only humans have imagination. I find that evidence not...
very compelling; I don't agree with it. But there are people that make that argument. To me, most of the evidence suggests that really the dividing line between us and other animals is just that we have language. And the more you learn about language, the more you realize how powerful this seemingly simple tool is and why it empowered us to take over the world. So right, the bird calls and the different calls we've observed in nature,
they do a very specific thing versus what we do. We do declarative labels so we can assign abstract symbols to things. Whereas, and I thought this was fascinating, if you look at different chimps
even though they've never had contact with one another, their calls are pretty much the same. You could travel all of Africa and you'll find that they're kind of the same, which is interesting. It almost suggests that they're innate, that they're in their brain already. In fact, different species of apes have similar gestures. And so when we go into the brain, it's likely that their gestures, their communication, are more akin to our emotional expressions,
our smiles, our laughs, our cries, not the words we speak. Okay, so our ability to have these abstract symbols and have this enormous lexicon now opens up really like a rocket ship to progress as a species.
because we can transfer thoughts now. Break that down for us. One really cool framing of this whole story that I find satisfying is you can kind of look at these breakthroughs as expanding the scope of what a brain can learn from. So the very first vertebrates could learn pretty much only from their own actual actions.
When mammals come around, they can learn from that, but they can also learn from their own imagined actions. I can imagine going through three paths, realize only one of them leads to food, and I don't have to go through all three. My imagination taught me that. When you go to primates, I can also now learn from other people's actual actions. Primates are really good at imitation learning because with theory of mind, I can put myself in your shoes and train myself to do what I see you doing.
But never before language was it possible for me to learn from your imagination. Never before was it possible for you to render a simulation of something in your head and translate that simulation to me so we all share in the same thing. And that's the power of language. That is fucking mind-blowing. I have a section in the book where I say we're the hive-mind apes,
because humans, we're sharing ideas all the time and we're mirroring each other's ideas. Although our minds are not actually physically tethered together, in principle we are tethered together, because we're constantly sharing these concepts. And a fascinating graph to look at side by side: the physiology of our brain in the last 300,000 years as Homo sapiens has not changed dramatically. But if you chart our understanding of the world,
Because it compounds and compounds and compounds. Anything learned is not lost. It's built upon. It becomes this crazy trajectory by being able to pass all this on. Right. Ideas become almost their own living thing because even though humans die, ideas can keep passing through humans and evolve. You know, we like to think we do a lot of invention, but really what happens is we're given all these building blocks.
And then we kind of just re-merge them together. That's our contribution as a generation. And if you took Einstein, you brought him back 10,000 years ago, he's not going to figure out special relativity. And this is how we always sort of move the puck forward a little bit as a generation. All we're doing is receiving ideas that are handed down from thousands of years of other past humans, and we're tinkering with them a little bit.
and then pass them on. 0.01% better. Ideally better. We hope. You know, not always. When does that hit critical mass? Like when is there so much prerequisite info already obtained that just acquiring it takes a lifetime? When does that curve start slowing down? I even think about what I had to learn 20 years ago in college versus what you would have to learn now. And it just keeps building and accelerating. I guess in some way, that's why AI is almost essential at some point where it's like somebody has to keep
all the building blocks. I'm not an expert in this, but there are cases that I've read about where, in anthropological history, we've seen groups of people get separated. There's huge knowledge loss.
Because before writing, which is a key invention that changes the dynamic you're describing, all the information needs to be passed brain by brain. And so because it can't all fit in one brain, there's specialization in who passes what information. And if you get a separation in this group, knowledge is lost. So there's several cases where we've seen groups get separated and technology goes backward in time. But writing changes a lot of that.
because writing allows us to externalize information in a way that was never before possible. It's a hard drive. Exactly. An anthropological hard drive where I can pass information across generations. Yeah. Do you ever play with this experiment in your head? I imagine myself going back to the 1880s, like I've time traveled there, and I have to take what I know and somehow try to implement that in
real time. I have thought about this. Yeah, right? It's like, I think I understand a helicopter. When it got down to it, all this stuff I inherited, how much of it could I actually deploy? Like, could you make an iPhone? Yeah. Right, right, right. Right. There's tons of stuff I couldn't, and then stuff I really could. What do you think you could make? The steam engine. You could, on your own. Yeah, yeah. Well, I fully understand the internal combustion engine and the steam engine. And yeah, if I could work with a guy that could fabricate metal, I could do it. I couldn't do...
much. I think most of us couldn't do shit. Like we would just be telling people, like, no, you wouldn't believe it. There's this box and on it, you see other people doing other things. It's like, I can't, I can't do it. I understand how the diode worked, but not really, you know, like you do in that life. Well, I'd have a podcast.
I'd shop a lot. There's a lot of really great... Where are the shops on this island? I think I'd get a shop. What if your first order of business, Monica, in the 1800s was to create The Row? Modern day, The Row. But in some ways, then I guess I would be creating money. So that's huge. That's a big deal. So how does AI then fold into this crazy ability to transfer thoughts and to build on thoughts? Where is it at in that process?
So the thing that's been so interesting about the recent developments in AI is how they've taken almost the exact inverse process that evolution took. So all of the big developments over the last few years have been in what's called language models, like ChatGPT. And these are systems that, by and large, even though language was the fifth thing, the final thing for us, start from language. These things don't interact with the world. They don't see the world, except for the newer ones. They don't hear the world.
All that's done is they're given the entire corpus of text that humans have made, pretty much, and they're just trained to reproduce that. And not to disparage that; an incredible amount of intelligence emerges just by training it to do language. It's remarkable what ChatGPT can do, having been trained solely on language. By the way, I'll add, we were with Bill Gates for a week in India, and numerous times we heard him say, do we know how it works? And he's like, we really don't know how it works. That was a good impression. And I just feel like, wow, if he doesn't know,
We don't. Well, the interesting thing is we know how it was trained, because what we've done with these systems is we build an architecture, we define a learning algorithm, and we just pump it data. And then it figures out how to match the training data. And so we don't know how it solves these problems. In some ways, that's great, but in some ways, that's scary. Now, to be fair, do humans always know why they do what they do? No. But we're a little bit better at explaining our behaviors than,
it seems, ChatGPT is. And so that's sort of an interesting challenge for us to overcome. - Is there a world in which you could ask it how it did that and it wouldn't know? - You can do that. The new model that was released, if we go back to the system one, system two dynamic, one reason why we're smart at some word problems is we pause to think about them first.
And so a lot of these things that used to trick language models don't anymore, because they actually did something very clever. So there's something called the cognitive reflection test. It sort of pits our system one habitual thinking against our system two deliberate thinking. So imagine the following word problem: 10 machines take 10 minutes to make 10 widgets. How long does it take 500 machines to make 500 widgets?
- This is hard. - Hold on, hold on, hold on. No, I got it, I got it, I got it. Hold on, hold on. We have 10 machines, 10 minutes to make 10 widgets. - Yeah. - How long does it take for 500 machines to make 500 widgets?
10 minutes. Okay, smart man. Because of the way I described it, you'd be inclined to think it takes one minute to make a widget. Exactly. So we want to just sort of fulfill the pattern recognition. But if you pause to think about it, you realize actually you just increased the number of machines; it should take the same amount of time. So if you asked questions like that to ChatGPT in the past, it got those wrong. Oh.
It doesn't anymore. And one of the main things they do now is something called chain-of-thought prompting. And what you do is you say, think about your reasoning step-by-step before answering. That's one reason why ChatGPT is so long. It's belabored. It writes its thinking out a lot before answering, because they've tuned it to do that, because its performance goes way up when it thinks first. What's so interesting about this form of thinking is it's transparent. You can look at its thinking right in front of you. And the new version of ChatGPT they just released does even more of this. It does a lot of thinking beforehand, but all it's doing is talking out loud. In some ways, that is recapitulating the notion of thinking, because it's saying things in a step-by-step process before answering. Yeah, it's like reflecting before. Exactly.
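A minimal sketch of what chain-of-thought prompting amounts to; ask_model here is a hypothetical stand-in for whatever language model API you'd actually call:

```python
def with_chain_of_thought(question: str) -> str:
    # The whole trick is the preamble: ask for the reasoning before the answer.
    return (
        "Think about your reasoning step-by-step before answering.\n"
        f"Question: {question}\n"
        "Step-by-step reasoning:"
    )

prompt = with_chain_of_thought(
    "10 machines take 10 minutes to make 10 widgets. "
    "How long do 500 machines take to make 500 widgets?"
)
print(prompt)
# answer = ask_model(prompt)  # hypothetical LLM call
# Writing the steps out (each machine makes one widget per 10 minutes, so more
# machines don't change the time) steers the model to the right answer: 10 minutes.
```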
Okay, I just want to flag this one thing because this is in the book, actually. There's a neat case of a guy who has face paralysis. He can't move his face. And so if you ask him to smile, he cannot smile. But if you tell him a joke, he will smile and laugh.
Because you have motor control in your amygdala as well as motor control elsewhere. It's kind of nuts. I would have thought motor control... So his face is then moving? Yes, because the amygdala is firing the sequence of electricity to engage the muscles, whereas the frontal cortex, or whatever area of the brain you would use to decide to make a smile, is damaged, but the amygdala is not. Whoa.
And okay, this is my own pet peeve now. You can shoot this down or whatever. But as I hear the war march towards AI and everyone's freak-out about AI, I keep checking in on where we're at robotics-wise. And we're fucking nowhere. And we get really, I think, distracted with how special our brain is at modeling and thinking and communicating. And we...
really undervalue our brain's ability to move these quadrillion muscles in an infinite number of ways, taking in all this information. I mean, what we do physically, to me, is as impressive as what we're doing mentally. And we're like nowhere near that in AI. Like when I hear all these people going, oh, we're going to lose 8 billion jobs. Like,
Like, you think there's a robot that's going to be able to fix my car. I can go out in the garage and open the hood and diagnose. That's fucking hundreds of years. Do you watch South Park? You see the South Park? There's a hilarious South Park episode where all the lawyers are out of work, but the mechanics are flying to space. Oh, that's great. There's that whole thing that doctors could be replaced, but nurses not really because they have to like put the bandaid on and they have
to check. Draw the blood. It's very hard to make predictions on these things. I think a lot of people thought that visual artists would be the last thing that AI could do. And now we have these models that produce incredible visual art. And yet, yeah, doing the dishes is still...
so hard to figure out how to get a robot to do that. And there's a big schism in the AI community. I tend to agree with you that I think we're going to need new breakthroughs to figure out how to get robots to have the motor dexterity of a human. There are other people that are trying to scale up existing systems to solve these problems. I'm skeptical that approach is going to work. I just wonder if they'll have to go through the same processes where they identify when did this motor control start? How do we build it in a much simpler way and keep adding as opposed to like trying to crack
how our brains do all this. Totally. I just think it's miraculous the way we can move our bodies through time and space, and no one's really talking about that. Yep. Well, listen, I was really, really intimidated by all this. It's a very, very dense, in a great way, book. You cover geological evolution, animal evolution, AI
history, and it's all cohesive and multidisciplinary. And as you talked about, it's such an impressive book. You made it quite easy, so thank you so much for helping me through this. Fucking guys,
put in your apps to Washington University in St. Louis. Yes. Has that ever been explained? They have an incredible PT program. I do know that because I know a lot of PTs. Monica's kink is PTs. I just happen to have a lot of friends who are PTs. And it's a very good program. Yeah. So you're in this space. What's the big next thing, I guess, that we need to kind of solve before this becomes the science fiction everyone thinks it already is? I think some of the...
doom discussions are distracting from the more immediate challenges that we need to solve for. These language models are going to be embedded all over our lives. You even saw with the recent Apple announcement, we're going to have these language models like ChatGPT directly on our phones.
And I think there are a few things that we should think about as we roll these systems out. I think the chance that these things wake up and take over the world is not the concern we should be primarily focused on. Things like misinformation; things like, when you ask ChatGPT a controversial moral question where humans don't agree on what the right answer is, who gets to decide what ChatGPT tells
our children? And these are tough questions. And I think these are the types of things we should think about. Also a concept called regulatory capture. When Congress doesn't understand a new technology, who are the people that are guiding them in how to regulate it? If it's the people who get the financial benefit from deciding how it's regulated,
That's a problem. And so making sure there are more diverse voices in those discussions, deciding how we want to regulate these systems to make sure that we capture the upside for humans. I mean, there are amazing things even just the AI systems we have today can do. For example, imagine someone who doesn't have access to the high-quality medical care of developed countries
is able to ask ChatGPT a question and get amazing medical answers. Play that forward five years to when it's actually way better at answering medical questions. That's incredible that for free, you can effectively have access to an incredible doctor. Think about education. I was grateful that I really loved my educational program. My wife didn't really feel like she jived with the way she was taught in high school and middle school and so on. And the reason is because we have to just
train to the average, right? Or to one group. We can't give everyone the same pedagogy, because people are all different and we only have one teacher. But with these AI systems, you can imagine that everyone has their own personalized tutor that helps them learn the way they want to learn. These are incredible things we can do.
So we don't want to regulate AI to the point where we can't capture these amazing benefits, but we need to protect against the possible downsides. And we want to make sure the right voices are part of that process. Seems really obvious that academia is going to have to play a huge role in that, right? Because who else is going to know about all this stuff and stay abreast of it in the way that an entrepreneur is going to?
I mean, I don't even know if academics can even be on par. The reason I'm hopeful is because social media was kind of like training wheels for this AI thing. The way we approached the social media problem is we always regulate post hoc. We always say, let it play out. Let's see what goes wrong. And then we'll try and band-aid the problems. And I think most people agree that approach wasn't a great one with social media. Didn't really work out. Right, right. Although I'll just add to that: so many of the outcomes are absolutely unimaginable.
I think, in good conscience, those people at YouTube designed their algorithm to give you more of what you wanted, not anticipating you'd get more radicalized in the process. But that's what he's saying. It's after the fact that it's like, oh yeah, that's bad, as opposed to anticipating. The reason I'm flagging that is, what people are calling for is more anticipatory regulation. But my argument against that is you actually can't anticipate what things will emerge.
I bet some people can. I mean, to some degree you can, but it's like, we just saw it time and time again where it's like the best intention thing. Well, wow, this fucking result was inconceivable. I'm not advocating for any specific regulation. I'm not even saying we need lots of regulation. All I'm saying is we definitely want our eyes on the ball. The answer might be, you know, the risks aren't worth doing anything and we wait and see, but making sure we have our eyes on the ball is, I think, important. And the conversation's not...
fully complete unless you also acknowledge the geopolitical pressure. Of course. We should regulate this and slow it down. Okay. But no one else in the world's going to. Well, that's not an option. You just kind of go, well, okay. So really, we're just on this fucking missile because everyone's on the missile. Yeah. And we're going to do our best. But I just think it's a little naive to think
we can just stop it. In ways it's unfortunate, but also I think it's the reality. Well, I don't think anyone's asking for stopping, but I think regulation can also actually help with progress. If we have regulations and other countries don't, we're like, oh no, that actually might implode for them. Yeah, maybe there'll be one where we go, we're going to sit this one out. They might get the advantage or they might collapse. Yeah. Dark. Yeah.
Anyways, Max, this was awesome. I love your book. It's so good. I recommend it to so many people. I'm going to listen to it probably a third and fourth time. I hope everyone checks it out. Thanks so much for coming. A Brief History of Intelligence, Evolution, AI, and the Five Breakthroughs That Made Our Brains. And just...
All love to Angela Duckworth. She's the one who turned me on to this book. Do you know her personally? Someone sent me the podcast she was on. It was like, Angela likes your book. So I reached out to her. She's so amazing. Oh, she's so awesome. So we've become friendly. We've chatted a few times. She's amazing. Oh, I love her. She is a content mega bomb. She's so great. All right. Well, good luck with everything. Everybody read A Brief History of Intelligence. Until your next book. Thanks for having me.
Stay tuned for more Armchair Expert, if you dare. And we're back with Canva Presents Secret Sounds, Work Edition. Caller, guess this sound. So close. That's actually publishing a website with Canva Docs. Next caller.
Definitely a mouse click. Nice try. It was sorting 100 sticky notes with a Canva whiteboard. We also would have accepted resizing a Canva video into 10 different sizes. What? No way. Yes way. One click can go a long way. Love your work at Canva.com. We hope you enjoyed this episode. Unfortunately, they made some mistakes.
New jumper? Yes. Yes, it is. You know, your question kind of stuck with me last night and I was ruminating on like, oh, why don't you ever say anything about J.D. Vance anymore? And I had like an answer while we were driving, but I really thought about it more. And I think my conclusion is six and a half years ago, everyone loved him. Everyone was sucking his dick and loved his book. And I was like, this guy's full of shit. Yeah.
Warning: this guy's full of shit. That was what I wanted to shine a light on: this guy's full of shit. So once everyone agrees that he's full of shit, I'm not inclined. And I was thinking, yeah, that's my worldview. So you and I fought in November where I was going, Biden is not competent. He's not going to win an election.
And everyone's mad. Come July, everyone sees the debate and goes, oh yeah, he can't do it. I'm not, at the time of the debate, going, he can't do it, because I'd already been sounding the alarm. You normally do say this, though. You do normally say, like, yeah, I've been saying this for a long time. Like, I've heard you say that. Yeah. I think your worldview, what you scan for, is like, who's being oppressed, which is great, and how do I advocate for people? Mine is, my trigger is groupthink, when I think everyone's like under the spell of somebody, or there's some weird groupthink that I think has gone awry. That's my calling. Like, that's the thing that'll animate me and motivate me to be loud. You also do want people to think.
Like when you said he's full of shit, you wanted people to think that that's also groupthink. That you're hoping that turns into groupthink. No, I'm hoping it shatters groupthink. Into a new groupthink. Like it's all groupthink. Well, into reality. Reality was he was full of shit. Like that's been demonstrated. Yeah, definitely. And I was screaming it. Yeah, exactly. So in essence, like that's been accomplished. The groupthink, the spell has shattered on him.
So I don't have a role in that anymore. Yeah, I was just surprised I never heard you say like, yeah, I've been calling that for a long time. Like an I told you so or something? Yeah, because I do think you can do that. I have heard you do that in a good way. When he came onto the political scene and it was so shockingly... Abrupt. Crazy. Yeah. And...
very different from the guy in the book. Exactly. Yeah. I said to you, I was like, oh my God, you were right. Like you were right. And I've also told many people like Dax called this for a really long time. Thank you for that. I just was, I realized why, like it was a very legitimate question. Like, why did you hate him?
so much six and a half years ago. Now the guy's actually going to maybe be vice president, and you have nothing to say about it. Well, I feel like you're done saying bad stuff, which I find... there's a part of me that's like, well, now that he is associated with this other side that I do think sometimes you try to protect.
Yeah, I think that's what you thought it was motivated out of, which is a very legitimate guess at what's going on. And so I just I really thought about what's going on because that's not what it is. I mean, other than I'm not trying to piss off half the country ever. Yeah. But at that time, he didn't represent a political faction. He just, in my opinion, was a guy everyone bought into that I thought was full of shit. Yes.
So, and I think the Biden thing's the same thing. Like, I had a lot to say when it seemed to me like, I wanted to go, guys, guys, guys, we have to pivot. Like, we have to figure this out. And then once people were ready to pivot, I was done, you know, I was past that. Like, the thing I had hoped people would finally get on board with had happened. Mm-hmm.
And so, yeah, my focus is, I think, just my nature. It's like, now I'm looking around going, what's another thing that we're all kind of a little delusional about and I think is gonna explode or implode? That's just my nature, that's the thing I'm always looking for and sounding alarms about. Yeah, that makes sense. If I look at my history of the people I have like agendas against, it's never the popular bad guy.
Right. I'm not, and that made me the outsider, the punk-rocky one. I think it has a lot to do with single mom in a really idyllic neighborhood where everyone was married. I think that's where the chip on my shoulder comes from, like, that was the groupthink. Right.
Right. Sure. And I was like, you guys are wrong. Like, our household is actually very truthful and has a lot of integrity and honesty, you know? So yeah, I think that's my kink. And I think that's why I used to talk about them a lot. Now I'm not interested in talking about them. And I think if you look at the little things I'm stuck on right now, if and when it ever turns out I'm right about those, I'll have no interest in them anymore. Amen.
Interesting. Does that make any sense? Yeah, it does. It was fun to think about. Well, good. Yeah. What did you think about?
We won an award last night. We did. Yes. We won an award. This was a post-award chat on the car ride home. It was. Yeah, yeah. Yeah, we won a really nice award from Variety. Yeah. Greatest Podcast of All Time Award. Yeah. I didn't read what the official award was, but I think that's what they were indicating. Greatest Podcast in the History of Podcasting. Yeah. And it was lovely. And we went to Funke. Yeah.
I think that's how you say it. I think so. F-U-N-K-E. Yeah. Which is a very hard-to-get-into restaurant. And they like did this whole thing. We were on a rooftop. Yeah. It was nice. It was very nice. Rob, will you look up when the official first podcast ever was? Oh, that's a good question. I just want to know how embarrassingly short the history of this medium is. 2003. Okay. 21 years. 2003. What was it?
So this is technically Radio Open Source by Christopher Lydon, which was the first podcast, launched in 2003. Thank you, Christopher Lydon. I feel like we should have a portrait of Christopher Lydon, assuming he's not a Nazi or something. I haven't done enough research. Let's do a deep dive. Let's just figure out if he's worthy of having one. I feel like he needs to be honored.
I do, too. He gave us our lives. That's a ding, ding, ding, because I was just editing a one-sheet for an upcoming project that we're all involved with. Uh-huh. That has to do with an early podcast experience, an early podcast that I found that I loved and was obsessed with. But that did have me thinking about podcasting before I was in podcasting. Yeah. Which is... 21 years old. Clive Barenthal. What was the name?
It's 10 windows ago. Yeah. Shit. We're supposed to honor him and I already forgot his name. Fuck. I'm very bad at honoring everyone. Christopher Linden. Christopher Linden. Christopher Lydon. Christopher Lydon. Lydon? Yeah. L-Y-D-O-N. Oh. That makes me think of Johnny Lydon. That is. Lead singer of the Sex Pistols and then P.I.L.
Cool. Great. We're not here to honor him. I am. I loved P.I.L. You just moved on so quickly from Clive. Well, it's going to help that I can remember his last name. Oh, Christopher. Lydon. Lydon. Let's put a picture of Johnny Lydon up. No. I'm against him. That's groupthink. What was his nickname in the Sex Pistols? It wasn't Johnny Lydon. It was like...
It was Sid Vicious, and I think he had like a gross nickname for himself. Johnny Rotten? Johnny Rotten. It's like when you talk about cars. Speaking of, we're recording with the guest today that I know, that I love. I'm so excited. But I know there's going to be a chunk of time that's going to go to some car talk. I'm prepping mentally. I don't think we're going to have time today. I started the book last night. It's so good. It's so good. It's so fucking good.
I don't normally do that. I don't like to do that. Yeah, yeah. But I couldn't help myself. How good of a writer is this? Such a good author. Couldn't help myself. Yeah, it's so tasty. The way this author crafts their stories, which is very proprietary to them, which is so interesting, is laying out like 30 different stories at a time that are ultimately going to get all woven together into this overarching hypothesis. Hypothesis. It's a singular hypothesis. He's so playful. So last night, you guys want to know this, but I was so distracted from the moment we got on Fountain because we were
driving behind a 1988 Mustang GT convertible. Okay. What color? Black. Oh, wow. Of course, in high school, and I'll give him specific credit, Johnny O'Neill. Johnny Lydon? Pretty similar. This kid was, he's the most gorgeous kid. Colleen's older brother. Okay. Colleen, who I was in love with. Shauna. Oh, who is Shauna, girl? There's so many people. Oh, that's Randy Hamina, junior high. This is now high school. Okay, got it. I thought you were with Carrie the whole time in high school? No. Randy Hamina was off and on from seventh grade till ninth grade. She broke up with me when I moved to another town. Okay. Very early into ninth grade. What a day. I was already such a loser at my new school and I had a really bad haircut and I didn't have cool enough clothes and I had a lot of acne and my nose got big and I lost weight. It was rough. And then she dumped me. By the way, she should have dumped me. Nothing on her. We didn't live in the same town anymore.
We were fucking in ninth grade. It was more of a logistics issue. I wasn't around. Long distance. Out of sight, out of mind. Yeah. That's ninth grade. And then I'm pretty much girlfriend-less, as I recall, until probably 11th grade. Okay. And I had had that hot streak sixth through eighth grade. So it was a real adjustment. Okay. Completely lonely.
Then I don't remember the order. I think Stephanie in 11th grade for a while. And then before that, fall in love with Colleen. But Colleen's not having it. Colleen wasn't having it. Oh, okay. Okay. God bless her. So she was in the 10th grade round? 10th till forever. But her older brother, we've gotten totally off track. So he had a 1988 Mustang GT. Whoa. White. Oh. White. Titanium white. Wow.
Five-speed stick, convertible, the very 5.0 with the ragtop down so my hair can blow, circa Vanilla Ice. That's cool. And he was gorgeous. His whole wardrobe was Z. Cavariccis and Girbauds. So they were rich, this family? They were loaded. The dad did commercial plumbing for fucking assembly plants. Wow.
Yeah. Good for Colleen. Yeah, yeah, yeah, yeah. I have since heard updates. I don't want to spread any disinformation, but Johnny has taken over that company and I think it's fucking enormous. And what I love about his journey is he was a total fuck-up.
He was just gorgeous and a good dancer. You know, really cool car. I thought he was the greatest. He's probably two or three years older than me. Yeah. Anyway, so that car for me, I was like, man, if I had that white Mustang, I'd be just like Johnny O'Neill. Oh, wow. I'm surprised you didn't get one. That's my whole point. So I have been longing for that car. By the way, they're not that expensive. Sure. It's not like I want a Mercedes or anything. It's an 80s Mustang. It's the Fox body. Yes. The Fox body Mustang.
But I have been longing for it, wanting it. Last night, driving for 20 minutes, completely distracted by the car in front of me. Is that a stick? Is it this? Did you keep the factory wheels on it? Does he have the fan wheels on it? All this stuff was happening. And then me going like, why don't I have that car? I can afford that car. I should have that car. We should all be driving in that convertible car right now, home from this dinner. Wow. But I'm like, it's almost more fun to want the car. I know when I have the car, it'll be another car whose battery is dying and the air pressure is always wrong because I don't drive it enough, and all this stuff. But my fantasy of it is really fun: cruising around like Johnny O'Neill and being a great dancer and running a huge HVAC company. I mean, what was the first one? Gorgeous. Is that what I said first? Maybe. I don't remember. But I mean, you have achieved all those things. I do own an HVAC company. Most people in the audience don't know that either.
You know what I mean? Yes, yes, yes.
So you don't need to be him. You became him. Well, that is the great joke of life: none of the things that you think are going to heal the wounds that you have do. And so, yes, I can intellectually acknowledge, like, I became Johnny O'Neill in so many ways. Right. But that car will have the magic it had for my whole life, which is awesome. The GT bicycle will have the magic too. The horror will. Like, all the things I wanted that I couldn't have will always have their little magic. It's kind of neat. It is neat. Well, then I agree you shouldn't get it. And I don't like convertibles.
Noted. Do you understand? Don't ever buy me a convertible unless it's an '87 to '90 Mustang. Oh my God. So many words. Do you know how long I listened about mugs with a smile on my face? But I don't care about mugs. I care about mugs as much as you care about cars. Listen, do you think that I talk about mugs as much as you talk about cars, for real? I think you talk about fashion and clothes and bags and The Row and mugs
collectively much more than I talk about cars. Okay. That might be true. I don't know. We'd have to do a... Rob, go through everything. I'll have that calculation in a couple minutes. Ask AI to tell us. I guess what's esoteric is subjective.
Oh, for sure. Because I don't know anything about these bags. Like, for you, the reason one bag is better than the other. Or we were driving on the way to the event last night, and from like 200 yards, you're like, I have that same purse the woman has. Yeah. That purse to me. Yeah. You know, I wouldn't be able to delineate it from any other purse. Sure. I get it. I could describe it as medium, small, or large. Yeah. I wouldn't have expected you to.
Like I would. Oh, right. Yeah. I'm just saying it's esoteric for sure. Yeah, it's esoteric. From my point of view. I think they're the same. Yeah, I get that. They're probably the same. Yeah. So this is for Max Bennett. He was great. He looked so much like my friend Max. Yeah.
To me. Callie's Max. Yes, Callie's husband, Max. To me, they looked very similar. And they both really like sci-fi. And they're both tall. And they're both named Max. And I thought that was some sort of rip in the space-time continuum. It was a sim glitch.
Yes, and in your and Callie's Max's defense, 'cause I think he'd be really pissed if I didn't point this out. Okay. They did diverge on their thighs.
Because Max, your Max, Callie's Max has world-class tree trunk thighs. That's nice of you to say. And Max Bennett's were great. Yeah. There was nothing wrong with them. No. But they were in no way the once-in-a-lifetime thighs of your Max and Callie's Max. Well, that's very kind. I believe Max's parents listen to this, so good job.
Really great station. Really great. I've been getting some legitimate complaints that they're not hearing, hey, y'all, really great station enough. And that's... thank you for calling us out on that, because that is an oversight and it should be said once every third episode. It needs to make its way back. Yes. Into the rotation. Hey, y'all, really great station. Do you think that should slide down this way so your feet can fit under it and it's not right on? Um.
Yeah, let me see how that looks and feels. Let's see how that goes. That would be good. What, does it affect your feet though? No, I can chill with this. Okay, yeah, that's better, I think. Yeah, yeah, that's great. I'm going to go a little bit this way if that's okay. There it is. Great. For anyone that's viewing this, this is a new purchase. I went out on a limb. I'm not generally, in this trifecta of creatives, allowed to make stylistic choices. I've gotten, kind of famously with the La-Z-Boys, a lot of criticism.
I just, despite all that, I'm like, I'm still going to take a big swing on a coffee table. Yeah, you wanted to get a new coffee table. Yes, I didn't like, that one was too wide, although this one isn't much narrower. Yeah, it looks similar. But I also wanted something dark in the middle, dark in the middle. I know, that's our difference. Yeah. That's our difference. This rug is too white for me. So when you looked at, in my opinion, the wide one, and you had this very light rug with that really white coffee table, it was too white for me. It was a washout for me. I wanted some dimension. Got it. Yeah. Does that make sense? It does make sense. You like a lighter coffee table. I don't like it when it's all dark, and the space is on the darker side, in a good way. We have dark green and it's very library-esque. So yeah, I like a light
mix into that, but I think it's great. And I like the length. The length is good. It was a little short, that coffee table. That's a fun gender thing. I wish that maybe Kat would have brought some of that into it, because most of the wives I know want a white everything. They want a white kitchen and they want a white couch and they want white, white, white, white. I don't know. It's interesting. Yeah. And guys are like, I want a den and I want dark and I want dark wood. Why would men be more drawn to darker colors and women more drawn to lighter colors? That's fascinating. It is interesting. There must be something, like, cave-like, evolutionary. Maybe. Yeah, I don't know. To be clear, though, I don't want all white. That's not for me. I like color. My new kitchen is not white. In fact, it's quite dark. It is. Yes. But I need pops of light.
And I need like a lot of light coming in. Yep. When the guy has his dream space, it's like a dungeon. Yeah. With a dark bar and a dark everything and dark leather everything. It's so...
It is. I mean, I guess the stereotype is like it needs to be dark to play your video games. But I don't know how evolutionary. I don't play video games, though. I know. I know. But yeah, my favorite room in the house is definitely the theater room. Yeah. You like watching stuff. I like being in a dark cave. Minimal stimulation. Yeah.
Yeah. Yeah. It's soothing. Yeah. You're also sending mixed messages. You like your cave and you like minimal stimulation, but you also, as you said last week, thrive on chaos, extroversion, people, and stimuli, those types of things. So great. Yeah. We might be on to something. Okay, let's figure it out. So I think because in my external, presenting, real life I generally seek out stimuli and chaos, when I'm doing my chill thing, I want none of that. And then I think, conversely, for like
Kristen, who doesn't want to go out and be super extroverted and social and get overstimulated. Yeah. In her little world where she's in the house a lot, she does then need it there. Sure. That makes sense. Yeah. You didn't see when you walked in, but Alex Reed from Bill Gates' team sent us a beautiful photograph of our time in India. Oh, wonderful. Yeah, which is nice. We'll have to figure out where to put it. We got some blank space over there. Yeah, we'll find a spot. Yeah, and we also got to get Chris Lydon up.
That's true. Don't make room for... I got it though this time. Yeah, that was good. Only because of Jimmy Lydon. Johnny Lydon. Sorry, Max Bennett. Okay, Max. So how many atoms in the universe? According to scientific estimates, there are approximately... 10 to the power of 10? 10 to the power of 78 to 10 to the power of 82. So that translates to a number between 10 quadrillion...
Vigintillion. Wait, 10 quadrillion ventillion and 100,000 quadrillion ventillion atoms. When do you think we'll have our first ventillionaire? Yeah. Ventillion, I don't think I've ever heard that word. I might not be saying it right. It's V-I-G-I-N-T-I-L-L-I-O-N. Vignitillion? Yeah, the G might be pronounced. I'm not sure, but I would
guess it's not. And what was chess at? 10 to the power of 120? Say it again, sorry. I think chess was 10 to the power of 120, as opposed to 10 to the power of 78 or 10 to the power of 60. I think it's 10 to the power of 120. Chess outcomes? Permutations? There is... This says... It's in my notes from the episode. Yeah. The number of possible variations in chess is so large that it is estimated to be between... This just says...
Too many? I say 10 to the power of 120. Yeah, okay. But what is this? Are you on National Museums Liverpool? What is this? That makes no sense. Right, exactly. But it's just, which is more than the number of atoms in the observable universe.
The number is known as the Shannon number. Ooh. How did Shannon get her name on that number? That's cool for her. Because I don't recall, like, there's not a famous grandmaster named Shannon. A mathematician, Claude Shannon. Claude Shannon. Look, I'm just going to say the AI overview is a little lacking there. That was not right. Yeah, it didn't understand something there. Yeah. Ding, ding, ding. AI. Ding, ding, ding. Human intelligence. A brief history. Okay.
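For anyone doing the math at home (a quick sketch, assuming the standard short-scale number names, where a quadrillion is $10^{15}$ and a vigintillion is $10^{63}$), the named figures work out to

$$10 \times 10^{15} \times 10^{63} = 10^{79} \qquad \text{and} \qquad 100{,}000 \times 10^{15} \times 10^{63} = 10^{83},$$

which only roughly brackets the quoted range of $10^{78}$ to $10^{82}$ atoms, another hint that the AI overview was a little garbled.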
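And on the Shannon number: Claude Shannon's original back-of-the-envelope estimate assumed roughly $10^{3}$ possible continuations per pair of moves over a typical 40-move game, giving

$$\left(10^{3}\right)^{40} = 10^{120},$$

which is indeed far more than the roughly $10^{80}$ atoms in the observable universe.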
Okay. Now, chimps. So you said they had like a hundred-word vocabulary, kind of like a dictionary sort of. Uh-huh. Calls. Mm-hmm. So it says they howl, they squeak, they bark. Sometimes they scream, then grunt, then bark, and then scream all in a row. And they captured 900 hours of primate sounds. It says, as it turns out, chimps are particularly fond of a few combinations. Hoot, pant, grunt. Hoot, pant, hoot. And pant, hoot, pant, scream. Mm.
Sometimes they combined two separate units into much longer phrases. Two-thirds of the primates were heard belting out five-part cries. By combining the units, the chimps had roughly 400 calls in their vocabulary. When we did the gorilla trek, you're with some, what we would call here, like DNR people, like state rangers, Rwandan state rangers, and they speak gorilla.
So as they're coming closer, they're kind of telling them it's cool. They're telling them to back up. And they do the pant and the hoot and the grunt and they fucking speak gorilla. And it's so impressive. That's so cool. Yeah. And they're talking a lot to them. Right. Yeah. They're really communicating with them. God, though. Is this like Chimp Crazy, where it's like everyone feels
that they know each other so well and it's a trusting environment, and then the gorilla just will attack? Well, gorillas aren't like chimps. They don't do that kind of thing. Okay. Because of their structure.
Right. Oh, their testicles. The way they're arranged, they're a harem group. So there's one silverback. He has access to all the females. The other males that live there won't get to be a silverback. Now, weirdly, in the Susa group we were looking at, there were two other silverbacks, but that's because they were the brothers of the main silverback and they'd already gotten their silver in another troop and then rejoined. But regardless...
They're not having to fight nonstop. They kind of do it once; they have their tenure, then they get overthrown. Whereas the chimps are fucking... they're hunting other monkeys. They're fighting leopards. They're fighting other troops. The gorilla troops rarely interact and fight each other. Just the lifestyle of the chimp, they have to be so much more aggressive and wild. Yeah. And scary as fuck.
Yeah. I mean, it's the difference between having a cow as a pet and a tiger. Like they have different instincts, they do different things in the wild, and they make better or worse pets.
Yeah. But I still wouldn't recommend getting a gorilla as a pet. No, because think how much damage the chimp at 200 pounds did, Travis. Yeah. And a male silverback is 450 pounds. Jesus Christ. 450. As you recall, when the one came at me and shoulder checked me, I felt like I was in a movie with giants, you know.
Oh, it was something. And it's not like when we see a 450-pound man, where it's mostly collected around their abdomen. The 450 pounds is in their shoulders, chest, their biceps, and their lats. So when they're coming at you, you're seeing hundreds of pounds of muscle. They've never attacked? They do often. Okay, so 50 percent of the time that they interact with a group, the silverback
will grab one of the humans. Okay. But it grabs one of the humans and it drags them out of the circle and drops them. They don't ever bite them. They don't ever tear at them. It's never happened? According to the rangers. Really? Otherwise, I don't think they could bring groups of people up in there. Like, we also went on a chimp trek in Rwanda. Uh-huh. And you can't go interact with them. Right. You can't get close to them.
You're hoping to hear them. Maybe you're hoping to spot them. By the way, on that trek, I was a mess. I didn't like it. Yeah. Because I know what chimps can do. Yeah. But yeah, the silverbacks won't... Now, if you were to run in and grab one of the babies... Right. It would kill you. But if you're just sitting there observing, it'll just come demonstrate, I'm in charge here. They drag you out. They drop you. And then that's it. And they said...
It's going to happen to half the groups we bring up. So if it's you, here's what you do. You just go limp, you submit and he'll just drag you, let you go. And everyone's wearing a backpack, right? Cause you have your water and shit. And so they grab you by the backpack. They generally don't even grab you by like an arm or anything. That's so scary.
It's so scary. You love it. But I'm much more afraid of a 160-pound chimp than I am a 450-pound gorilla. Crazy. And we've seen this footage where babies fall into the gorilla enclosure and like the silverback go and protect the human child. Yeah. But we're not babies. No. But they have a much sweeter side. Yeah. I guess maybe if they grab me, I would say, goo-goo-ga-ga. That's a really good strategy. I'm trying to pretend like I was a baby.
Ooh, gaga. What if I was doing that? Like, what's going on with this human? We should kill it. It's acting very weird. What if he grabbed me and my training took over? You never know when your training's going to take over. No, no, no, no. If I try to wrestle. No, no, no. Oh, my God.
You know, I do have weird desires of, like, testing myself against other animals. Yeah, I know you do. It really all started when working at Chosen Shoots. Maybe I've told you this in the past or not, but I worked with a guy whose dad had wild animals up north in Michigan, and they had a cougar. And he was regularly talking about how scary this cougar was. I said, the cougar is only 110 pounds. I would, for the right amount of money. If everyone here at Chosen Shoots was willing to pay like 20 bucks, I will get in full motorcycle leathers and a helmet and something to protect my neck. Oh, my God. And I'll get in the cage and wrestle the thing.
Because as long as I'm protecting myself from, like, punctures, I can overpower 105 pounds. I could lift it up and I could do things. And I was really inching towards maybe that happening. It just didn't. But certainly on a drunk night, that could have happened. Jesus. But I think I draw the line at the cougar.
110 pounds. Okay. Well, no, you've been wanting to fight a mountain lion. Well, that's a cougar, yeah. Oh, they're the same thing. Yeah. Puma, cougar, mountain lion are all the same thing. Huh. I thought pumas were black. There are black pumas. In fact, I
I just read, was it in our guy's book? I think it is in our guy's book. Okay. Which one? Max's? No, our upcoming guest. Okay. They discovered this insane thing about cheetahs. Okay. So the cheetah population was dwindling dramatically.
And at some point in the 80s or whatever, I don't know what year it was, they decided, well, there was a general movement in zoology to go like, let's stop taking animals from the wild and start breeding programs at zoos so we don't have to take them from the wild. So great, I stand by that. And they were having an impossible time breeding cheetahs. And they couldn't figure out why they couldn't get any of these cheetahs to procreate. And then they discovered at some point, they did a bunch of
Mm-hmm.
Your body will reject it. If I take some of your skin, it'll reject it immediately, it'll get necrotic. The only people that can really do skin grafts are identical twins.
Because they have the same DNA and the body doesn't recognize that it's foreign. Wow. They can skin-graft any cheetah to any cheetah. Oh, weird. And they tested, well, maybe cheetahs just accept skin grafts in general. No, a cheetah rejects anything else, like a house cat rejects anything. But you can skin-graft any cheetah to any cheetah. There's no diversity. Right. So two things.
When they would procreate, it would reject nonstop, the way our body does. In our bodies, 50% of all pregnancies self-terminate. Half. Because half the time it observes some genetic anomaly, or there's nondisjunction, all these things happen. Well, what was happening with the cheetahs is they're aborting almost all of them, basically, because they're detecting these genetic abnormalities from the inbreeding. And then what really sucked about it is that...
In this population, they finally gained some ground with breeding them, and then
they caught a coronavirus and they all died. Whoa. Because they have the same genetics. So if it's gonna be lethal to one of them, it's gonna be lethal to all of them. Yeah, interesting. So they're so vulnerable because of the lack of diversity. Ding, ding, ding, ding, ding. We love diversity. Diversity. Yeah, interesting. This episode was brought, no, I don't say welcome, I say, we are supported by diversity. Yeah.
Oh, gosh. Yeah. Wow. Yeah. Oh, pumas. Yeah. So the Florida panther, which is a puma.
Which is black. Can be black. So panthers, pumas. Yep. Coyotes. Panthers, pumas, mountain lions. No, not coyotes. Mountain lions. Cougars. And cougars are all the same. What? Species. Species. But they're different, right? Well, they can be black. They can have spots. And that's the only difference. In the same way you can be brown and I can be white. We're the same species. Pfft.
I'd say there's more variation in the human species than there is between these pumas. Huh. Okay. So the Florida panther was getting incredibly rare and also getting very, very inbred. And so they wanted to protect the... Okay. Hmm. They brought in some Texas pumas to give some variety in the genes. Uh-huh. And there was a bit of an, I guess, intellectual or moral quandary. It's like, well, if we're trying to protect this specific animal, but the way we protect it is to make it a different animal, is that... Yeah, interesting. What do we do? You know, they did it, though. And it helped. That's good. That's also diversity. Yes. Yes. And... Texas meets Florida. And interracial marriage. Yeah. Yeah.
Yeah. We're pro it. Yeah, absolutely. We almost prefer it. Yeah. Mixed people are way better looking than non-mixed people. Beautiful, beautiful people. Look at Calvin and Vinny, my God. Jesus, Pete. So did they call that a different kind of panther? No, they're calling it the Florida panther still. Oh, they still are. Okay. But we all know. Wink, wink. It's a bit of Texas puma. Oh, that's true.
Maybe white nationalists now reject the panther, you know, because it's not pure anymore. Yeah, probably. Yeah. That's the funny thing about this desire for purity. It's really a desire for vulnerability. Sure. Right? It is. It's making yourself very vulnerable. Uh-huh. Counterintuitive and dumb. Mm-hmm. Yeah. As much of the white nationalist agenda is. Well, true. Yeah. They're consistently...
That is very, very true. Do you think we have any white nationalist listeners? Oh, my God. We do, an armchair. I got to applaud them if they can get through this show with such a radically different mindset than ours. I almost have to applaud the open-mindedness. You're hesitant to applaud white nationalists. I don't think I can allow us to, ever. Just one aspect of them. Like, here's a great example. But they could hate-watch it. Like, and I don't want that. Well, if they're hate-watching it, then I don't applaud it. Right.
But if they're like, I totally disagree with this, but I'm open to hearing what they have to say, I am going to applaud just that sliver of their personality. Kind of like... this happened. This is Brené Brown's great example, during the hurricane in Houston. Yeah. Her great example is that when the people in boats showed up to the people drowning, no one said, what are your politics? Sure.
Nor did they ask, before they rescued somebody, what are your politics? And so that's a very beautiful part. Yeah. And we can just celebrate the heroes who rescued people. Yeah. We have room to do that and then also hate maybe other aspects of them. Sure. Yeah. But I don't, that's not the same. The white nationalists aren't heroes because they listen to us. No, but they're demonstrating an open-mindedness that is impressive. Yeah.
Sure. Yeah. I mean, I hear you, but their whole being is not open-mindedness. Right.
Yeah. So if they're taking this radical step. Well, maybe they like you. You're a tall, white, Aryan guy. I'm a good post. I'm a good master. Yeah. I don't know that we're... Well, I am. I guess it's like, if they listen to the fact check. Yeah. And I guess it's somewhat interesting, but also they probably want to kill me. How about this? But they definitely want me to be...
Let's go. Eradicated. Let's go broader, because I think this is a pretty agreed-upon parenting technique. It's like, you can spend all your energy focusing on the things you don't like that they're doing and have a very negative approach, or you can just positively reinforce and celebrate the things they're doing that you like. I think it's in that doc that we loved, of that kind man who sat and talked to people.
I think that's what he did. Yeah, I did that my whole life. I had to do that my whole life. There's a lot of people, a lot of people I love, a lot of people I love's parents, who I still love.
But a lifetime of practice of turning a blind eye to things that actively are against my best interests that are hateful and mean. Yeah. I had to find the good in order to survive there. Yeah. And it's exhausting to live a life like that. I'm sure. I can only imagine. So, like, I guess at this stage, I don't have to do that anymore. Yeah, yeah. And I don't.
I'm kind of tired of it. You know, I think I put my time in on that. But here's the other, this is the other truth: those people in my life, they still are, I still love them. Yes. And your involvement helped
bring them closer towards where you wish. I have to say, I don't think so. You don't think so. Especially, like, seeing the trajectory over time and seeing where those people are now and what they're all about doing. I don't think so, which is a bummer. It does bum me out.
So is your take throwing in the towel? Like, it's just not worth it, those people can't change, you know, there's no... Well, I definitely don't think I can make anyone change, ever. And I think that's like a lesson learned across the board for me, over and over and over again. And I think that's a good lesson. Like, you're not going to change anyone. Right. So yeah, those are their beliefs and I'm not here to change them. Yeah. And so the people that are already in my life, I
love deeply, right? And like, that's not changing. But I don't know that I'm interested at this stage in life in adding more people in who I need to do that cycle with. It's not... Brené again, she said, yeah, maybe it's not your job to set the table for that. Totally agree.
I certainly have the capacity to do that. I haven't been exhausted, but I haven't had to do that. Yeah. And then also, I've watched a lot of those documentaries now. I've yet to see the one where I thought I was watching the white nationalist person and was like, yeah, that person's really smart and they're really educated and they had a lot of opportunity and they were really loved. Sure. Yeah.
And so if they have all those things and they're deciding this clearly, well, then I'm like, fuck them, you know? Pretty often, I think, as much as I deplore that group and their thoughts, I also think, yeah, they're probably a victim of their circumstances. Of course they are. Well, not all.
By the way, a lot of the people that I'm talking about are rich, entitled white people. Yeah, yeah, yeah. I'm talking about these weird white nationalists I see in these documentaries. Sure, sure. Or the fucking whack jobs who tried to kidnap the governor of Michigan. I fully agree with you. I'm like, they are a product of their background and their history, but even more so, that's why I'm like,
I'm not going to be the person to come in and change it. Yeah, yeah. I'm just not. Yeah. It's not their fault. I still, you know, I do stand by that it's not your fault, but it is your responsibility. Right, right, right, right. And I think the stories I find most inspiring, it's the episode of Blaine, it's the Jewish gentleman who brought Megan Phelps-Roper Jewish food on her hate line. Yeah. Like, if I talk about who...
I would most aspire to be and don't even think I have the capacity to be. It is those people. It's the ones that I'm like, wow, how did you find it in your heart to be that generous to the enemy?
Yeah. That's, to me, the high-water mark of what a human's capable of. Or Blaine: this man murdered my daughter, and I'm still going to develop this friendship with him in prison. Yeah. That's who I would pray to be. Sure. You know, it's kind of aspirational. Yeah, it is. Well, there weren't obviously very many facts. He knows his facts. He knows them all. He knows every fact.
So there were many. Is there one more you want to go to? No, there is something I wrote down, but I'm not going to say it. Okay. Because things are too contentious? Well, no, it's just, it's too esoteric. Okay. Okay. It's about The Row. Oh, I don't care. I don't have a problem listening to your stuff. I enjoy it. Sure. It just, it is esoteric. And so people...
People might not want to hear it, but there's something interesting. Oh, no. Everyone will like to hear about a billion dollars. The money. This is what makes it not esoteric. Okay, so it's an American business success story. Yes. The Row, Mary-Kate and Ashley's brand, has been valued at a billion dollars.
Congratulations, Gail. A huge deal. Congratulations. Women be crushing this year. I know. I love it. You got Alex Cooper. Yes. You got Tay-Tay. Yeah. Beyoncé. You got Beyoncé. You got Barbie. Now you got The Row. There's so many. Like, all the Pop Girl Summer, all the awesome women. They're crushing. It's so cool. I have a personal stake in this. Like, I...
You tell me that. And like, I go back to being in her kitchen in New York while she was meeting with her first round of different artists to start working on stuff. Like, it's fun for me because I saw it in its very infancy. And it was a very bold move to go from an enormous clothing line at Walmart. Yeah.
To, no, I'm going to do the, like, highest-end premium. Yeah. Like, what a bold swing. Well, they had some steps in between. They had, like, Elizabeth and James, that was like, maybe...
Maybe like a rung below. Yeah. But still like fancier. Actually. It'd be like Hulk Hogan going, I think I'm going to win an Academy Award in 10 years. Yeah. I mean, they are so impressive. And then he does it. Yeah, it's awesome. Such awesome business people. And they're also so elusive. Like they don't talk to anyone, which I really like.
They're very, very protective of the brand. They still own a majority stake. Yeah. They're just really protective of it. And I like that. Yeah. They're cool. Wow. I wonder if this is the only time you would ever like cars. Ooh. You already know this. Sometimes I like cars. Yeah. But this is one of the things I thought was coolest about her. Oh, yeah.
Is that she had, like, a 2002 Cadillac DTS, black. And then she had a G-Wagon AMG. And I was like, dang, this gal knows. She's got good car style. She's got good style in general. She knows her shit. Really, really cool. Okay, bye. Love you.
Follow Armchair Expert on the Wondery app, Amazon Music, or wherever you get your podcasts. You can listen to every episode of Armchair Expert early and ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts. Before you go, tell us about yourself by completing a short survey at wondery.com slash survey.
In a quiet suburb, a community is shattered by the death of a beloved wife and mother. But this tragic loss of life quickly turns into something even darker. Her husband had tried to hire a hitman on the dark web to kill her, and she wasn't the only target. Because buried in the depths of the internet is The Kill List, a cache of chilling documents containing names, photos, addresses, and specific instructions for people's murders.
This podcast is the true story of how I ended up in a race against time to warn those whose lives were in danger. And it turns out convincing a total stranger someone wants them dead is not easy. Follow Kill List on the Wondery app or wherever you get your podcasts. You can listen to Kill List and more Exhibit C True Crime shows like Morbid early and ad-free right now by joining Wondery Plus. Check out Exhibit C in the Wondery app for all your true crime listening.