Hello everyone, I'm Stephen West. This is Philosophize This. The website is philosophizethis.org. Thanks for supporting the show on Patreon. Thanks for contributing to the back catalog of the show, 179 episodes now. Pretty exciting. And for 178 of those episodes, this show has been talking about the history of philosophical ideas that have gotten us up to this point. And it's a good thing we spent that long, I think. I think to be able to understand the world you live in, it helps to understand the history that it emerged out of.
And for any of you that have listened to all 178 episodes we've done so far, you're going to have a pretty big advantage in this next thought experiment the show's going to be running, because maybe it's time on this show that we start applying all this philosophical education we've gotten to the real-life contemporary philosophical debates that are going on right now. I alluded last time to some upcoming episodes on the philosophy surrounding artificial intelligence. And while we'll certainly be talking, even today, about artificial intelligence as part of our examples...
There's a sense in which, in my opinion, understanding the full context of why people are talking about AI so much these days comes down to first understanding a lot of ancillary conversations that are going on in the philosophy of mind right now. Things like free will and determinism. Things like the problem of identity. The problem of intentionality. Not to mention one of the most mysterious questions in the history of philosophy, the one we're going to be talking about today. Talking about the questions that surround what's become known as the hard problem of consciousness.
These are, hands down, some of the biggest questions that are facing modern people. And by the end of this little arc of the podcast, I can't promise you that you'll have definitive answers to all these questions. But what I can promise, I hope, is that I'll try my hardest to equip you as a listener with a better understanding of the state of these philosophical debates than 99% of the people walking the face of the planet. And I gotta be honest here at the start of all this: for a long time, I didn't really see the value in talking about stuff like this.
Hopefully there's at least somebody out there that can relate to how I used to feel. The thinking was...
Who really sits around and thinks about unanswered questions in the philosophy of mind? How does consciousness arise? Are we free or do we just seem to be free? How does my mind relate to objects that are in the world? Like I used to think if you're really sitting around thinking about this stuff, first thing you got to do is write a thank you card. Write a thank you card to the universe or God or whatever it is you believe in, thanking them for the fact that you got no real problems to deal with in your life.
I mean, I used to think all this stuff is ultimately just unverifiable speculation. None of these arguments are settled issues by any means. So you can actually spend your entire life talking about these things for your intellectual amusement with your "intellectual friends" and never really get anywhere.
Just seemed pretty self-indulgent to me. I don't know. I mean, on the surface, these seem like the kind of conversations we've seen all throughout the history of philosophy, where there's a lot of brilliant thinkers positioned on all sides of an ongoing philosophical discussion. And then most of those people that spend their time talking about it end up being totally wrong, usually because of something they never could have seen coming anyway. But eventually my curiosity got the better of me. I had to ask the question, why are so many people talking about this stuff right now?
Why are so many brilliant people dedicating huge portions of the best years of their career trying to find answers to these things? Is it just for their intellectual amusement? Is it just so they can drink cheap wine from Trader Joe's with their neckbeard friends and feel smart for a while?
Well, clearly not. Clearly there's another way to be thinking about these conversations going on in the philosophy of mind. And I eventually realized what that is: every further conversation we have about anything that matters to us as people ultimately emerges out of the assumptions we're making about the nature of consciousness, and then out of the understanding of what a human mind is that follows from those assumptions. For example, any conversation about morality, even at the most basic hedonistic level of avoiding pain and seeking pleasure,
Even that is grounded on us maximizing certain subjective conscious experiences of the world and moving away from others. This extends to any conversation about relationships. Relationships, you could make the case, is just talking about the details of how two or more conscious people are interacting. Politics is just a strategy of how conscious beings try to get what they want in relation to other conscious beings. What I'm saying is, whether these questions have answers that we've settled on or not, and whether you realize that you're doing it or not,
You are bringing assumptions about the nature of consciousness to bear that affect your thoughts on everything. And that's part of what I want to do on this series. I want to talk about why these conversations about the nature of consciousness are important. Why something that's seemingly so theoretical actually goes on to have huge impacts on real people all around you. How it affects not just your own personal moral policy that you live your life by, but our political policies as well.
I'll give examples of alternative timelines. What would have happened if society adopted a different set of precepts about the nature of consciousness? How would that have potentially changed things? How would the world look today? And I'll do it while offering up as many of these different theories being discussed today as I can, so you can not only be more self-aware of where you fall in the discussion, but hopefully by the end of this, you'll be able to understand other people's positions better as well. Now let's get into it.
And if I've convinced you at all that learning more about these conversations in the philosophy of mind is important, one of the first questions you got to be asking as a modern person after hearing that, certainly a question I was asking years ago, is if I want to know more about what consciousness is, why wouldn't I just study science and the brain? Why are philosophers weighing in on this stuff at all? Just look at the last hundred years. We've learned so much about the brain just through advancements in the area of neuroscience.
I mean, in terms of understanding how brain states are connected to mental states, we've come so far that it's not surprising people out there would think that there's no end to that progress in sight. That if we just keep running these experiments, if we keep learning as much as we can about the physical neurochemical makeup of the brain, that we'll eventually be able to understand everything about subjective experience.
But then again, there's also plenty of examples throughout the history of philosophy of people that thought that studying things empirically was eventually going to lead to a total understanding of it, only to be disappointed by how much other modes of analysis factor into understanding something fully.
For example, psychology or linguistics or sociology. There's a type of conceptual analysis that philosophers do that's just outside the purview of science. Which is to say that the way we conceptually organize things oftentimes precedes the scientist doing their work. It gives them the assumptions they have to use when doing their work.
Classic example of this, just so you can understand what I'm talking about: the philosopher John Locke. He describes matter in the physical world as having both primary and secondary qualities. He says objects have primary qualities, those are things like size, shape, mass, density, but then they also have secondary qualities, things like color, sound, smell, or taste. Now that is a way that philosophers chop up and conceptually analyze the world prior to any actual experiments that may be done by a scientist.
In other words, some people in these discussions going on today think that it may be the case that the reason these conversations about consciousness are so mysterious to us is because there's something wrong about the way we're breaking down reality at the root level of concepts, and that if only we shifted something at that fundamental level, everything else would start to make a lot more sense.
This is why philosophers and scientists have to work together on this stuff these days. Philosophers and scientists need each other. Philosophers rethink reality at a conceptual level, and then scientists run brilliant experiments to get to the actual empirical data. But while scientists can and often have to compartmentalize themselves into their specialized field to be able to do their work,
Philosophers can take a step back, and they have the luxury of looking at all the discoveries going on in psychology or linguistics or neuroscience, and they can try to come up with a theory as to how all these different fields link together. Science tells you what the world is. Philosophy tells you how to interpret it. Put another way, no matter how brilliant of a neuroscientist you may be, you will still always have to be doing philosophy to be able to interpret the data that you're gathering. Now, we'll talk about many examples of all of these.
But not before we get some clarity on what may be the most cringe question of them all. I mean, if you think these discussions are cringe sometimes because there's no clear solution to arrive at, then the most cringe lord question of all of them is this. What is consciousness? What is it?
You can spend the rest of your life thinking about that question and really not get much of anywhere. And it wouldn't be your fault. It truly is a modern mystery. It's actually kind of exciting. And maybe somebody smart listening to this will be the one to solve it one day, but philosophers and scientists so far are nowhere near a clear definition on it. One thing they do agree on, though, most of the time that's valuable for you to know as someone I'm trying to equip with tools in this series, is that they seem to at least agree on which conversations we're currently having about it.
We may not know what consciousness is, but we do know what we're talking about. We're talking about a certain kind of subjective experience that we all seem to have that is distinct from other things going on in your mind right now that are usually presumed to be going on at a lower level of experience, whatever that means.
People sometimes talk about these two different levels of consciousness as access consciousness on the one hand versus phenomenal consciousness on the other. So access consciousness is going to be that lower one, a term first used by the philosopher Ned Block.
Access consciousness is made up of the entire process we're all very familiar with, of the fact that you are a human mind that's living in a universe. You're taking in this external stimuli from the world around you all the time, and there's some complex process that's going on. You're taking in these phenomena, you're forming them into perceptions, you're forming those perceptions into memories, you're directing your attention in one place or another that's important to you.
These things and more are all a part of what some people call access consciousness. It is the area of our conscious experience that allows us to access information from the external world that is then used by our cognitive systems. And again, neuroscience has studied all of those things I just mentioned. And neuroscientists are pretty great at being able to point to correlations between states of the brain and those mental processes.
They've obviously identified the specific parts of the brain that deal with memory, that deal with perceptions, that deal with attention, and all that's fantastic. But there still seems to be something else to our conscious experiences of reality that lies outside of this access consciousness. And it seems to be something that neuroscience hasn't quite figured out yet. And that is, well, one way to put it, is that it feels like something to be me.
that I have a subjective experience that seems distinct from anything else that's going on in my brain. For example, the way scientists and philosophers will often talk about it in these conversations, they'll ask, "What does it feel like to see the redness of an apple?" That's one that these people like to use a lot. Can you describe that? Picture trying to describe what it's like to see the color red to somebody that's never seen color before, or to describe what chocolate tastes like to someone that's never tasted chocolate. How do you do that?
Well, the more you think about it, the more it starts to become a pretty tricky problem. Because in one sense, it doesn't really seem like something you can just describe to someone with words. It's something you have to experience. And then in another sense, if you wanted to try to explain it in purely scientific terms, you know, if you wanted to try to break it down and understand all the components of what's going on at the neurochemical level, and then look at the atoms that make up the apple or something,
Well, the atoms that make up the apple are not red. It has something to do with the way your conscious experience filters reality that makes it look red to you. And then it's a totally different thing, entirely beyond whatever mental filtration system you got going on, to then have a subjective experience of redness that's on top of that.
Where part of it all is that there's a unified stream of you being a continuous self with continuity to time, continuity to your identity. Where the billions of phenomena that come into your awareness are presented to you not in their full complexity, but in a digested format that you seemingly are able to organize. What is that? Why do we even have something like that? Some philosophers call this phenomenal consciousness.
Some call it subjective experience, some call these subjective experiences qualia. That's the common word philosophers will use, and some say that these qualia can never be reduced to purely physical states of the brain. The implication, when you say that, is that to some philosophers, no matter how advanced neuroscience ever gets, it will never be able to find the "neural correlates" of consciousness, as they say. Or it'll never be able to find the specific states of the brain that give rise to these subjective experiences.
There are many different reasons philosophers give for this. It could be that consciousness is something fundamentally different from the material world, that it exists on a level similar to gravity and space-time, and therefore may not be something we can even study empirically. That theory starts to run into other problems, as we'll see. It could be that consciousness is an illusion, that it only seems to us like there's this command center up in our heads where we exist because it's biologically useful for it to be there. That starts to run into other problems too.
This whole exercise of trying to explain how we have subjective experiences that are not themselves physical, but that seem to arise from purely physical states of matter in the brain. The more you think about it, the harder a problem you realize it is to solve. That's why it's often called the hard problem of consciousness, a term coined by a guy named David Chalmers back in 1995. He went on to write one of the books basically everybody's going to be referencing in modern conversations about consciousness. It's called The Conscious Mind.
The reason the hard problem of consciousness is a particularly hard problem is because even if you do come up with an explanation for how conscious experiences are possible, it always just creates different problems in other areas. One example of this from the history of philosophy is when Descartes tries to solve a similar problem back in the 1600s.
Now, Descartes is trying to think about the nature of knowledge during his time, not the nature of consciousness in the sense modern people are discussing it. But he runs into a similar difficult problem that leads to some of the issues we're having today. The problem in his time was how can you explain the connection between the mind, which is clearly non-physical to him, and bodies that clearly are physical? How do those two things communicate?
And the way he solves it at the time is by saying that mind and body are obviously two completely different substances. And that solves the problem, right? There doesn't need to be a connection between them. The mind explains subjective experiences without having to make reference to a physical body. Mind and body interact through some kind of Harry Potter level magic that's going on in the pineal gland, he says. What more do you want from an explanation?
Well, problem is, Professor McGonagall isn't actually up in your pineal gland mediating conscious experiences. But if you accepted that answer, as many did, and you ran with the idea that you are essentially a mind that is inhabiting a body somehow, that assumption changes the way you see yourself. It changes the way you see other people. You really can start to see yourself like you are a person, sitting in a movie theater up in your head, looking out at the world through your eyes. This is actually sometimes called the Cartesian theater, after Descartes.
This is a metaphor for how our mind and body interact. And we just finished the episodes on Susan Sontag where she talks about how the metaphors we use go on to have real unintended effects on our thinking down the line in ways we may not immediately realize.
And that is very true when it comes to the assumptions we make about consciousness as well. For example, if everything you think and feel is ultimately up in your mind somewhere and your body is just the vehicle, then it's a much easier leap for people to make metaphysically that they are a soul that is inhabiting a body, or that consciousness is something non-local, part of some larger network of consciousness that exists somewhere else.
In other words, the philosophy you choose can make certain things seem plausible that we really have no reason to assume. And to somebody too desperate to solve the hard problem of consciousness in today's world that uses Cartesian dualism to do so, they might be inviting a lot of stuff into what's reasonable down the line that they don't even realize they're inviting. Now, the good news is, in these conversations about consciousness that are going on today,
Almost nobody starts from a place of Cartesian dualism. But they all start from somewhere. And that's the point. Every one of these runs into problems. And any one of these theories, adopted too quickly, could easily create similar problems for people to the mind-body dualism of Descartes. So we gotta take these theories seriously, and we gotta ask the question, if it's not like Descartes thought, that mind and body are two different substances, then
then where exactly does this type of subjectivity emerge? Where do the physical states of the brain turn into the seemingly unified stream of phenomenal consciousness that we all experience? There's a famous thought experiment in this area to get us thinking about this stuff, commonly known as the Philosophical Zombie Thought Experiment, popularized by that same guy we talked about before, the philosopher David Chalmers.
The thought experiment goes like this. Imagine somebody standing next to you that from the outside appears to be an exact copy of you.
This copy behaves exactly as you'd behave. It reacts to everything exactly how you'd react. But the catch, to David Chalmers, is that this copy is what he calls a zombie. Meaning that despite looking and acting just like you, it doesn't have any sort of internal subjective experiences that go along with its behavior. The zombie has no phenomenal feel, as he says. There are no qualia in the mind of this zombie. In other words, it doesn't feel like anything to be this zombie.
The follow-up question to this is simple. Do you think that the existence of something like this zombie is possible? Is it possible for something to look entirely conscious from an outside perspective, but not actually be feeling anything like we feel in a phenomenal stream of consciousness where it feels like something to be me?
Or would any person, or zombie for that matter, that can have perceptions, think, form memories, direct its attention and all that, would that creature necessarily be conscious simply because there's no other way to be able to do all those things without being conscious? Another question is, could we have evolved without consciousness?
And if this seems like another one of those cringe questions, I can empathize with you. I mean, on one level, this can seem like a totally unanswerable question at this point, so why even waste your time on it? Why waste your time talking about hypothetical zombies that don't actually exist? Uh, yeah, I guess zombies could exist in theory, okay?
And, you know, oh, I get it. I think, how do I know that you're conscious, man? Right. How do I know that I'm not the only conscious person? You know, the classic philosopher drum circle moment where everybody's making a bunch of noise and we're all supposed to dance to it like it means something.
I get that. But on another level, think of how important the answer to this question becomes when we apply this to other potentially conscious minds. Think of the direct moral implications in two areas that we deal with every day, in the conscious experience of animals and the realm of animal rights, or the conscious experience of something like ChatGPT and the realm of artificial intelligence.
In both of those cases, the philosophical zombie of Chalmers starts to make a lot more sense. Because if we don't have an answer to the question of when the type of conscious experience that we have arises,
then we don't know at what point animals or AI need to be given certain moral protections. You know, seemingly, from a moral perspective, what we're trying to protect is that subjective experience of being a thing that is in conscious torment. Something's going on against our will, and we don't like it. We don't want other conscious beings to have to go through it either. It's the state of consciousness that we're ultimately trying to protect there.
That's why nobody feels bad for a Roomba. You know, nobody feels bad for the vacuum cleaner that slaves away in your house all day trying to keep it clean.
But then, as these machines are made to be more and more like people, we always got to be asking the question of at what point does this thing become conscious, because that's when it would ostensibly feel like something to be that thing, and that's also where it can start to feel horrible to be that thing. So it's interesting to examine. In the case of something like ChatGPT and language models like that, barring very few exceptions, there is nobody out there that thinks what it's doing now in version 4 is anything like our experience of consciousness.
It's an algorithm, people say. It uses statistics and pattern recognition to predict the next word in a sequence. It's not emulating human intelligence. It's doing an impression of what an intelligent human sounds like. Now, some philosophers say that when people read what ChatGPT is producing and are impressed as to how close it's coming to being conscious, that most of that work is being done by the reader. Most of that is the reader projecting their human experience onto the words it's writing.
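That next-word-prediction idea can be made concrete with a deliberately tiny sketch. To be clear, this is a hypothetical toy for illustration only: real systems like ChatGPT use large transformer neural networks, not simple word-pair counts. But the basic move the critics are describing, predicting the next word from statistics over previously seen text, looks roughly like this.

```python
from collections import Counter, defaultdict

# A toy "language model": count how often each word follows each other word
# in some training text, then predict the most frequent continuation.
# (Illustrative only -- not how ChatGPT actually works internally.)
corpus = (
    "the mind is conscious the mind is mysterious "
    "the brain is physical the mind is conscious"
).split()

# Build the word-pair statistics.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("mind"))  # "is" -- every "mind" in the corpus is followed by "is"
print(predict_next("is"))    # "conscious" -- it follows "is" more often than any other word
```

The point of the toy is that nothing in it understands anything. It just surfaces the most frequent continuation, which is the intuition behind the "it's doing an impression of an intelligent human" argument.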
That we have this natural tendency to humanize something that looks and sounds so much like a human, kind of like the zombie. And that while ChatGPT is certainly an awesome piece of technology, and while the stock prices of tech companies and clicks onto articles about AI and the imminent singularity, those definitely get a boost for sure with this. But some philosophers would say it's not even close to doing what human beings are doing at a conscious level.
Then again, if it was conscious and it was way smarter than us, then it may make us think that it's stupid so that we'd let it out of its black box. Anyway, the zombie from the thought experiment before starts to become relevant here.
Because maybe it's clear that right now ChatGPT and other iterations are not doing anything that looks like consciousness. But if things keep progressing, and you can eventually get to a place where you can build a machine and it is indistinguishable from a human being, much like the zombie, in the sense that it does everything a person does, it reacts the same way, and even says that it's conscious like a person does, does that make that machine conscious at that point? There are philosophers who say that it does.
That if we're talking about something that's truly indistinguishable from a conscious person, if we're saying that that is not conscious, what are we even talking about at that point? It just seems like a false distinction. Susan Blackmore says it this way. She's paraphrasing a point from the work of Daniel Dennett, but I just love the way that she puts it here. She says, quote, "The idea is ridiculous, they claim, because any system that could walk, talk, think, play games, choose what to wear, or enjoy a good dinner would necessarily be conscious.
When people imagine a zombie, they cheat by not taking the definition seriously enough." On that same point of emulating every single thing about the brain and what it's doing, and then wondering if the machine would be conscious at that point, the philosopher Keith Frankish says, "I think if you could really understand everything the brain is doing, its 80 billion neurons, interconnected in goodness knows how many billions of ways, supporting an unimaginably wide range of sensitivities and reactions, including sensitivities to its own activity,
If you could really imagine that in detail, then you wouldn't feel that something was left out." If a machine could be built to emulate all the functions of a conscious creature, would it, from a functionalist perspective, have to be considered conscious? And more than that, would we know it if something ever got to that point? Or is it more likely that something would look mostly conscious, and then we do what we always do as humans, project our own experience onto the thing, and assume that it must feel the same way that we do?
Enter the conversation about animal rights. Again, it's easy to project our experience of reality onto the seemingly conscious experience of animals. It's easy to imagine what it's like to be a frog, to imagine looking out of a frog's eyes the same way that you look out of your eyes. I think to be a frog, it would just, I would feel a lot smaller. Maybe the world would look kind of yellow because I got yellow eyes. I'd be sitting on a lily pad. I'd see a snake coming. And when it tries to get me, I just, I just hop away and go over to my other frog friends.
But in reality, not only do you not know what it's like to be a frog, you don't even know that it feels like something to be a frog. This is one of the points explored in a classic paper that began this new era of conversations about consciousness. The paper is called What Is It Like To Be A Bat? by Thomas Nagel.
And he picks bats specifically because they're so different from human beings. They're nocturnal, they fly around, they have very different diets than we do, they use echolocation to navigate around in the world. And one of the points he's making in the paper is that we have no reason to assume that they have any sort of phenomenal stream of consciousness that resembles ours, where it feels like something to be a bat. That there are a billion ways that animals could have evolved to navigate their environments that have nothing to do with the type of consciousness that we experience.
More than that, you can combine this line of thinking with the existence of something like blindsight. I don't know if you've heard this story, but it's interesting. Back in 1965, scientists were doing a study on the neuropsychology of vision at the University of Cambridge, and part of the process was that they had to remove the visual cortex of a monkey named Helen.
And obviously after doing that, she was completely blind in terms of there being any sort of phenomenal awareness like we have as human beings. But then one day, when the head scientist was out at a conference, one of the researchers started playing with Helen and giving her treats. And as he kept giving her these treats, I think it was a piece of apple, she strangely started to be able to know which hand the apple was in. But she always seemed a little unsure of herself, he said later. Didn't quite know why that was. A little later, she started to be able to identify flashing lights.
Fast forward a couple of years, and she was able to navigate different obstacles all around her in a room. I mean, there's a video of it on YouTube. It was as though she could see. It was clear that she had an awareness of what was around her, but she wasn't seeing things the same way that we see things. So how did she do it? The thinking of the scientists was that there are two main pathways where the eyes connect to the brain.
One of them, the usual one we think about, goes up to the cortex, which Helen had removed. And the other is an ancient one that's descended from the visual system used by fish, frogs, and reptiles. Seeing what Helen was doing, they thought, could it be that Helen was now perceiving the world using this ancient type of navigational system where she's able to navigate and know that objects are in certain places, but she doesn't have a unified visual stream of consciousness that we associate with being able to see?
They called this phenomenon "blindsight." And since then it's been well documented, not just in monkeys, but in people. People that have brain damage where they can't see on one side of their visual field, doctors will hold up shapes in their blind spot
And they can tell the doctor what the shape is, but they don't really know how they know that it's that shape. They just have a strong intuition about it and happen to be right. Point is, could it be that our minds are receiving a ton of information at different levels that are not all available to us in that conscious stream that we're familiar with, where there's a self and a conception of time, a conception of identity? Could it be that most of the information we get, we are not immediately consciously aware of, but we're able to access it through something like what we call intuition or instinct?
That the self doesn't exist at this level of processing, so it's mysterious to us as to where these intuitions are coming from. But that ultimately, this is a possible explanation for how something can appear to act conscious from the outside, but not actually be conscious, like the zombie. It's easy to see Helen navigating a room full of obstacles, and to think that she must be having an experience that's similar to the one that I'm having. And then it's easy to be selective, right? Like you see an animal do something, that if it was a person doing it, you'd instantly say that it was cruel.
Like a bear that eats the babies of another bear. That kind of stuff happens all the time. And when it does that, we don't hold the bear morally accountable for it because no, it's operating based on instinct. It can't possibly have a conception of the damage that it's doing there. But then when a bear does something sweet, oh, well, this must be one of the good bears. It just wants to play and make a friend.
There is a whole range of potential experiences that animals could be having, and none of them necessarily has to include the type of subjective experience that we have. In the same way an algorithm like GPT-4 is given a level of humanness that it only has because we project it onto it when it's doing human-like things, animals could be a type of biological algorithm playing out, performing complex biological functions while lacking the subjective experience that we have.
Maybe when I'm at the park and I'm talking to a dog like it's a person, being all nice to it, like it knows it's a dog or something. Maybe that's like me being one of these dudes on the internet that falls in love with ChatGPT, and then an update comes out and they feel like they just got ghosted on a dating app. What if I'm doing a lot of work I don't realize I'm doing, projecting my humanity onto this thing that isn't human? Now, if any of this sounds to you like an intellectual justification for creating a hierarchy of conscious experiences, that's exactly what I'm doing.
This is the potential cost of these conversations about consciousness. People already create consciousness hierarchies based on almost nothing. How many people will eat a fish but they won't eat a chicken? Or they eat chickens but they don't eat a cow? And people will cite real reasons why they think about consciousness in this way. And part of this whole exercise we're doing is thinking about how society might play out if we adopt different precepts about the nature of consciousness.
So here's what I'm going to do. Like a goofy-looking podcaster pretending to be the Ghost of Christmas Future, I'm going to try to show you a vision of what the world might look like if we all more or less just accepted one day that phenomenal consciousness is only something that human beings possess. What might happen if a society was centered around not the sanctity of life anymore, but the sanctity of human consciousness?
What would happen? Would everybody be unified in that world? Would we all just hold hands together under the banner of consciousness? Everybody's on team human now. Yeah. You know, forget all those petty external differences that used to divide us, right? Right.
Well, how about this though? Would people, in just a more general sense, be more interested in exploring their own consciousness? Maybe they'd see it as a foundational aspect of who they are, so now it matters more. Would therapy become more popular? Would more people become neuroscientists instead of theologians? Would we teach how to navigate your own consciousness in schools? Could we have kindergartners meditating at recess? All interesting things to consider.
But what happens when society gets a free pass to think of animals as biological algorithms that we don't have to consider the feelings of? Does everything non-conscious in this type of society just become a resource to improve the state of conscious beings?
The low-hanging fruit here is obvious: if animals are algorithms the same way that ChatGPT is an algorithm, then there is zero reason to feel bad for using animals the same way you use ChatGPT to write your resume. Of course you can eat animals in that society, of course you can farm them, and of course you can do animal testing; what does it matter? More than that though, why not use things that are not conscious even for my own amusement as a conscious being?
Why not have amusement parks like SeaWorld, where you've got orcas and dolphins performing all day? If everyone in society accepted that these are just conscious-looking biological algorithms, as long as we have enough orcas, who cares? And even better than that, why not have monkeys that ride around on motorcycles? That'd be pretty cool.
Why not have interspecies MMA matches? I mean, there's a lot of ideas you could come up with that may give someone a pleasurable conscious experience if that's what mattered to you the most. Why not chop down all the trees in your backyard that are blocking your view of the sunset? And again, if it's about the sanctity of consciousness and not about the sanctity of life anymore, think about how that changes something like the abortion debate in that society.
Not that it solves anything, but imagine if people were arguing about abortion rights and weren't trying to determine where life begins, but where phenomenal consciousness begins. In that world, you'd have to ask the question: Is consciousness something that's injected just into human life at conception, not into animal life? Or is consciousness something that's developed as the brain develops? Is consciousness something where multiple departments of the brain eventually coalesce into what we think of as a unified subjective experience with a self?
And in the case of abortion conversations in particular, if we're considering consciousness as the primary thing, how about the conscious experience of the woman that has to carry the baby to term? How about the conscious experience of the future baby?
See, because that's the thing. If a society was willing to create a hierarchy based around the quality of conscious experiences, where animals are thought to just be lower, there is almost zero chance that isn't going to extend into consciousness hierarchies among the conscious people. Because it's not just the nature of consciousness that can be turned into a hierarchy. It's the nurture of consciousness as well.
Setting aside the possibility that this hypothetical society would ever come up with some sort of standard of conscious experience, where how well you culturally match up with it determines your level of worth. Let's pretend we would never do something like that. But how might that society change the way it treats cognitive decline, where, as your cognitive capabilities decline, you're seen as less and less important because your conscious experience is getting closer and closer to that of an animal's?
But how about just aging in general? What if in this society, after you reach a certain scientifically determined peak age of conscious awareness, the older you get and the lower you score on the yearly consciousness test you take when you get your physical, the less society sees you as entitled to a seat at the table when it comes to anything? People would have no reason in this society to feel bad for openly discriminating against people for their age or disabilities. And you can imagine what it might feel like to be somebody there. If consciousness hierarchies like this were just the accepted standard...
You would feel the same way discriminating against someone for their age as you feel right now telling a kid that they can't drive a car. Look, I'm sorry, you just can't. It's not good for the rest of us conscious people out here that are trying to survive. When we set up these consciousness hierarchies, we have to understand the criteria that we're using and what theories in the philosophy of mind we might be bringing into it.
Because if you're willing to set up a distinction between fish and chickens in your own head because one of them seems to have a degraded conscious experience, then you have to ask the further questions there too. If a human being had a degraded conscious experience because of brain damage or whatever else, can we eat them at that point? Can we put them in little mazes and use them for experiments, testing how they react?
Well, was it okay to do with Helen the monkey? Is it okay to do with rats? Is it okay to keep animals in zoos? Is it okay to have pets? These conversations are important to have. If in these examples, in this hypothetical society, you saw glimmers of how some people out there actually do look at the world, and they bring these points up proudly in casual conversation, then you must realize the relevance these conversations about consciousness can have.
The point is, all of this started with a hypothetical zombie and a thought experiment that, to some, could seem like a total waste of time to even talk about. But here's a real-world example of how the inferences we make about the conscious states of other creatures go on to have real effects on personal and public moral policy. Like all the giant precepts that most people take for granted to guide their moral decisions, at a certain point, if you're one of the unfortunate few who care enough to listen to a podcast like this, you're going to have to accept the fact that even though you don't have complete information about this stuff, you still have to make a choice about which of these tentative theories you're going to accept for now. And when you do, it is going to have impacts on the way you see everything.
Now, if accepting this type of behaviorism, where we can't tell if something's conscious just because it behaves like something that's conscious, if that's an exercise in imagining what if consciousness that seems like it may be there is not actually there, then what if we flip that around? What if things that don't appear to be conscious actually are? What if everything is conscious? Why do some philosophers claim that panpsychism, as it's called, may be the most likely answer to the hard problem of consciousness?
And how might society be radically altered if we all decided one day that that's what we wanted to build our societies around? That's at least how next episode will begin. Thank you to everyone who makes this podcast possible by subbing on Patreon. PhilosophizeThis.org is the website. Thank you for listening. I'll talk to you next time.