Hello everyone, I'm Stephen West. This is Philosophize This. Thank you to everyone who supports the show on Patreon. For an ad-free version of the show, go to patreon.com slash philosophizethis. Instagram at philosophizethispodcast, all one word. So last episode, we ended by talking about generative AI and the potential impacts it may have on society. How some philosophers think this could lead to an economic utopia, others think it could lead to a panopticon.
But there had to have been at least a few of you out there that heard the word panopticon and thought, what in God's name is that? And there's no doubt some of you out there who know what the panopticon is, who thought, why would anybody think this world we're heading for is going to be a prison? Well, by the end of the episode today, I'll try to explain why some philosophers think it's going to go that way.
And I guess I want to start by saying that I realize a good portion of last episode was spent trying to bring people up to speed on the state of generative AI right now. And I want to double down and say that I think all that context is necessary to understand the wider-angle philosophical lens that this stuff can be viewed through. As I said towards the end of last episode, I just think we're in a fundamentally different position than the medieval peasants who didn't have a hope in the world of seeing what was coming with the Industrial Revolution.
Part of the value of philosophy in this world that we're living in is that it can help you see the broader historical trends that you're a part of, so that you're not someone who's just a hostage to them. I'm not trying to say anything too controversial. I'm really just trying to echo the sentiment of Socrates here: that the examined life, you know, continuing to ask better and better questions, is something crucial if you want to survive in today's world.
And I guess let's continue that journey here by trying to understand how a fairly random idea from all the way back in the 1780s actually applies to the world we're living in right now. I'm talking about asking the question, what would it look like if a philosopher sat down and applied that big brain of theirs to the task of trying to design the best prison that they could possibly come up with? What would that prison look like? The philosopher that did this in the 1780s was Jeremy Bentham. And the idea for a prison he came up with was called the Panopticon.
Some important context to know about Jeremy Bentham is that he's doing his work during a time when rationality is being applied to everything in the world to try to make it better. It was called the Age of Reason, after all. Systems of government, economics, morality, everything was going to be made better when we use reason to design the way our institutions operate. That was the plan. And for Jeremy Bentham, this idea went all the way down to the very architecture of the buildings that people do stuff in.
He looks around him at the world that he's living in and he sees these prisons where the inmates are being treated horribly. Filthy living conditions, disease is rampant, the guards at the prisons have to physically beat the prisoners to be able to keep them scared and in line. Bentham asks the question, is there a smarter way to be doing all this? Do we really need to be beating people to be able to keep them in line?
And what he comes up with is to redesign the actual prison building. His thinking is, what if we design a prison where the cells of the prisoners are arranged in a circle forming a perimeter, and then in the middle of that circle is a giant tower with a view in all directions? But then he says, what if, through various kinds of shades on the windows of the cells, you could create a situation where someone standing in this tower in the center of everything can see inside the cell of every single prisoner, but the prisoners would never be able to know if they were being watched or not? Think of two-way mirrors in today's world, same concept.
What the architecture of this building would inevitably create is an environment where the prisoners always have to self-regulate. Because if you can never know for sure whether you're being watched or not, then you have to behave at all times as if you're being watched. And for Jeremy Bentham in the 1700s, this is great news. And not just when it comes to prisons. I mean, it was certainly good for prisons.
Simply by changing the design of the building, you no longer have to have guards standing around watching everyone, beating people if they step out of line. Now people will essentially become their own guards.
You turn the prisoners themselves into part of the mechanism that's imprisoning them. It's genius. And it's incredibly efficient as well. You don't even really have to be watching someone to keep them in line. Just the threat that someone could be watching is enough to get people to act in a totally different way that's better for everyone. The even better news for Bentham was that this design doesn't just apply to prisons. This same concept could apply to factories to make people better workers.
This could be used in schools to produce better students, make sure they don't cheat. Military barracks to produce better soldiers. It really was the prison of dreams to Jeremy Bentham. Like the Titanic or something. A magical place where all of this happens simply by there being a severe asymmetry in knowledge. That's how it works. The observers know everything, and the observed barely know anything. And who would have thought that we could make improvements in so many areas of society just by doing that?
But as you can probably imagine, there's a dark side to the panopticon. Fast forward almost 200 years later to the work of the philosopher Michel Foucault. Michel Foucault writes a book, released in 1975, called Discipline and Punish. And in it, he reintroduces Bentham's idea of the panopticon and shows just how influential this design for a prison has become over the years.
Because to him, this idea from Bentham was actually so good that it now pervades practically every major institution in Western society in 1975. See, part of what Foucault is saying is that if you're a government, or anything that has power for that matter, we don't live in a world anymore where you gotta crack people upside their grapefruit-looking head with a wiffle ball bat to keep them doing what you want them to be doing.
In 1975, he says, what you do if you want to keep people under control is you just control the minutiae of human life. You create standards of what is normal and abnormal, and then the people will police themselves, much like in the Panopticon. See, to Foucault, there is a clear relationship in modern societies between knowledge and power. And it goes on at multiple different levels. The people in power control what knowledge is.
They control what constitutes knowledge. They control who gets tenure and gets to disseminate knowledge. They control the norms and taboos of social institutions and therefore control how people view them and then how people view themselves within society at large.
Just as an example of how all this stuff operates, think of something like education, which is often presented to people as something unbiased and neutral. It's often expected that as teachers, as professors, you're not trying to bring any bias into the equation here. You're just trying to spread knowledge about the facts of the world. That's what you might expect when you go to school.
But just consider how it actually plays out in the real world. Take a random example of something that's taught to people. Something like history. And because I'm an American, I'll stay in my own lane here. I'll just talk about American history right now. Okay. Well, not many people would deny that there's a lot of different ways you can teach someone American history. 70 years ago or so, American history may have been taught by playing up a particular narrative. Maybe by talking about the glorious American Revolution against the British Empire.
Then we fought a civil war to end slavery. Then we peacefully stayed out of World War II until we were viciously attacked. And then we had no choice but to step in and help save the whole world from the evil Nazis. You could teach American history in that way. And someone like Foucault would say that when you teach it in that way, it breeds a certain kind of attitude in the student about their position in American history. And it affects the way they see themselves in the world.
Now contrast that version of American history with a different one that's been discussed more recently, where the decision has been to teach students more about the details of specific things. The details of slavery in the US. The details of the treatment of Native Americans, of women's suffrage. You change the perspective that the narrative is coming from, and lo and behold, some people have a problem with that version of history, because they say it's creating a generation of people who hate the country. That it's teaching them to define the country solely by its worst qualities.
But again, other people would say that the American history from 70 years ago created a generation of people that swept the details of the past under the rug and just focused on the victories. Now, whether either of these narratives fully embodies the truth doesn't really matter. Because the point is, what we have in an education system to Foucault is not some neutral enterprise that's trying to offer up the truth about the universe around us.
No, like in the Panopticon, there is an asymmetry of knowledge, and with it, of power. The people in power that control the curriculums in education determine what knowledge is. And simply by doing that, they end up determining what normal is. And because they determine what normal is, they also determine what an anomaly is to that norm.
Then to Foucault, they come up with social categories that label someone as one of these anomalies in a negative way, which then encourages a system of conformity. For example, you can differ from the norm politically to an extent, but you go too far and now you magically become an extremist or a terrorist. You're hit with that label for being too far outside the norm. You can deviate from the norm psychologically to an extent, go outside the norm too much, and now you become mentally ill.
There were sexual norms during the time of Foucault. Go too far outside of them and now you become a pervert. None of this is Foucault saying that norms shouldn't exist or don't serve a social function. He's merely trying to understand the architecture of how modern power dynamics work. And the labels of these social anomalies play an important role in how this is all maintained. Just like in the Panopticon, you create an asymmetry in levels of knowledge between the observers and the observed.
And just like in the Panopticon, people internalize normative behaviors and end up regulating themselves to go along with them. Forget ever thinking too far outside the bounds of what the approved knowledge is of the day. No one wants to deal with the social backlash of all that. Who wants to be called an extremist? Or a failure? Or mentally ill? Or erratic?
It is not a coincidence to Foucault that in the world in 1975, prisons start to look like factories, which start to look like schools, which start to look like hospitals, which start to look like military barracks. To Foucault, Jeremy Bentham was right. The Panopticon truly does make better prisoners, students, and workers. It creates an environment where conformity is rewarded. And just like the prisoner in a cell in the Panopticon, it creates an environment where you always have to be worried and can never really relax.
Now, this is the view of the Panopticon in 1975. But the prison that people fear is emerging now, 50 years later, is similar in some ways, but it's more diabolical. Foucault would be having fits of giggling laughter and then blacking out if he were alive today.
And to help build a case for why, I want to talk about some interesting ideas from the work of a philosopher named Stephen Cave, who, among other things, thinks that if we want to understand the state of the society we have right now, something incredibly important we need to acknowledge is the role of intelligence all throughout history as something that's been used as a justification for dominating and controlling people. He says, you know, when we talk about intelligence in casual conversation, it's really easy for it to seem like a politically neutral thing that's going on.
Like when you say someone's stupid, all you're really saying is something very innocent. You're just saying that we can't trust them with fireworks while they're alone. You're just saying that they probably drive a really loud vehicle of some sort. Seems to be some kind of correlation there between intelligence and how loud someone is. The idea is that all you're saying when someone scores low on an intelligence test is that there's something about their mental faculties that's lower than average.
But the reality, he says, is that historically, intelligence has been used to justify things that are absolutely horrible. Historically, just the way that it's played out, to say that someone is stupider than someone else is really to say something bigger about what they should be allowed to do within a society, about what rights they may have as a person. He says we have this long-standing idea in the Western world, it's very politically entrenched at this point,
That the more intelligent and educated you are, the more it makes sense for you to be the person that's in charge of everything. Now, let's just hold on there for a second, because I think an obvious response back to this could be, well, that's not a Western world thing. That's a human thing. As people, we just naturally want the smartest people we have to be making the decisions near the top. But that's not actually how history has gone.
There's lots of reasons people have been in charge in the past that have nothing to do with intelligence. The strongest would rule in a lot of cases in a might-makes-right sort of situation. People would inherit positions of power, being the relative of a former ruler in an aristocratic situation. People would use religious reasons to determine the next leader. Point is, this whole idea that the most intelligent and educated among us should be the ones that are ruling...
Stephen Cave says, "Far from this being the norm, this was actually a radical idea in the ancient Greek world where the foundations of our political philosophy were laid.
And when it comes to the value of intelligence from a philosophical perspective, philosophers like Plato, he says, were obsessed with intelligence. It's just that back then, given that the field of psychology didn't exist and intelligence wasn't understood the way it is today, to be obsessed with intelligence was to be obsessed with one important subset of intelligence called reason. Reason, it was said by the philosophers, is what separates us from the animals. Reason is part of the essence of what makes us human.
And for Plato, when he writes the Republic and designs the structure of an ideal society, not surprisingly, he puts the philosopher king as the person who should be in charge of it all. In other words, the guy in charge should be the guy that uses reason to gain a better understanding of all the different components of a society, and the guy who's been highly educated since birth to be a ruler. Now again, this is a radical idea in ancient Greece at the time.
And it's not long, Stephen Cave says, before Plato's student Aristotle comes along and forms what is now known as one of the first philosophically grounded naturalistic social hierarchies.
The thinking at the time is that the reality of the world we live in is one where some people are more rational and educated than other people. So if you're going to try to structure a society in the best way possible, and you want to do it in a way where people can do what they're naturally best at, and thus provide as much value as they can to that society, then obviously the people who are the most rational and educated should be the ones that are leading. Again, the thinking is that's not just better for them, that's better for everyone, all the way down the line.
People that are better suited to lead should be leading. Like, why would you ever want the town drunk up there at the podium? That's not what your country can do for you. No, you want a certain kind of person that's in charge. Now, what inevitably comes out of that kind of setup, though, is a hierarchy of rationality. If the most rational, educated men are supposed to be the ones that are in charge, then less rational people should never be considered for leadership positions.
So at the time, Stephen Cave says, in a world where women are seen as more, quote, sentimental and flighty than men are, women were seen as just better suited to serving other roles in society where their natural abilities could be more accentuated. Move further down the hierarchy and you come across people of other skin colors or genetics, people who are viewed at this time as not too smart, but really physically gifted. So within this social hierarchy, the way it's seen is that their natural gift was to use their body to contribute to society.
And this goes all the way down to animals, and then to trees and rocks. The lower your level of rationality and education, the less you have a right to be leading a society. And this way of looking at things gets embedded into Western philosophy so deep that it's still there in the work of Descartes over 1500 years later. Even later than that, you got Kant saying that rational beings are ends in themselves, non-rational beings only have a relative value as means, and are therefore called things.
Later on, people use intelligence to justify the age of colonialism. That these less intelligent people all across the world need European culture to be able to civilize them. In fact, it would be inhumane for us to not try to govern these people who clearly are less capable of governing themselves. You have various examples of intelligence being used to justify slavery. We even have examples of people being sterilized because they have lower intelligence.
Yeah, that one's pretty crazy. Stephen Cave explains that Charles Darwin had a cousin, and his name was Sir Francis Galton. He's typically thought of as being the originator of psychometrics and a leading proponent at the time of eugenics. See, when Darwin writes The Origin of Species, Sir Francis over there gets inspired and thinks, oh, well, intelligence must be something that people are born with.
And obviously, just like when a bird with a longer beak needs to breed with another bird with a longer beak and they'll have long beak babies, the way you make the human species smarter is you take the smartest people you can find and you breed them together with the other smartest people you can find. Not only that, but anybody who's not that intelligent, we should tell them not to breed. Because really, they're only making the species dumber the more kids they have. Why would we ever want that?
But he has a problem. How do you find out who the smartest people are? Well, you would need a scientific way of measuring people's intelligence. So Sir Francis Galton creates an intelligence test to be able to measure it. As Stephen Cave says, quote, thus eugenics and the intelligence test were born together, end quote. Because while there were other intelligence tests before that one, the fact is that tens of thousands of women were forcibly sterilized after scoring poorly on one of these intelligence tests.
We don't even draw the line at just human beings. I mean, Jesus, we even eat animals. We actually chew things up, swallow them, and use them for fuel. And for some people that's okay, because a chicken's less intelligent and has a less rich experience of the world than we do. Point is, when you say something about someone's intelligence level, you're not just making an innocent claim that someone's stupid. You know, that they're just the kind of person who really thinks they've won $100 when someone randomly texts them.
No, given the history we have, you are potentially making all kinds of other claims about what that person should be allowed to do with their life because they're less intelligent. Now, not only is Stephen Cave saying that we need to be aware of this history, but another one of the points he's making with all this has to do with artificial intelligence. Because he says, if we're living in a world where there are masses of people worried about AGI and super intelligent robots taking over the world,
Well, it makes sense why we're so scared of it. We've already set the precedent over and over again throughout history that if a being's more intelligent or educated than another being, they can essentially do whatever they want to them. Of course we'd be worried about bringing a superintelligence to life. And when we start to think about machine algorithms and the progressively expanding role they're starting to play in our lives, as these things that are far more educated than us, that can recognize patterns that tell you what you want before you even know what you want, could it be that there's a sense of willingness from people to outsource the sorts of decisions these things are helping with to something that seems more intelligent than they are? And could it be that that willingness is just one factor that limits the freedom of someone in the modern world who voluntarily places themselves inside of a digital form of a panopticon? What would a digital panopticon even feel like if you were in it? Would you ever even know?
Before we explore that possibility deeper, I think it'll be helpful to consider another interesting idea from the work of Stephen Cave, where he examines the concept of free will. Because if we're interested in understanding how exactly a digital panopticon would limit someone's freedom, then having some common language when it comes to understanding what precisely constitutes a free choice, that's going to help us out a lot. Because see, Stephen Cave doesn't talk about free will in the same way we did in our Free Will and Determinism episode that just happened.
He's more interested in quantifying what exactly we mean when we say that someone made a free choice.
As the director of the Leverhulme Centre for the Future of Intelligence and a philosopher interested in trying to better understand all sorts of mental states, Stephen Cave says that if we're willing to think about how intelligent someone is in terms of an IQ, or an intelligence quotient, and if we're willing to think of how emotionally intelligent someone is in terms of an EQ, or an emotional quotient, is it that crazy of an idea to think that we may be able to have a freedom quotient, or an FQ, that measures how free somebody is when making a decision?
This may seem like kind of a wacky idea on the surface, but as Stephen Cave says, we're actually already making these sorts of considerations in an informal way in legal proceedings all the time. A judge will look at a case, they will consider all the evidence, they'll consider who the person is and their history,
They'll weigh things like the intent behind the act, the consequences of the act. And after referencing all sorts of different psychological and philosophical measuring sticks, they will determine what a suitable punishment is for that person. And part of that is determining how free they were to make the choice they did. That kind of stuff goes on in courtrooms all the time right now. But the question is, why does that process have to only go on inside of the head of the judge? Why is there not a more scientifically quantifiable way of measuring this kind of stuff?
Just to be clear, he says, totally realize we're not quite there yet. Okay, we still have a lot we need to understand about the capacities that underlie behavioral freedom. That is true. But is it too early to be thinking about this FQ as a potential possibility in the future? Because one thing's for sure to him, the way we're talking about free will right now is not satisfying anybody. We need a new way. So many of the ideas we have about it come from a pre-scientific age where we were just sort of spitballing.
But it's an understandable place to be in, because the challenges here are big. How do you even start to define how much free will somebody has, if that was something you wanted to do? And while he acknowledges there's no consensus on this whatsoever, he says if you wanted to try to get started, trying to connect the dots between the different ways people have defined it over the years, you could say that free will has three primary components. One, the ability to generate options for oneself. Two, the ability to choose. And three, the ability to pursue one or more of those options after choosing. Three different stages. And what you'll notice is that each one of these stages requires a totally different skill that's going to have to factor in when determining someone's FQ score.
Let's look at them one by one. The first stage: being able to generate options for yourself. This is the part of any truly free choice that you make in life where you're faced with a decision point and you have to rack your brain to come up with all the possible options for you to choose from. The thinking here is that generally speaking, if someone doesn't have a lot of options or if a person can't see very many options to choose from, we generally don't consider them to be as free as someone who had a lot of options. The second stage was the ability to choose one of those options.
The skill that's required there is being able to reason, thinking critically, weighing the pros and cons of different decisions. Anyone who makes a free choice is at some point going to need these reasoning skills to be able to choose what the best option is to move forward. And the last stage is the ability to actually pursue one of these options. In other words, this is the part of the decision where you actually do the thing. This is the part that's connected to what we would typically call the will. There's a sense in which you need to be able to do all three of these things if you're ever going to make a truly free choice.
And you can imagine how certain people are going to be skilled in each of these different areas in slightly different ways. Sometimes it can be totally imbalanced. For example, somebody could really struggle with the first skill. You know, they might struggle to creatively come up with a lot of different options to choose from, but it never really hurts them that much in life, because on the other hand, they could be really good at the will side of things, and they're always able to execute on the option that's best for them.
On the other hand, you could have somebody that sees all the options in the world. They can have 50 to 100 options to choose from, and maybe they're great at reasoning between them and choosing the best one, but they really struggle with being able to execute the choice they made, so their FQ score would actually be really low.
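Just to make those three stages concrete, here's a minimal sketch of what a freedom quotient could look like as a toy calculation. To be clear, this is purely illustrative: Cave proposes the idea of an FQ, not any particular formula, so the names, the 0-to-100 ratings, and the weakest-link scoring rule here are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FreedomProfile:
    """Toy model of the three stages of a free choice.

    Every scale and name here is hypothetical -- Cave proposes the
    idea of an FQ, not any particular way of computing one.
    """
    generate_options: float  # stage 1: seeing/creating options (0-100)
    reason_between: float    # stage 2: weighing options and choosing (0-100)
    execute_choice: float    # stage 3: the "will" -- following through (0-100)

    def fq(self) -> float:
        # One illustrative rule: a free choice needs all three stages,
        # so the overall score is capped by the weakest capacity.
        return min(self.generate_options, self.reason_between, self.execute_choice)

# The imbalanced case from the episode: someone who sees 50 to 100
# options and reasons well between them, but struggles to execute.
planner = FreedomProfile(generate_options=95, reason_between=85, execute_choice=20)
print(planner.fq())  # 20 -- a low FQ despite two strong capacities
```

The min-based rule is just one way to capture the idea that a free choice needs all three stages; a real measure, if one is ever built, would presumably look nothing like this.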
Stephen Cave says that what we might find if we started measuring things in this way is that prisoners behind bars, in particular, might have a lower FQ score on average compared to the rest of the general population. Meaning that they're people who either lacked options to choose from, lacked the ability to think critically about what the best choice was, or had difficulty executing the right choice. Maybe all three. You think about the way that prisons are designed in the modern world, you know, mostly low-stimulation environments with very little decision-making going on.
And Stephen Cave says, when you think about how we're putting prisoners in a situation where they can't develop any of these skills that would raise their FQ, you wonder if there might be a better way to do it.
More than that, he says, if FQ was a score that was as prevalent as IQ in terms of public awareness, we may find that it benefits society greatly to nurture these skills that raise the FQ of the population at large. And in that world, why not focus on these skills in schools? If we did, wouldn't that just create more empowered citizens on the other side of it? But then again, what if you didn't want to be creating empowered citizens?
What if instead, whether by a single organized body or a bunch of different distributed organizations all competing for people's attention, what if the goal was not to create people whose free will score is as high as possible, but to create an environment where each one of these three stages of free will is systematically weakened? When you think about the possibility of a digital panopticon like this, it really does start to raise the question of what exactly is freedom?
Like Isaiah Berlin talks about, is freedom simply freedom from constraints? Is it simply just not being prevented from doing certain things in life? Or does freedom also necessarily require that people have the skills and opportunities to be able to pursue the life they want to live?
Say you wanted to control a population of people. And say you're living in the modern world where, as we've established, the best and most efficient way to control a population isn't by beating them when they get out of line, but to create an asymmetry of knowledge and a panopticon. How would you do it? If you weren't able to lock people inside of cages, how would you limit each stage of their freedom that we just talked about? Well, let's start with the first one, limiting the options that people have to choose from.
Should be said, this is not a new idea for keeping people under control. Governments have been doing this for centuries. Of course, governments pass laws, they create fines, regulations, but that only covers a relatively small number of things that we don't want people to do.
To really be able to control a population, governments realized long ago you gotta give people a limited, state-approved story to believe in about what's going on around them. This is why so many governments use propaganda as a tool. This is the classic asymmetry in knowledge from the Panopticon. This is why if you wanted to control a particular group, you would pass laws that would not allow certain groups to be educated.
This is why abusive people in general limit the information of the people that they're abusing. You know, an abusive parent doesn't want their kid going to school and talking to all their teachers and counselors about what's going on in the house.
An abusive spouse doesn't want the person they're abusing having friends and talking to them about what's going on. They'll tell them what's going on is wrong. Limiting someone's options is a powerful way to limit their freedom. We know this. And it is built into Foucault's analysis of controlling social institutions and the norms and taboos of a society. We talked about it before. But if in 1975, Foucault feared a panopticon where we would control the minutia of people's lives at the level of the institution,
What if we could all of a sudden, through the technology of machine algorithms and the ever-expanding sophistication of artificial intelligence, what if we could now control the minutiae of people's lives all the way down to the level of the individual transaction? What if you were living in a world where everything that's recommended to you, from the stuff that you buy to the news stories you read, was ultimately controlled by an AI that's building a progressively more detailed profile on you with every ad that you click,
every browser window you open, every video you watch, the exact point that you stop watching the video, it knows about that too. Much more. What if this was all data that was being gathered to create an ongoing cumulative profile on you to be able to sell to you better and to be able to know what you're up to every day? Oh wait, no. That's not an "if." That's already happening. Everybody knows this. It's not even surprising. What may be surprising to you though is the rate at which these profiles of people are becoming more legible for companies and governments to read.
See, they've had mountains of data about your behavior to sift through for years now. The real question's always been, what can they really know about any one person with the filters they can run this data through? What, that you're into philosophy? That you looked at baby blue paint on the Home Depot website last week?
Someone grab the Reynolds wrap and make me a helmet over here. In the past, this hasn't really been legible information. Well, the more sophisticated the technology of AI gets, the more complex the patterns are that it can recognize in your behavior. And we've seen this development coming for years. It started with algorithms, then along came machine learning, then from within machine learning came deep learning, where they added neural networks into the equation, and now we're creating AI even more focused than that.
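As a rough illustration of what "building a cumulative profile" even means here, consider a toy sketch along these lines. Everything in it is invented for the example: real systems are vastly more sophisticated, and none of these event names or weights come from any actual company.

```python
from collections import Counter

class BehaviorProfile:
    """Toy sketch of a cumulative behavioral profile. Every event name
    and weight here is invented for illustration; real ad-tech systems
    are vastly more complex."""

    def __init__(self):
        self.interests = Counter()

    def record(self, event_type: str, topic: str):
        # Different signals carry different evidential weight: finishing
        # a video says more about you than merely opening a page, and
        # abandoning a video early counts slightly against the topic.
        signal_strength = {"page_view": 0.5, "ad_click": 2.0,
                           "video_watched": 3.0, "video_abandoned": -0.5}
        self.interests[topic] += signal_strength.get(event_type, 1.0)

    def top_interests(self, n: int = 3):
        return self.interests.most_common(n)

profile = BehaviorProfile()
profile.record("page_view", "philosophy")
profile.record("video_watched", "philosophy")
profile.record("ad_click", "baby_blue_paint")
print(profile.top_interests())  # [('philosophy', 3.5), ('baby_blue_paint', 2.0)]
```

The point of the sketch is the accumulation: no single event says much, but the running tally is what makes a person "legible."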
On the corporate side of this, we already know that companies have been trying to track and predict our behavior for years. We know there are cubicles full of balding dudes in wingtips frothing at the mouth to be able to predict what you're going to do next. And on the government side of this, in the United States at least, we already know about programs like PRISM or Boundless Informant that track internet activity and your emails, text messages, phone calls.
We already know that they used to compile it all into a giant data center and then run these massive amounts of data through filters, where if certain words or subjects came up, people were flagged as threats for further review. We know this was going on 10 years ago when Edward Snowden leaked it, and as he says, even if PRISM ended, do you really think programs like that aren't still going on? Do you think they won't use the ever-expanding sophistication of AI to be able to create a more detailed, granular profile on who you are?
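The crudest possible version of that filter idea, run the data through a list of words and flag matches for review, might look something like this. The watchlist terms and the logic here are invented for illustration, not a description of any real program.

```python
# The crudest possible "filter and flag" pass: scan each message for
# watchlist terms and flag matches for human review. The terms and the
# whole approach are made up for illustration.
FLAGGED_TERMS = {"example_term_a", "example_term_b"}  # hypothetical watchlist

def flag_for_review(message: str) -> bool:
    words = set(message.lower().split())
    return bool(words & FLAGGED_TERMS)

inbox = ["totally ordinary message",
         "a message that mentions example_term_a"]
for message in inbox:
    if flag_for_review(message):
        print("flagged for further review:", message)
```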
What happens when AI becomes sophisticated enough that they don't need real people like Snowden analyzing the people who are getting flagged? Does it create a space where there's no longer room for whistleblowers? Will the machines keep the secrets of the people that are in charge of these programs? More than that, doesn't this term "flagging someone for anomalous behavior" start to bear a strange resemblance to the way Foucault talks about society labeling people as social anomalies?
Because that's the thing. Maybe right now your emails are only getting flagged if you talk explicitly about bombing an endangered species of chipmunk, right? But in the future, with AI already in China being advanced enough to spot a single person in a sea of people who's acting in an anomalous way, do you think in the future what gets someone flagged could become things that are far more granular as well?
Given what we know about how there's a direct relationship between the accepted knowledge of the day and the people in power, think of how an environment where something is constantly monitoring every article you read, every idea you consider, think of how that panopticon-like situation has the potential to impact people's education, their development, their personality. Real question.
Do you, right now, have private conversations with people you trust, about concepts you don't agree with, that you would never talk about publicly because it's irresponsible to, but nonetheless these are crucially important conversations to your own development because they allow you to entertain ideas without the fear of social backlash?
Yeah, me too. All the time. But if you're living in a world where you can't know whether every conversation you're having is being listened to, whether every digital fingerprint you leave is being cataloged and recorded into a profile that represents you somewhere, would you be a little more hesitant about the stuff that you read? Would you be a little more cautious about the conversations you're having with people?
Surveillance impacts people's moral development. This is why constant surveillance is such an important aspect of many religions. God is always watching, right? It affects the way that people behave. It affects the ideas people are willing to entertain.
And when norms and anomalies can be recorded and analyzed at not just the institutional level, but at the level of the individual transaction, people will consider outside opinions less out of fear of being flagged, and it creates an environment where people will be less skilled at critical thinking and using reason to determine what the best option is. Which, remember, was the second thing you'd want to limit if you wanted to lock people inside of a digital prison.
It has been said that in the West, we will slowly, voluntarily hand our rights over to the people that control these machine algorithms. And that when we do, it will be done in the name of two things. Convenience and security. It will be done in the name of getting a roast beef sandwich delivered to you five minutes faster, and in the name of sterilizing the world around you of any danger to anyone, anywhere. Some kind of utopia.
But when you sacrifice options in the name of that convenience and security, you also limit your field of view. You run the risk of only seeing the options that an algorithm decides to give you based on what it's optimized for. I mean, we are already living in a world where a totally open-minded person that hears about something happening in the news, that wants to hear the intelligent arguments on the other side of the issue, they can already have a very hard time even finding anything other than the strawman version of the other side that their side of the algorithm is roasting.
See, that's the thing about the digital version of this panopticon in particular. If you are a prisoner that's inside of a cell right now, then obviously the cell isn't made of concrete and iron bars. It's a digital echo chamber created by algorithms.
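If it helps to see the mechanism, the feedback loop behind an echo chamber can be sketched in a few lines. This is a deliberately naive toy, with the "viewpoints" and the engagement model invented for the example, but it shows how a recommender that mostly exploits whatever you've engaged with before converges on showing you one thing.

```python
import random

# Toy feedback loop: a recommender that mostly shows you whatever you've
# engaged with most. The "viewpoints" and engagement model are invented.
viewpoints = ["side_a", "side_b", "neutral"]
affinity = {v: 1.0 for v in viewpoints}  # start with no real preference

def recommend() -> str:
    # Exploit the current best guess 90% of the time, rarely explore.
    if random.random() < 0.9:
        return max(affinity, key=affinity.get)
    return random.choice(viewpoints)

for _ in range(1000):
    shown = recommend()
    # Engagement feeds back: whatever gets shown gets engaged with a
    # little, which raises its affinity, which gets it shown more.
    affinity[shown] += random.random()

print(max(affinity, key=affinity.get), affinity)
```

Run it and the initial tie breaks almost immediately: whichever viewpoint happens to get shown first accumulates affinity and crowds out the rest, which is the cell forming.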
And the warden of the prison, in the center tower of the Panopticon, the threat's not just that they could be watching you at any moment. The asymmetry in knowledge is even greater than in Bentham's Panopticon, because you don't know anything about the people who are watching, but they potentially know everything about you, down to the most granular detail, and they can predict and guide your behaviors in ways that you don't have the capacity to resist. You can't resist it, because the warden of the prison in this modern version of the Panopticon is also, in a way, the activities director of the prison.
They control every idea that you have access to and every solution that you can possibly think of. They limit your ability to think critically and choose better or worse options. And when it comes to the will, the actual execution side of making a free choice, instead of like in earlier designs of a prison where they would keep people in low stimulation, monotonous boredom all day, this prison warden keeps people hyper-stimulated.
Dialing in, fine-tuning exactly the media, the video games, the drama that keeps you scrolling and distracted, keeping you constantly in a state of anesthesia, too numb to ever feel the pain of being in a prison, pain that might otherwise cause you to change something about your situation. I mean, Nietzsche talked about the plight of someone living in modernity, and he predicted that after the death of God, people would still have this propensity to attach themselves to an ideology, and that that ideology would be political in nature.
But Nietzsche could never imagine during his time the option that exists for someone living in today's world to essentially choose to be functionally on drugs for every second of their day, never feeling bad about it. How many prison riots would break out if it was legal to drug the prisoners to sleep every day of their lives?
And with generative AI being able to produce this prison instantly and at zero cost, and with people being more legible than ever before to companies and governments around the world, this is the development and near perfection of the panopticon as a method of social control. But that said, I'm done steelmanning that whole side of it. As we do on this show, let's slow down. Let's do an inventory of the different dimensions of the conversation that could exist. First of all, someone could say, yes, we do live at the mercy of these algorithms to an extent.
But how all-encompassing are they really to someone's whole reality that they're living in? I mean, sure, if someone's a total passenger to everything that's going on in the world around them, maybe you can effectively keep them locked in an echo chamber for their entire life. But remember, like we started the episode with, we're trying to live the examined life here. We are paying attention. And if there's any truth to all this, maybe there's ways that people can resist this sort of thing from inside the panopticon.
Also, it absolutely needs to be said that there are people out there who wouldn't necessarily buy into the doom and gloom of all this. There's people out there who say that increased security is an undeniably good thing. They might say, "Look, all the stuff you just described there, sure, you can call it a prison or a panopticon if you want, but other people might just call it a safer society. On the corporate side, all you're talking about is people getting better and better at their job of showing me what I might possibly want. It's ultimately my decision to choose to buy it or to go explore other options.
And with the government side of this, all you're really saying is that there's a group of highly skilled people out there whose entire job every day is to watch people's backs in a more sophisticated, effective way. What's wrong with that? God, you know who else used to watch my back? My mom. You gonna attack her now? That poor woman's been through enough already. How dare you?" Then there's a perspective some people have of, well, I'm not doing anything wrong. What do I care if a machine knows everything about me? What do I have to hide?
And the machine gathering that information is gathering information on people that could potentially hurt me or my family. Maybe this is just the next evolution of what a society is.
Next episode, we're going to talk about a lot. We're going to talk more about surveillance. We're going to talk about the age-old philosophical relationship between freedom and security. Thomas Hobbes, John Stuart Mill, we'll talk about some of the most important voices on either side of this debate of whether a surveillance state is a good thing for a society. Then I want to talk about some tactics for how to deal with living in a world that feels like a panopticon sometimes. How would someone resist against this stuff if they wanted to?
And look, the good news for us is that by this point, there have been a lot of philosophers throughout history who have spent actual time in prison and written about how to deal with it. Or, in other cases, they at least lived in a world so chaotic that it could sometimes feel like they were prisoners. Either way, there's a lot of important advice to take from them, and I'm going to be doing a bit of a philosophical roundtable comparing different thinkers and what they might say if they were alive today. Until then, which will be seven days from now on August 30th, try not to throw your phone into the nearest river. Try not to punch any cameras.
Try to stay calm. I want to thank everyone on Patreon. Shoutouts this week. Faith McEwen, Tom Arneman, Andrea Wu, Derek Davalos, and Alex May. Thank you for listening. Talk to you next time.