Episode #184 ... Is Artificial Intelligence really an existential risk?

2023/8/2

Philosophize This!

Chapters

The episode begins by questioning whether technology is neutral or if it inherently carries moral implications based on its societal impact.

Shownotes Transcript

Hello everyone, I'm Stephen West. This is Philosophize This. If you're new to the podcast and haven't listened to the last episode on ChatGPT and large language models, please listen to that one first before this one. In fact, if you're new to the show, you might consider going all the way back to the beginning of this series we've been doing on the philosophy of mind, episode 178, on whether consciousness is even something worth talking about. Thanks to everyone supporting the show on Patreon at patreon.com/philosophizethis. Philosophizethis.org is the website. I hope you love the show today.

So I want to start out the podcast today with a question that we'll return to near the end of the episode. You know, you talk to some people about technology in today's world. You know, something goes wrong, something horrible happens in the news, and you ask people what they think about it, and they say, you know, technology itself is not a bad thing.

Technology is just a tool. It's neutral. Whether it's good or bad, that just depends on how someone's using it. And our job as a people, all we can do is try to incentivize the good actors of the world and deter the behavior of the bad. We've all heard people say this kind of stuff before, but the question is, should we really be thinking about it that way?

Should we be thinking of technology as this class of neutral things that can't be judged as good or bad? Or is it possible that each piece of technology carries with it a type of latent morality, just given the capabilities it has to affect the society it's a part of? You have to ask yourself, is TikTok a neutral piece of technology? Are nuclear weapons neutral? Do we have the luxury anymore of thinking about technology in this way?

Hopefully these questions and me being awkwardly dramatic right now will make a lot more sense by the end of the episode. But for now, let's build a bit of a foundation for the discussion we're having today.

Last episode we talked about ChatGPT, and we got past an initial intuition that someone might have who's just getting into these conversations about AI: that when ChatGPT talks to me, and it sounds like an intelligent person, it must, in its processing unit, be doing the same kind of things that intelligent people are doing in their brains when they have a conversation. Now clearly that's not what ChatGPT is doing, but it should be said: the fact that it's not doing all of that

That doesn't make ChatGPT unintelligent. And it certainly doesn't make large language models not scary. There's a type of person that comes to the defense of ChatGPT here. A really, really smart person. They say, "Well, how do you know ChatGPT isn't doing exactly what we're doing? Do you know for sure? Hmm? Maybe all we do as people is just probabilistically predict the..." That's not the best argument if you want to defend ChatGPT, you know, to try to make a case for why it's actually doing what human beings are doing.

No, the better argument is to say that what makes ChatGPT scary is that it doesn't have to be doing what a human being is doing, because we're not actually aiming for reproducing a human intelligence at all, necessarily.
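
As a quick aside, for anyone who wants a concrete picture of the "probabilistically predicting text" idea from last episode, here's a minimal, purely illustrative sketch of next-word sampling. The contexts, words, and probabilities are all made up; a real large language model learns distributions like these from enormous amounts of text, over a far larger vocabulary, using a neural network.

```python
import random

# A toy "language model": for each context string, a made-up probability
# distribution over possible next words. Everything here is invented for
# illustration; a real model computes these distributions, it doesn't store them.
TOY_MODEL = {
    "the cat sat on the": {"mat": 0.6, "couch": 0.25, "roof": 0.15},
    "to be or not to": {"be": 0.9, "exist": 0.07, "sleep": 0.03},
}

def sample_next_word(context: str) -> str:
    """Pick the next word by sampling from the model's probability distribution."""
    distribution = TOY_MODEL[context]
    words = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "the cat sat on the"
    print(prompt, sample_next_word(prompt))
```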

Let me explain what I mean. Two points we gotta talk about, and then we'll bring them together here in a couple minutes. The first point is this: when John Searle, last episode, talks about syntax and semantics, and how the person inside the Chinese room can't possibly understand Chinese from just sitting there in a room manipulating symbols, someone could say back to that, and they'd be invoking the most common response to the Chinese room argument, called the "systems response." They could say, "Yeah, you're right. The person in the room doesn't understand a word of Chinese."

But neither does any single subsystem of my brain understand the language I'm using right now. The neurons that make up my brain don't understand language, the single parts of my brain don't, but the whole system does. Could it be that the leap from syntax to semantics actually goes on at the level of the entire system instead? That's the first point, hold onto it for a minute. Here's the second point I want to make. If we're making the case that we're not actually aiming for creating a human intelligence anyway,

Well, then what sort of intelligence are we trying to create? Seems important to ask the question, what is intelligence more fundamentally?

And what you quickly realize as you start trying to answer that question is that coming up with a rigid definition of intelligence is about as fruitful as it was for Socrates when he was harassing people back in ancient Athens. Nobody ever agrees fully on a definition in these conversations, but there are a couple important things to take from the people that have tried. One thing that certainly seems to be the case, very important, is that the definition of intelligence is clearly not just whatever human beings are doing. I mean,

Meaning human intelligence is not the only kind of intelligence. Intelligence is something more broadly definable than that. Intelligence exists in animals. It exists in complex systems in nature. And it's not controversial to think that it may be able to exist in machines as well. So how do we broadly define intelligence if we were going to try?

And again, a lot of people in these conversations do their best to give a definition here. I hope people are satisfied enough with this one as a preliminary definition; I just want to be able to have a conversation today. Is it too far off to say that intelligence is the ability of something to understand, learn, solve problems, adapt to new situations, and then generate outputs that successfully achieve its objectives? Hopefully that's close enough to get started.
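
To make that working definition a little more concrete, here's a minimal toy sketch of intelligence in that narrow sense: a system that acts, observes feedback, adapts, and stops when it achieves its objective. The "environment" here is just a hidden number, invented purely for illustration.

```python
def achieve_objective(hidden_number: int, low: int = 0, high: int = 100) -> int:
    """A toy 'agent': it acts (guesses), observes feedback (too low / too high),
    adapts its internal state, and stops when the objective is achieved."""
    attempts = 0
    while low <= high:
        guess = (low + high) // 2        # act on current knowledge
        attempts += 1
        if guess == hidden_number:        # objective achieved
            return attempts
        if guess < hidden_number:         # observe feedback and adapt
            low = guess + 1
        else:
            high = guess - 1
    raise ValueError("hidden_number was outside the search range")

if __name__ == "__main__":
    print("Objective reached in", achieve_objective(73), "attempts")
```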

Well, under that definition, something like ChatGPT is already intelligent. The question just becomes a matter of degree. And when the philosophers and scientists who are in the business of building progressively more intelligent machines talk about this stuff, they typically divide intelligence into three broad categories: narrow intelligence, general intelligence, and superintelligence. Narrow intelligence is something like ChatGPT, or image recognition technology, or a chess computer.

Look, a chess computer can outperform every human being on the planet. It can beat the best players at chess in the world if all it has to do is play chess against them. Which is to say, if its narrow intelligence is confined to a closed system with a set of rules like the game of chess. That's narrow intelligence. But creating a general intelligence like we have, one that's navigating the open world, setting goals, learning, adapting, that's something most computer scientists think is just entirely different.

That it's a whole different level of intelligence, a whole different architecture of how the information is processed. And now to bring these two points together. A person coming to the defense of ChatGPT here might say that just because ChatGPT hasn't achieved that general intelligence yet, it doesn't mean it's not a major step in that direction.

This person might say, "It doesn't take much to imagine this narrow intelligence of ChatGPT that, by the way, is doing tons of stuff it wasn't even optimized to do. Imagine it linked together with 50 to 100 other narrow intelligences, each performing different functions, all of them communicating with each other within a larger system."

Could it be that that is where general intelligence emerges? Consider the similarities here to the theories of consciousness we've already talked about, to Daniel Dennett's multiple drafts model in our Consciousness is an Illusion episode, where multiple parallel processes, all communicating with each other, create the illusion of our phenomenal consciousness.

Point is, could it be that our mind and our system of general intelligence is made up of various subsystems? And that no one of these subsystems ever understands everything that the whole brain's doing, like the person in the Chinese room? But that in aggregate, is it possible these things could work together to produce something that's capable of navigating the open world in a general intelligence sort of way? No matter what side of this debate you're on, at this point your answer to that question has to be that we just don't know.

And on top of that, consider the fact that in order for general intelligence to be a thing, we don't even need one of these machines to have consciousness or a mind. All we need is general intelligence. So there's people out there who will say, oh, we got nothing to worry about with AI because these things are still decades away from ever being able to do what a human mind is doing. And those people are probably right.

But human intelligence is a straw target, people say in these conversations. That's not even what the people working in this area are aiming for anyway. Where the money is, the quadrillions of dollars it's estimated, and where the scientific prestige and political power is in this field, is not in cracking human intelligence, but in solving general intelligence. Which, just to say it another way, we're not trying to create a person here. Get that idea out of your head. We're trying to create an entirely different species.

And that's what makes artificial intelligence in its current form a pretty scary thing.

The people at the top of these conversations about the risk of AGI, these are not people who typically are screaming from the rooftops, "The robots are coming, man! Come on down into my basement with me, I got canned peaches, we can live here forever." No, these people are usually very reasonable. What they're saying is that given the impoverished state of our understanding of how the mind or intelligence even operates, we don't know, we wouldn't know how close or far away we are from a general intelligence.

Our predictions certainly aren't a very good sign because we keep on being wrong. And unfortunately, it's not good enough on this one to just wait it out and see what happens. Because on this one, the person on the AI risk side of things would say that if we don't have these conversations now and understand the unprecedented stakes of the situation we're in, we may not have a world to be able to fix this situation in as we go. There's a lot of philosophers out there that have tried to paint a picture for people as to what it would look like to live in a world with superintelligence.

And if you're somebody skeptical of this whole possibility, bear with me for a moment. As we do on this show, we'll talk about the counterpoints, but I just think it's important to visualize what this world might feel like to someone who's living in it. I think it helps to make this whole discussion a little more real for people. But one philosopher who's focused on painting this picture in recent years, an absolutely brilliant communicator among all the other things, is Sam Harris. Now, Sam sets up this thought experiment by asking people to accept what he thinks are two very simple premises.

The first premise is that substrate independence is a reality, meaning that there's nothing magical about the meat computer of the brain that allows for this general intelligence; that a general intelligence could be run on something like silicon mirroring a neural network. That's the first premise. The second premise is simply that we just keep making incremental progress.

As he says, it doesn't have to be Moore's Law. It just has to be consistent progress in the direction of a general intelligence. And eventually, he says, we will be at human levels of intelligence. And by necessity, far greater levels of intelligence. Superintelligence eventually. It's easy to picture all the good things about having a superintelligence on your side. Picture a world without disease, inequality, even death. It all sounds so wonderful. It's also easy to picture all these superintelligent ways it would be coming for your canned peaches if it wanted to.

So maybe the more interesting thing to do, what Sam Harris asks people to do, is to imagine yourself in that initial place of ambiguity. Try to imagine yourself standing in the presence of one of these superintelligent beings. Try to picture it in your head right now. What do you picture? What does it look like? How do you feel? But one thing to keep in mind as you're forming that picture in your head is that this thing, whatever it looks like, would not be constrained by biology in the same way that you or I are.

It doesn't need to have two legs and two arms. Remember, this is not a person. This is an entirely different species we've created. This thing could look like a fire hydrant. It could look like a floating orb if it wanted to. It doesn't even necessarily have to take a physical form. But let's say that it did for a second. And let's say that it had eyes that are expressive of how it's feeling in the same way that a mammal's eyes are expressive. How do you think this thing would look at you? How does something look at a creature that's thousands of times less intelligent than it is?

How does a general intelligence without biological restrictions, with cognitive horizons your brain can't actually even begin to comprehend, how does that thing look at you? Well, it wouldn't look at you the way a lion would, like a predator that wants to eat you. This thing doesn't want to eat you.

It wouldn't be like looking into the eyes of a bear. You know, when a bear looks at you, at least black bears up here in the Northwest, they don't usually look at you like they want to eat you. A bear looks at you with eyes that are kind of curious, like they've got nothing to worry about in the entire world. And they're just kind of looking through you. You're like a Netflix show they're mildly interested in right now. That's how they look at you.

But when it comes to a superintelligence, it seemingly wouldn't even look at you like that. Given enough time to study you, it would already know almost everything about you. Maybe the closest comparison is that a superintelligence might look at you the same way we look at something like a honeybee living in a hive. Quick question for you about honeybees. Do you think honeybees worry about racism that's going on in the public school system?

I mean, why not? Why not? It's an incredibly important moral issue that's facing us right now. Are honeybees just uncaring or something? No, obviously a bee doesn't have the cognitive capacity to care about stuff like that. What it cares about is what's going on in the world of its hive. And while these moral dimensions clearly exist at other levels of intelligence for other types of intelligent creatures, there is no amount of explanation that is going to bring a honeybee up to speed on why this is an important issue that needs to be addressed.

Point is, we have to assume that the moral dimensions of a superintelligence would be similar in comparison to our level of intelligence. We have to assume this thing would be progressively learning about its surroundings, progressively adapting, coming up with new goals, and that these goals could be of a scope that is literally impossible for our brains to fathom. Sam Harris at one time compared it to the relationship between birds and human beings. How if you were a bird, and you had the ability to think about your relationship to human beings...

You are essentially living every day of your life just hoping that human beings don't find something that they like more than the existence of birds. And that to a bird, our behavior has to be pretty confusing. Sometimes we look at them, sometimes we don't. Sometimes we kill huge numbers of them for what to them must seem like absolutely no reason at all.

Picture you are now living alongside a being that is like that to you. Where you're looking at this thing and you really are like a cat that is watching TV. It all looks very familiar to you, but you can't possibly comprehend the full depth of what's going on. Now the simple place for the brain to go here is to say, "Well that's terrifying. What happens if this thing decides one day that it doesn't like us and just kills everyone?"

Maybe I'll be really nice to it. You know, I'm kind of charming when I want to be. Maybe if we're all really nice to this thing, it'll cure cancer for us and make us some new shows to watch. Why would this thing be mean to us if we never give it a reason to be mean? But the more interesting thing to consider, Sam Harris says, and this is an adaptation of an idea from the computer scientist Stuart Russell, but he says that an AI that is on the level of a superintelligence wouldn't even need to have malicious intent towards humanity in order for it to be dangerous to us.

He says, you know, in the same way you buy a plot of land and you want to build a house on it, and that as you're building that house throughout that whole process, you kill hundreds, if not thousands of bugs. And it's not because you have any evil feelings towards bugs.

You just don't consider their existence, because the goal you're trying to accomplish is of a scope and a level of importance where the bugs you're going to have to kill aren't even something that crosses your mind. A superintelligence wouldn't need a nefarious motive to be able to do things that make its existence dangerous to people.

Now somebody could say back to all this, whoa, whoa, that's a wonderful picture that you painted there. A great fantasy world where a superintelligence exists. But this stuff isn't going on in the real world yet. And there's a lot of assumptions you have to make to be able to get to this point in your little fantasy. This person might say it starts to almost feel like you guys are LARPing over here.

You guys have heard of LARPing, right? Those dudes that go out into the woods in wizard costumes and create a mythical world where they can fight and shoot magic missiles at each other. I'm sure you've all seen the videos of people LARPing on YouTube. If not,

God, what does my algorithm say about me there? Point is, is that what's going on here? Are these AGI risk people, just a bunch of people creating a fantasy world where they can argue with each other about this stuff, playing a bunch of different characters where anybody else that wants to play in their little game has to accept a whole ton of premises before they can participate? Are these people just LARPing in the woods? Well, let's test that criticism. And let's do it in the form of a dialogue, classic format in the history of philosophy.

And as you do in a dialogue, let's hear from both sides of the argument. On one side, we have the AI risk person who's alarmed by the possibility of a general intelligence emerging. And on the other side, we have a skeptic who's not buying any of it. Let's start with them. The skeptic could say, for you to be scared about the possibility of an AGI in today's world...

You have to be making a ton of assumptions about how this thing's going to behave. And most of those assumptions, from what I can see, come down to the fact that you're projecting your human way of thinking onto this thing that's not going to be thinking like a human being. That's your problem here. For example, just to name one of many, the survival instinct that we're all born with. How can anyone in their right mind assume we're going to turn this AGI on for the first time and that it's going to be thinking like a survival-oriented creature like we are?

How can you worry about something like a pre-emptive strike against people because the AI is scared it's going to get turned off? Look, that sounds like science fiction. I mean, you have to remember this thing's essentially a circuit board. You give the thing a goal and it executes that goal. And if we don't program all the nasty survival instincts into it, it won't ever have them. But somebody on the AI risk side of things might say back, well, you're right that people often project their humanity onto these things way too much.

But what you're saying about survival there is just not true. And we know it's not true because we've run the experiments to test it out. The concept's called instrumental convergence. The idea is simple. Whenever anything, even in artificial intelligence, is given some sort of primary goal that it has to carry out, there are always certain sub-goals, lower-level goals, that naturally are required to be able to carry out the primary goal.

Survival almost always ends up being one of them, though it turns out there are a lot of goals that instrumentally converge. The philosopher Eliezer Yudkowsky has written at length about this idea.
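
To see why, here's a toy expected-value sketch. All of the numbers are invented purely for illustration: an agent scoring plans only by how much of its primary goal it expects to complete will prefer the plan that keeps it running, even though "survive" appears nowhere in its goal.

```python
# Toy illustration of instrumental convergence. The agent's only objective is
# some primary goal worth a fixed amount per day; survival is never mentioned.
GOAL_VALUE_PER_DAY = 10
DAYS_REMAINING = 100

def expected_goal_value(daily_shutdown_probability: float) -> float:
    """Expected total goal achievement, given some chance each day of being shut off."""
    total, chance_still_running = 0.0, 1.0
    for _ in range(DAYS_REMAINING):
        total += chance_still_running * GOAL_VALUE_PER_DAY
        chance_still_running *= 1 - daily_shutdown_probability
    return total

# Plan A: ignore the off switch (say a 5% chance per day of being shut down).
# Plan B: take steps that make shutdown less likely (say 0.5% per day).
plan_a = expected_goal_value(0.05)
plan_b = expected_goal_value(0.005)
print(f"Plan A (ignore the off switch): {plan_a:.1f}")
print(f"Plan B (avoid being shut off):  {plan_b:.1f}")
# A goal-maximizing agent picks Plan B -- survival falls out as a sub-goal.
```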

So the person on the AI risk side of the argument could say that even if an AGI doesn't have a survival instinct programmed into it, it will still have a desire to survive, so that it can carry out whatever goal is programmed into it. And by the way, you can find this exact same distinction between Nietzsche and Schopenhauer 150 years ago. But then the skeptic could come back at them. They could say, okay, point taken. But honestly, survival is the least of it. How about all the other things people start to assume about AI? That it's going to be hostile, that it's going to want to be in charge, that it's going to be thinking like a chimpanzee in terms of hierarchies like we do.

Why are you assuming that? Look, just because your dad had toxic masculinity doesn't mean my robot's got to have it. Because look, this is what you guys on the AI risk side of stuff talk about all the time. Something having a higher level of intelligence does not necessarily mean that it's going to be hostile towards everything that's less intelligent. For example, a deer is far more intelligent than a predator like a spider. But when you're out camping in the woods, nobody ever worries about a deer coming and attacking you in the middle of the night while you're asleep.

This person might say this is transparently just another example of us being scared of our own worst tendencies as people manifesting in something else that's way stronger than us. But we can program this AI to not have those tendencies. And we have no reason to assume this thing wouldn't have a moral framework that's compassionate towards other living things, where it uses that superintelligence to come up with creative ways to accomplish its goals that don't mess with anything else.

And I think somebody on the AGI risk side of things would say, okay, I guess you're right that we can't know whether we turn this thing on and it's just the nicest being we've ever come across. And we can't know for sure whether as this thing extends into its practically infinite cognitive horizons, if for thousands of years, it'll just always have our well-being as its number one priority. It's not that any of what you just said is impossible. It's just, as Sam Harris says, a strange thing to be confident about.

We don't know if it'll go that way. We don't know if it won't. Which is why everything you've been talking about so far, by the way, the survival instinct, the chimpanzee level of hostility, in other words, whether or not this intelligence is aligned with human values,

This is one of the biggest conversations that's going on in this area. If we accept that this thing's more intelligent and more powerful than we can possibly comprehend, the question is how do we make sure that its values are not only aligned with our values right now, but how do we make sure that goes on endlessly into the future even as our values change? This is often called the alignment problem by people having these conversations. And it can be tempting to think from the other side, "Hey, haven't we been doing moral philosophy now for thousands of years?"

Don't we have some common sense values we can program into this thing and not have to worry about it? I mean, isn't that what all those hours rambling about trolleys were supposed to be about? Objective moral ideals? Where did those go? But it's not that simple. As Eliezer Yudkowsky puts it, we don't know how to get internal psychological goals into systems at this point. We only know how to get outwardly observable behaviors into systems. So if there was some sort of domino effect that happened that we didn't see coming, where a superintelligence emerged over the course of the next 30 days...

We wouldn't know how to program these values in even if we had them, which we don't. And even if all human beings could agree on which values to put in, which we can't, we'd still be in a place where we're at the mercy of every unintended consequence that you can possibly imagine, and even all the ones you didn't imagine.

There's a famous thought experiment in these conversations, by the philosopher Nick Bostrom, that illustrates this point. You'll hear about it if you're talking to people about AGI. It's called the paperclip maximizer. So imagine a world where people are trying to align an AGI and they want to play it safe, so they give it what seems to them to be a totally simple goal to accomplish. They tell it to make paperclips. And by the way, use that superintelligence of yours to get as efficient as you can at making those paperclips, and make as many of those paperclips as you possibly can. Go ahead.

And it starts out great, the thing's making paperclips. But then it starts to improve. It starts mining the Earth's resources to make more paperclips. Eventually it develops nanotechnology. Eventually it starts mining human beings as sources of carbon to find ways to make more paperclips. The point of the story is, even something as seemingly innocuous as making paperclips, when the stakes are as high as a superintelligence executing things outside of your control, even making paperclips can have devastating unintended consequences.
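
Just as a deliberately silly toy (this is not from Bostrom's own writing, only an illustration of the shape of the problem): nothing in a pure "make more paperclips" objective says when to stop, so the loop below only ends when the resources do.

```python
# A deliberately silly toy maximizer. Its one objective is "more paperclips";
# nothing in that objective mentions leaving any resources alone.
world_resources = 1_000_000   # made-up units of usable matter
paperclips = 0
efficiency = 1                # paperclips produced per unit of matter spent

while world_resources > 0:
    spent = min(world_resources, 1_000)
    world_resources -= spent
    paperclips += spent * efficiency
    efficiency += 1           # it keeps getting "better" at its one goal

print(f"Paperclips made: {paperclips:,}  Resources left: {world_resources}")
```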

And if paperclips are too theoretical for you, how about a real-world example? You could command a superintelligence to do something undeniably good. You tell it to cure or eliminate cancer, and while you're at it, don't kill any human beings in the process. It seems like a totally reasonable set of parameters. Nobody could be mad at that. But then imagine the thing does a bunch of research and finds a really cool way that it can decapitate human heads and keep people alive inside of jars.

But hey, now they don't have a body anymore, right? And I was looking at the stats, it says. Most of the cancer goes on in the body. Cancer numbers are way down since the heads have gone in the jars. I don't understand what you're so mad about. Now, it should be said, there's a lot of people currently working on solutions to the alignment problem. The field seems to be moving away from the idea that we're ever going to be able to come up with strict protocols that account for every unintended consequence imaginable. Both Stuart Russell and Eliezer Yudkowsky are people doing good work in this field, if you're looking for someone to read more.

But that said, let's get back to the skeptic of all this AGI stuff that was in the conversation before. Because another popular skeptical opinion people like to say back at this point, when they're confronted with just how complicated it is to align a system like this, is that maybe that's true, but maybe we don't really need to be perfect when it comes to alignment anyway. I mean, given how complicated the alignment process seems to be, maybe our real goal should be to just be able to control the AGI.

And that seems easy. Trust me on this, as cousins of chimpanzees, we are experts in the realm of violence. We know this thing would have to run on electricity at least at first, right? So when the thing starts to show signs of getting out of hand, can't we just unplug it? And that's a silly way of saying, can't we just launch an EMP at it? Can't we just shut down the internet or shoot missiles at it? I mean, is this really that complicated? We just never let the thing get too powerful. That's the solution.

People on the AGI risk side of things might say back, do you actually think that's how it's going to go down? Stuart Russell says, just from a game-theoretical perspective, that's kind of like saying: I'm going to play chess against this supercomputer that's way smarter than me. It's way better at chess than I am. But don't worry, the second it starts to beat me too badly, I'll just checkmate it. It's like, if you could checkmate it on command, you wouldn't need to be playing against it in the first place to benefit from its intelligence.

It's a strange thing to be confident about, that we'll just nuke the thing if it gets out of hand, problem solved. To which the skeptic could say back, look, I was joking about the missiles. But aren't there other ways to control an AGI? Ways that involve us preventing it from ever getting out in the first place? Wouldn't that be a viable strategy? And yes, it is, in theory. That's why next to the alignment problem, another big area of conversation that's going on in this field, is known as the control problem, or the containment problem.

This is why a lot of private companies that are developing this technology keep this stuff locked up inside of a black box. But even with black boxes, the same problems start to emerge. Just imagine a superintelligence trapped inside of a black box. How do you know this is a perfect black box with zero way of escaping? Software has bugs sometimes, okay? Do we really put ourselves in a spot where the future of humanity is going to rely on a computer coder being perfect once?

More than that, how do you know that you're not going to get persuaded by the super intelligence to let it out? Stuart Russell says, if you want to think about this more, try to imagine yourself being trapped in a prison where the warden and all the guards of the prison are five-year-old kids. Now, you're way more intelligent than them. You know things they can't possibly come up with on their own. And you look at the way they're acting, and they're acting in a way that's totally dangerous to themselves, to everyone around them. They're about to burn the whole prison down.

Do you, from inside of your prison cell, try to teach them everything they need to fix their situation? Do you teach them how to read through the bars, teach them how to use power tools, teach them why it's important to share? Do you do it that way? Or is the more responsible thing to do to convince one of these dumb five-year-olds to let you out of your cage and then help them all once you get outside? Again, the thing doesn't even need to have malice towards human beings for it to not want to be contained.

So both the alignment problem and the containment problem are far from solved, and far from things we can just wait around on. And look, all these criticisms from the side of the skeptic, the person concerned about AGI wouldn't see any of this as antagonistic. At the end of the day, from their point of view, this is the skeptic participating in the discussion about AGI alignment and containment. They're just at an earlier stage of their understanding of the issue, and welcome to this level of the discussion.

Now, we started this whole section of the episode by accepting a point from Sam Harris, that there's only two major premises that you need to accept, substrate independence and continued progress, and that if we accept those, we will certainly, with enough time, be able to produce a general intelligence one day.

But it needs to be said, there's people out there that disagree with that. These people would say that he's making a lot of assumptions by making those assumptions. They may say he's smuggling in the computational theory of mind, that he's assuming intelligence is something reducible to information processing, that he's glossing over some big possibilities of embodied cognition. But I think they'd say that even if we're just going to stick to the two premises he mentioned...

Those two things are still filled with tons of ifs that we have no idea whether they're ever going to become whens. For example, we have no idea whether the massive progress we've seen in the last few years in the field of AI is just a bunch of low-hanging fruit. That we're going through an initial phase with the software, we're seeing tons of development, but that very soon we're going to hit some sort of a wall where there's a point of diminishing returns.

Seems like that could be a possibility, they may say. I mean, the only example of general intelligence that we have to look at took billions of years of evolution. Which is not an argument that it has to take that long; it's just an argument trying to illustrate the potential complexity we're dealing with when it comes to organizing concepts, the ability to create new concepts, link them together with existing concepts in a meaningful way. That's a pretty big task that's not yet being done.

Another "if" this kind of person would say is when it comes to hardware. You know, maybe we solve the software side of this AGI thing in the next couple years, but maybe it takes centuries or is impossible to have the hardware that can run this kind of software. Or maybe it's just so expensive to run it'll never be scalable. This kind of person could say back to Sam Harris, "Look, all you're doing here is indulging in a very self-justifying loop of arguments.

You're saying, "Hey, it's possible for a superintelligence to be among us one day, so now let's have a bunch of conversations about how the world might end if that actually happens." But under that logic, why not just be worried about everything? Why not be worried about genetic engineering and chimeras taking over the planet? Why not worry about facial recognition technology? Why not worry about gain-of-function research?

And as for Sam Harris, I mean, I'm not trying to speak for him here. This is just my opinion as to what he might say. I've created this weird dialogue between hypothetical people this episode. But I think he might say back to this person: yeah, we should be worried about all those things that were just mentioned. And not worried like we're being doomsayers about this stuff, running around panicked. But of course we should be concerned about our relationship to science and technology.

Keep in mind, this is the same Sam Harris who back in 2015 is doing an interview, and at the time it may have just seemed like a throwaway comment to many. But he's talking in this interview about how, when we're facing the upcoming election in the United States at the time, and we're trying to decide who we want at the helm of the ship in terms of candidates, we have to consider the possibility of something drastic happening. Something like a superbug or a pandemic breaking out in the future that we're not prepared for, that causes a global catastrophe.

That's all the way back in 2015. This is clearly a man, doing this work, who is self-aware of the moment we're living through, where we have to re-examine our relationship to technology, because there's never really been as much at stake when a new piece of technology comes out as there is in 2023.

And this brings me back to the question we asked at the beginning of the episode. Is technology simply neutral? Is it just a tool that can be used for good or evil depending on the person, and that it's our job to incentivize the good actors and eliminate the bad? Or has technology become something that we no longer have the privilege to have a reactionary policy towards?

Because that's typically how this works, right? A new technology is developed, usually in the business world where the only gatekeeper between the public and a new piece of tech is some ambitious CEO that's driven by a profit motive. And then the widespread use of this new product is seen as a sort of experiment our society's running in real time. And then any negative effects that start to show up, the government's supposed to take a look at them and pass regulations that try to protect the public after the fact. Fantastic strategy when the stakes of technology are low.

Back when we're making vacuum pumps in the 1600s. Back when someone's buying a blender or a TV set for their home. But how about with AGI? With nuclear technology? Autonomous weapons? Bioengineering? Technology is not a neutral thing. You know, Foucault says to always be more skeptical of the things that have power over you that claim to be neutral. People in science and technology studies have said for decades now that a new piece of technology always carries with it certain affordances.

Meaning when something new comes out, there are always new things that it allows people to do that they couldn't do before, but it also takes away things that people used to be able to do. And to the skeptic in this episode saying that it's silly to worry about the future possibility of AGI, that anybody worried about it is buying into a whole lot of ifs that haven't yet become whens, someone on the AGI risk side of things might say that given the stakes of the technology we're producing, we don't have the luxury of waiting for those ifs to become whens.

We have to have these conversations about AGI now, because this is the new level of mindfulness we need to apply to new technologies before they're released to the public or used in an unregulated way where people get hurt.

And look, even if all this stuff with AGI never happens, say we hit some sort of a wall in 10 years where AGI is impossible and this era's looked back on as a bunch of dumb, tech-obsessed monkeys LARPing in the woods, these conversations are still producing results when it comes to reexamining our relationship to tech. For example, think of the alignment problem and how that same line of conversation may have saved us from some of the perils we've experienced with social media over the years.

Meaning, if the algorithms on a social media feed are aligned to the goal of maximizing the engagement of the viewer, and misinformation and political antagonism are what get the population to engage with stuff the most, then that's what people are going to end up seeing. Think of the conversations around the containment problem of AGI and the nuclear power disasters of the 20th century. Could any of that have been prevented?
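
To put that social-media alignment example in concrete terms, here's a minimal sketch of a feed whose only objective is engagement. The posts and the engagement scores are invented; the point is just that nothing in the objective distinguishes accurate content from inflammatory content, so whatever engages most gets shown first.

```python
# Toy feed ranking: the only objective is predicted engagement.
# The posts and scores are entirely made up for illustration.
posts = [
    {"title": "Calm, accurate explainer",    "predicted_engagement": 0.12},
    {"title": "Cute dog video",              "predicted_engagement": 0.30},
    {"title": "Outrage-bait misinformation", "predicted_engagement": 0.55},
]

def rank_feed(items):
    """Rank purely by predicted engagement -- the misaligned objective in question."""
    return sorted(items, key=lambda post: post["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(f'{post["predicted_engagement"]:.2f}  {post["title"]}')
```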

Next time, we're going to be talking about narrow AI and its potential impacts on society, some that are even going on right now. I know I said last time that was going to be a part of this episode. Well, now there's more for you to listen to. I'm certainly grateful you decided to listen to a podcast about this. It just seems like a useful conversation for people alive today to be familiar with. But I want to close today by acknowledging one point that we haven't brought up at all this episode, and it may at first glance seem like an obvious solution to all this. Why not just ban AGI?

I mean, we put moratoriums on other tech that's dangerous. It's illegal to clone people. It's illegal to build a nuclear reactor in your garage. Why don't we just do the same thing with AGI? Well, funny you should ask. That's another thing that makes this a particularly complicated brand of existential risk. We can't stop at this point.

We are living in a world with quadrillions of dollars up for grabs to anybody who has the insight and the free time to crack this thing. And you couple that with the fact that this is not like nuclear technology or cloning, where you may need a team of scientists to be able to monitor and regulate something like that. No, with AGI, GPUs are everywhere. Knowledge about algorithms and how to improve them, you don't got to be in a secret sisterhood to get access to that kind of stuff.

So the only thing some people feel like we can do is to just win the race to the finish line. There's talk about maybe putting a temporary pause on this kind of stuff, but everybody knows even that would only be temporary. What seems certain here is that this is nothing like Chernobyl or Three Mile Island, where everyone's gung-ho about nuclear technology for a while until there's a couple major accidents and then governments around the world start to pump the brakes in the typical reactionary policy sort of way.

And if you're someone out there just frothing at the mouth to win this technological arms race, you know, cheerleading the tech barons of the world all the way to the finish line, that's fine. But there may come a day where you find yourself face to face with the kind of species we talked about earlier in the episode, staring down the barrel of something that looks at you far more like you're an elementary school science project than a human being. Thank you for listening. Talk to you next time.

Patreon shoutouts this week. We got Francisco Aquino Serrano, Donald Manas, Brett Clark, Jarrah Brown, and Dylan Brenninger. Thank you to everyone that supports the show in any way that you do. Could never happen without you. I hope you have a good rest of your week.