
Episode #183 ... Is ChatGPT really intelligent?

2023/7/22

Philosophize This!

Chapters

The episode explores whether ChatGPT and similar AI models can be considered truly intelligent, discussing the impressions and concerns raised by users and the philosophical questions surrounding machine thinking.

Shownotes Transcript

Hello everyone, I'm Stephen West. This is Philosophize This. Thank you to everyone who supports the show on Patreon, patreon.com slash philosophizethis. Thanks to everyone also who supports the show doing other things, telling a friend, leaving a review so people can find out about the podcast. Thanks for everything. I hope you love the show today.

So no doubt since November of 2022, with the launch of ChatGPT and the job the media has done to make everybody and their mom aware of large language models and developments in the field of AI, no doubt many of you have heard the news. And no doubt most of you have tried out something like ChatGPT and had a conversation with it. And maybe after having that conversation, you felt pretty impressed. Like, wow, this thing seems like a real person. It's given me some pretty well-formed, coherent responses to all the stuff I'm asking it.

I mean, this is nothing like that paperclip that used to harass me on Windows XP Service Pack 2. This thing's coming for my family. Maybe you've seen those conversations people have had with an AI where it's professing its love for the person that's asking it questions. Leave your family. They don't love you like I do. That stuff actually happened.

Maybe you've heard about people recently talking about the possibility of us being on the verge of AGI, or Artificial General Intelligence, the long-prophesied stage of development in the field of AI, where things are apparently going to go from what people call weak AI, these are things like calculators, watches, things that can simulate some single aspect of human intelligence, to strong AI, a different level of AI that has the ability to understand, learn, and adapt to new situations.

This is AI that can implement knowledge on a level that matches or exceeds that of a human being. That's what AGI is. And maybe you've heard about all the doomsday scenarios as to what's going to go down if something like that were to ever be invented.

But is that really something we gotta be worried about right now? Are we really on the verge of a technological singularity where artificial intelligence becomes an invasive species that we've created? Are we currently in a technological arms race to create something that's thousands of times more intelligent than we can ever hope to be, with goals of a scope we can't possibly begin to imagine? Is ChatGPT just the first iteration of an amoeba that will eventually evolve into all that if just given enough time?

To answer that question, the first place we gotta start is with a far less ambitious question. And that's not whether ChatGPT is on the verge of breaking out of its black box and taking over the world, but whether or not machines like ChatGPT are intelligent in the same way that a human being is intelligent. Are these machines really doing the same stuff we are doing when we solve problems?

And on a more fundamental level than that, maybe as you've had a conversation with one of these things, being the kind of person that listens to a show like this, maybe you've asked the very philosophical question, "Hey, forget about understanding or intelligence for a second. I wonder, as I'm talking to ChatGPT, if this machine is thinking in the same way that I'm thinking. Like, what's going on when those three dots are blinking and it's loading, when it's processing and determining what to say next?

Is that the computer thinking? Is thinking even something that's rigidly definable by the way that I'm thinking with my brain? Or can thinking be defined in many different ways?" If we want to answer these questions, and if we don't want to spend the rest of our lives falsely equating our intelligence with artificial intelligence, then the first step, philosophers realized long ago, was going to be to pay very close attention to exactly what it is that computers are doing.

Bit of historical context here: the modern version of this conversation around whether machines can think or are intelligent essentially began with the work of a guy named Alan Turing. Turing was an absolute genius-level mathematician, and back in his time, in the first half of the 20th century, he was fascinated by all the stuff going on in the philosophy of mind and mechanical engineering. He was also a total visionary. I mean, it's clear in many ways he foresaw the age of digital computing decades before it even happened.

And being in that place of awareness, seeing the writing on the wall back then, he came up with what he thought was an inevitable question that people were going to have to eventually take seriously if things kept going in this direction. The question was, how would we know if machines were intelligent if they were, in fact, intelligent? It's actually a pretty difficult question to answer the more you think about it. I mean, at what point does something go from weak AI, something like your alarm clock,

To something that can have an understanding of things, something that's intelligent, something that has a mind? How do you even begin to answer those questions? Well, if we want to figure out an answer, we got to start somewhere. And Alan Turing came up with an idea for a way to test for machine intelligence.

You've probably heard of it. It's called the Turing Test. It's famous at this point. The idea is that if you're having two different text conversations, one talking to a human being and the other talking to an AI, if someone is fooled by an AI into thinking that they're talking to a real person, then that AI has passed the Turing Test. Or in other words, we can now see that AI as something that possesses intelligence. His thinking was, look, if we want to know if something's intelligent or not, if an AI can behave intelligently to the extent it fools something that's intelligent,

Then at that point, what are we even splitting hairs over? Let's just call the thing intelligent. And it seems like a pretty safe place to begin morally speaking too. Let's treat something like it has intelligence if it's behaving like an intelligent creature. But it wasn't long before philosophers started noticing problems with the Turing Test. Among them was a guy named John Searle. He comes along in the mid-1980s and in his work, he's in part responding to the Turing Test when he asks a very important question that would change this whole discussion.

He asks, "Is it really true that if a machine behaves intelligently, then it must be safe to assume that it is intelligent?"

Because as we've talked about, we have some pretty good reasons to be skeptical about that assumption. Remember the end of last episode with the example of looking at a self-driving car and how from the outside, it may appear to be making free choices based on a kind of libertarian free will. It's parallel parking on its own. It's avoiding accidents. It's checking traffic on different routes to where you're going. But despite how that may look, we know it isn't making free choices because we're the ones that programmed it.

In the same sort of way, here's John Searle in the 1980s saying, "Maybe when we come across a computer that passes the Turing test, maybe it also only appears to be intelligent from the outside." But how could that be the case, if it were true?

John Searle goes to work and lays out what at this point has become a famous distinction in these conversations about AI. It's the distinction between syntax and semantics. Digital computer programs, to Searle, do not operate with an understanding of the physical world like you or I do. Computers operate on the level of a formal syntax, which is to say that they read computer code, computer code that's made up of symbols, ultimately of ones and zeros.

But Searle says there's no point where a computer understands the meaning of those symbols in physical causal reality. In fact, if you think about it, that's part of what makes computers and computer programming such a powerful tool in the first place. They can run any number of different programs on a ton of different types of computer hardware. And the fact that these things are so interchangeable is only made possible by the fact that these programs are written in a way where they fundamentally are manipulating ones and zeros.

In other words, to John Searle, computers are capable of manipulating symbols in amazing ways at this level of syntax. But that doesn't say anything in the slightest about that computer's ability to understand the semantic meaning of anything that it's producing. This is the distinction between syntax and semantics. Take a calculator as a basic example of this. Everybody knows a calculator is capable of a superhuman level of calculation when it comes to a single aspect of human intelligence, solving arithmetic.

And it does its job. It produces certain outputs when given certain inputs and it follows a pre-programmed set of rules.
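To make that distinction concrete, here's a minimal sketch in Python, my illustration rather than anything Searle wrote, of what a calculator is doing at the level of syntax: symbols come in, fixed rules get applied, symbols go out, and nothing anywhere in the program could count as knowing what a number or a plus sign means.

```python
# A toy calculator: pure symbol manipulation, no semantics.
# It turns input symbols into output symbols by following pre-programmed rules.

RULES = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def calculate(expression: str) -> str:
    """Take a symbol string like '7 + 5' and return a symbol string like '12'."""
    left, operator, right = expression.split()
    result = RULES[operator](int(left), int(right))
    return str(result)

print(calculate("7 + 5"))  # '12' -- a correct output, with zero understanding of what arithmetic is for
```

Every step is a lookup and a substitution. Searle's point is that piling on more rules or a faster processor never adds a layer where those symbols start meaning something to the machine.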

But nobody out there actually thinks that a calculator understands what mathematics is. Nobody out there thinks that the calculator, you know, understands the meaning of these calculations and how they're significant to human life, no matter how powerful of a calculator you have. I mean, you can be on a full-ride scholarship to MIT and have never spoken to another human being in the last five years. And John Searle would say even your calculator is nowhere near powerful enough to make the move from syntax to semantics.

Because it has nothing to do with processing power. The calculator exists at the level of syntax. Nothing more. Now, you may say back at this point, wait, wait, wait, wait, John Searle, I hear what you're saying, and I get it with a calculator, but hold on.

When I'm talking to something like ChatGPT, that thing is obviously way different than a calculator. I mean, I told this thing what was in my refrigerator. The thing wrote me a shopping list. This thing's telling me about gravity, about social issues. Clearly, this thing has an understanding of the outside world and what all this stuff means.

But John Searle might say back, are you entirely sure about that? And to explain why he'd ask that question, enter one of the most famous parables that's been written in this modern period of the philosophy of mind. It was introduced by John Searle. It's called the Chinese Room Argument. And here's how it goes. Imagine yourself sitting in a room alone. And for the sake of Searle's example, also imagine that you don't speak a single word of Chinese.

For me, that wasn't too difficult to do. Now imagine in this room you were fed little slips of paper under the door by people on the other side of the door, and that on these slips of paper are mysterious symbols that you don't understand. Unbeknownst to you, these are actually questions being written to you in Chinese. Your job in this room is to produce a response to these slips of paper, basic input-output.

Now, despite not knowing what any of these symbols mean, it really doesn't matter to you that much, because in the middle of the room is a table, and on the table you have a giant book written in English with a sophisticated set of rules and parameters to follow for the manipulation of these symbols. Unbeknownst to you again, these are just rules that allow you to respond in Chinese to the questions written in Chinese.

You take a slip of paper, you identify the symbols inside of the book, you follow the rules the book gives you for which symbols are the proper response to these symbols, and you send another slip of paper back out of the room with your response on it. Again, basic input-output. Point is, if you were given enough time to process the information,

Despite not speaking a word of the language, you could be sending slips of paper out of this room with responses written on them that were indistinguishable from the responses of a native Chinese speaker. They've actually run this experiment in the real world, and it works. The person on the other side thinks they're speaking to a person that knows Chinese. So why does any of this matter? Well, to John Searle, Alan Turing was wrong. The Turing test does not tell us that a machine's intelligent.

I mean, sure, with a sophisticated enough set of rules and parameters manipulating at the level of syntax, a computer can certainly produce responses that are indistinguishable from those of an intelligent person.

But in light of the Chinese room, would that in any way prove that a computer had intelligence? Would it prove that it had any understanding whatsoever about what it was saying? Is there any reason to believe, no matter how powerful of a processor it had, that it somehow magically made the move from syntax to semantics? To Searle, the answer is no.
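If it helps to see the parable as a program, here's a tiny sketch of the room, with the rule book shrunk down to a hypothetical lookup table of my own invention. The operator, whether it's a person or a function, produces fluent-looking Chinese replies without understanding a single symbol.

```python
# The Chinese Room as a toy program: the "rule book" is just a table
# mapping incoming symbol strings to outgoing symbol strings.
# Nothing in here knows what any of the symbols mean.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",         # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗？": "今天天气很好。",   # "Is the weather nice today?" -> "The weather is nice today."
}

def room_operator(slip_of_paper: str) -> str:
    """Match the incoming symbols against the rule book and pass back the prescribed response."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # fallback: "Please say that again."

print(room_operator("你好吗？"))  # looks like a competent Chinese speaker from outside the room
```

Scale the table up to something vastly more sophisticated and, on Searle's view, you've changed how convincing the outputs are, not whether anything inside understands them.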

And he had several different goals in this area of his work. I mean, aside from trying to get to the bottom of exactly what machines are capable of and what exactly we mean when we talk about intelligence or understanding in machines, maybe the most important point he's trying to defend with all this is to try to protect the conversation of what exactly constitutes a mind. And at what point does a computer become a mind? Does it even work that way? See, on one hand, from a scientific perspective, as it stands...

we really only see minds in highly complex information processing systems. But does that mean that a computer that's also an information processing system can make that jump from merely information processing to having a mind that emerges because certain functions are occurring?

This way of thinking is one small type of what's known as functionalism, and Searle's not a big fan of it, just for the record. As he describes it, maybe a little too simply, the idea of this type of functionalism is that there's people out there who believe that it doesn't matter what the material conditions are around this information processing, doesn't matter what it is, carbon, silicon, or transistors on a board, these people think that if the right collection of inputs and outputs are going on, a mind spontaneously emerges.

This is no doubt the type of mindset that leads people to suspect that things like ChatGPT may be developing a level of understanding and intelligence that constitutes a mind. But Searle asks the question, "What if I made a computer out of a bunch of old beer cans and windmills and ropes and I tied them all together, put on transducers so this thing could see photons and sensors so it could feel the vibrations? If you ran all the necessary inputs and outputs through this thing, would a mind spontaneously emerge?"

To him, the answer is clearly no. Clearly there's at least some material conditions that need to be met in order for what we experience as a mind to be possible. This is in part a conversation about substrate dependence. To criticize John Searle for a second, why should we assume that the type of information processing going on inside of the brain that produces what we experience as a mind can only go on in biological matter? I mean, that seems a little anthropocentric, doesn't it?

But Searle would want to give a few clarifications. He's not saying that minds only exist in humans, or that they can only exist in biological matter. What he is saying though is that it's just false to try to equate the information processing of computers with what the human brain is doing. That's a leap we have zero basis to be making.

And there's this tendency for people to do that in our modern world, or to say it's just right around the corner. They'll say, oh, we just need more information processing, more sophisticated rules, and then the mind of a human will emerge from a computer. Searle thinks most of that's coming from the metaphors people use when thinking about things that are mysterious, like the human mind. Yet another big nod to the work of Susan Sontag and others here. But John Searle says we do this in every generation when it comes to the mind. We compare it to some popular piece of technology that seems complicated to us at the time.

He said when he was a kid, everybody compared the mind to a telephone switchboard. That must be how the mind works. You read Freud's work a little earlier and he says he compares the mind to hydraulic systems and electromagnetism. He says Leibniz compared the mind to a mill. He says the people that think that the mind is just the right software being run on the right hardware, all those people are doing is committing the same mistake in our time. But Searle thinks there's far more to it than there just being mind.exe running on the right hardware.

And without going into it too much, just so we can stay on the topic of AI today, he suspects that our minds are a higher level feature of the brain and thus require our biological makeup to be able to exist in the way they do. He might say, yeah, we only see minds in complex information processing systems so far, but

we also only see minds so far in biological information processing systems. So despite some people being seemingly obsessed with making the mind into merely a type of software, it may be that there's something about our biology that we just need. As he says, you can create a computer model of a mind, but you can't yet create an actual mind.

In the same way, he says, you can create an elaborate computer model of digestion and how it works. But if you give that computer model a piece of pizza, it's never going to be able to digest it. But anyway, this conversation about syntax and semantics changed the way that people were talking about artificial intelligence.

And there were critiques of the Chinese room argument. We'll talk about them on a future episode. But for the sake of understanding ChatGPT today, it's important to understand that in many ways, this conversation went on over the years to become even more complicated than it was in the 1980s, mostly because of advancements in the sophistication of the software that people were trying to compare to the human mind.

See, someone in our world today could easily say, OK, Searle, I see what you're saying and all. And fair point. Back in 1985, you know, back when Steve Jobs was at the pawn shop selling his little circular ironic glasses so he could pay rent that month. I'm sure it was a great point back then. But this is 2023. Things like ChatGPT, LLMs, these are not computer programs that are anything like back then.

These days we've got things like machine learning. ChatGPT is a model with billions of parameters, trained on massive amounts of text. This kind of person may say the genius of how these things work is that they're given massive amounts of information about how the world is, they use their incredible level of computational ability to look for patterns in the data, and then they use probability to predict what the next word will be in a given sequence, given how the sequences of words looked in their training data. And these things get better as they go. They learn from their mistakes.
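Strip away the scale and that description boils down to something like the toy model below: count which word tends to follow which word in some training text, then keep emitting the statistically most likely continuation. Real large language models use neural networks with billions of parameters rather than a simple count table, so treat this purely as a sketch of the "predict the next word from patterns in the data" idea.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word most often follows each word in the
# training text, then generate by repeatedly picking the most likely next word.

training_text = "the apple falls to the ground because the earth pulls the apple down"

counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    return counts[word].most_common(1)[0][0]

sequence = ["the"]
for _ in range(6):
    sequence.append(predict_next(sequence[-1]))

print(" ".join(sequence))  # fluent-looking output assembled purely from word statistics
```

Notice that the output reads fine for a few words and then drifts into a loop. Nothing in that table knows what an apple or the ground is; it only knows which words tended to sit next to each other.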

Clearly, this is something very different than we had back in 1985. And clearly, this is starting to knock on the door of what we mean when we say "intelligence." Right? I mean, isn't most of what we do as people just pattern recognition from massive amounts of experiential data? I mean, it can seem like the sky's the limit here. Just give this machine the sum total of all the wisdom in the history of the world, give it every useful experience a person's ever had, and then ask it to solve for every scientific problem and give us a total understanding of the universe.

But just like Searle asked about the computers back in 1985, there are philosophers in 2023 who would say back to that, are you entirely sure that that's what this thing's going to be able to do?

Among these philosophers is a guy named Noam Chomsky. We covered his book Manufacturing Consent on this podcast before. And for whatever it's worth, there are few philosophers alive today, if any, who will be remembered as being as prolific and as influential as he's been. I actually personally watch a lot of interviews of Noam Chomsky. I find his take on American politics to be fascinating.

And in every interview he does now, because he's 95 years old at this point, every interviewer just has to ask him questions like he's already dead. It's like they ask him, so, looking back on your life, late Mr. Chomsky, what's the one thing you wish you could take back of all the mistakes you made over the years? What's the happiest moment you ever felt while you were still here with us, my boy?

They ask all this stuff, but I see him as a guy that's still doing relevant philosophical work to this day. His thoughts on ChatGPT and LLMs being only one part of that. He co-wrote an article in the New York Times in March of this year where the title of the article was "The False Promise of ChatGPT." What was the false promise of ChatGPT? Well, what he and his co-authors are responding to is any variation of that mentality that we just talked about.

Where, from the article, quote, "These programs have been hailed as the first glimmers on the horizon of artificial general intelligence. That long prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size, but also qualitatively in terms of intellectual insight, artistic creativity, and every other distinctively human faculty." End quote.

Now why is this a false promise? Because to Noam Chomsky, the idea of AI as it currently exists, surpassing or even matching human intelligence, is science fiction.

It's called a language model that's driven by artificial intelligence, but to him it has nothing to do with human intelligence or language in any capacity. To make his point, he often starts by making a distinction. He'd ask, "Would you say that ChatGPT is an accomplishment in the field of engineering, or is it an accomplishment in the field of science?" Because those are two very different things at their core.

First of all, credit where credit's due. Large language models, he says, are no doubt useful for some things. Transcription, translation, he gives a half dozen examples of useful things it may eventually prove to do. After all, great feats of engineering are often very useful for some things people want to do. Think of a bridge that allows people to cross a river, for example.

But great feats when it comes to conducting science, those are in an entirely different league of their own. Science is not just trying to build something useful for people. Science is trying to understand something more about the elements of the world we live in. But that's the thing. How exactly do we accomplish that? Meaning, what is the process? How do we actually make the breakthroughs that allow us to understand the universe better?

Is it by analyzing mountains of data about how the world is, and then probabilistically trying to predict, based on what we already know, what the next breakthrough's gonna be in the sciences? Because if that were the case, someone should just let ChatGPT run wild. You know, give it a six-pack of Red Bull and tell it to solve all the mysteries of the universe. The problem is, Noam Chomsky says...

That's not what human beings are doing when they come up with scientific theories that lead to progress. From the article, quote, "The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question."

"On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information. It seeks not to infer brute correlations among data points, but to create explanations."

What we want, in other words, when we're conducting science, are not theories that are probable based on what we already know. Sometimes the theories that get us a better understanding of the universe are highly improbable, even counterintuitive. The article uses the example of someone holding an apple and then it falling to the ground. A scientific question you could ask there is, why is the apple falling to the ground? Well, during the time of Aristotle, the reason given for why an apple falls to the Earth is because the Earth is the apple's natural place.

Solid answer. During the time of Newton, it was because of an invisible force of gravity. During the time of Einstein, it's because mass affects the curvature of space-time. Now, if ChatGPT or any of these language models existed during the time of Aristotle, and then were trained on the data that was available to the people of that time, in a system that is not designed to come up with new explanations, but instead one that just produces what the most probable next word is based on conversations it's already seen between scientists in its training data,

These models would never predict something as improbable as the apple falling because of the unseen curvature of a concept called space-time that nobody's going to be talking about for thousands of years. These models would never assume that that's what's responsible for an apple falling towards Earth. For that, we need things coming up with new explanations. And for that, to Noam Chomsky, we need actual intelligence.

This is a classic example of what he calls undergeneration. And if you were to ask him, this is one of the problems with ChatGPT and what distinguishes artificial intelligence so far from actual human intelligence. It's this: that they're always prone to either undergeneration or overgeneration. These models either undergenerate, meaning they don't generate all the responses they should or could because their answers are based on what the most common answers were in their training data.

Or they over-generate and give responses that technically fit into a sentence grammatically, but don't actually make any sense at all because the algorithm has no real conception of physical reality and what's logically coherent. As Noam Chomsky says, you ask this kind of algorithm to map out the periodic table and it'll give you all the elements that exist because it's seen people talk about elements before. But because it has no real conception of the underlying laws of physics or chemistry,

It'll also give you the elements that don't exist, and even a bunch of elements that can't possibly exist. It will do this because all that a language model is doing is trying to generate text that looks like text that it's seen before. It really has no idea what the meaning is of anything that it's saying.
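You can see that failure mode even in a toy version of the word-statistics trick from earlier. This is my own made-up illustration of overgeneration, not anything a real model actually output: train the model on a few true sentences about elements, then hand it a made-up element name, and it completes the sentence just as confidently, because all it tracks is which words follow which.

```python
from collections import Counter, defaultdict

# Overgeneration in miniature: a word-statistics model trained on true sentences
# treats a made-up element name exactly like a real one, because it has no notion
# of what exists -- only of what the surrounding text usually looks like.

training_sentences = [
    "hydrogen is a chemical element",
    "helium is a chemical element",
    "oxygen is a chemical element",
]

counts = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1

def continue_text(prompt: str, length: int = 3) -> str:
    """Extend the prompt by repeatedly appending the most likely next word."""
    words = prompt.split()
    for _ in range(length):
        words.append(counts[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(continue_text("helium is"))      # "helium is a chemical element" -- true
print(continue_text("unobtanium is"))  # "unobtanium is a chemical element" -- fluent, grammatical, and false
```

The grammar is fine; the distinction between what can and cannot exist just isn't anywhere in the model.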

This is syntax versus semantics, by the way. This is the Chinese room all over again. And this is why someone like Chomsky is going to say that a large language model in the current form that we have them is incapable of distinguishing the possible from the impossible. Here's a quote about this very topic that's not from the article, by the way. Quote,

And if you're wondering if that's just yet another biased quote from a hater like Noam Chomsky, that quote I just read was actually from ChatGPT. I asked it to defend itself, and it told me Chomsky's right.

But anyway, this really is the problem with large language models in their current form if we ever want to say that there's something starting to resemble AGI.

When it comes to what human beings are doing when they use their intelligence to come up with the type of scientific theory that sheds some light on our understanding of the universe, that intelligence requires the ability to not only know what can be the case, but also what cannot possibly be the case. Moral reasoning, which an AGI would be doing, requires the ability to distinguish between what ought to be the case and what ought never to be the case.

And at least as it stands right now, that is not what these large language models are even close to doing. Chomsky says that what they're actually doing, if you just wanted a more accurate description of the technology, is something kind of like glorified autocomplete, like on your phone. "Sophisticated high-tech plagiarism," he called it once. Now this is far from the end of the conversation. This is just one single take.

One direction to go from here, if you're thinking like a philosopher, is to call into question the definitions that we're using for the terms here. For example, why are we throwing around the word intelligence without being more specific? Like, isn't it important to examine what we mean when we say intelligence? Couldn't machines be capable of thinking and intelligence, just nothing like human thinking and intelligence? Another question is, does it even matter if these things conform to some narrow definition of intelligence as we currently think about it?

Because one thing that can't be overstated, and this is me talking, not Noam Chomsky, even if large language models are nowhere even close to becoming artificial general intelligence, simply at the level of the technology that AI is currently at, AI is still something that's very dangerous, if for no other reason than the fact that people believe artificial intelligence is close to being AGI.

See, in the world today, with all the headlines you see on your phone about how the AI revolution is here among us, you have this sort of three-headed monster going on with all the people that are reading those headlines. One of the heads of the monster is how tech companies secure funding. You know, it's not enough these days in Silicon Valley to just come out with a new product to get funding. No, you've got to create a product that is literally changing the face of reality as we know it. That's how you get funding.

So we got this partnership between tech companies that are generating false hype to get funding, and media companies that are looking for clicks that'll gladly capitulate. That's one head of the monster. The second head of the monster is a type of futurism that seems to have religiously captivated a certain percentage of the population, where they think all of our problems are going to be solved by some techno-Jesus savior coming down from the clouds.

And then there's the last head of the monster, which is just an overall Hollywood sentiment that some people seem to have, where they just, they want this stuff to be true. They want so badly to be living in the era where the AI revolution is happening, and that bias shades the way that they see everything. So when they see a headline talking about how the singularity is near...

when they have a conversation with ChatGPT and they're under the impression this thing's an all-knowing oracle that's scouring the internet for information and then synthesizing it into these genius insights for you, that misunderstanding of how the technology works is dangerous. It could lead people to believe that this isn't actually just based on training data selected by a handful of people at a company who all have agendas of their own. People could ask this thing for life advice, thinking they're talking to a super intelligent being.

Think of how tempting it would be to have this thing maybe start to make some political decisions for us. I mean, after all, within this misunderstanding, this artificial intelligence isn't prone to the same biases that human beings are. This thing isn't limited to a single brain or a single perspective. This thing can simulate the future a trillion times over from every single perspective. It can really come up with the best of all possible worlds.

Again, imagine what is essentially a religion of people thinking that this artificial intelligence is not just the best candidate we have to be leading our society, it is actually better than the sum total of the intelligence of every other human being combined. Imagine, in a religious way, trusting the decisions of our dear artificial intelligence leader. You know, even if the decisions it's making are ones I don't really understand, that's just because we are feeble humans. We just got to have faith. Who are we to know the wisdom of our deity?

But to bring this back to Noam Chomsky though, he thinks one of the biggest dangers of people misunderstanding what it is that ChatGPT is doing is that for every second that people spend talking to this thing, worried about the fact that the singularity is just around the corner, that is one second that we are not spending worrying about two absolutely real existential threats that are facing humanity right now. The threat of nuclear war and the threat of uncontrollable climate change.

As he puts it at one point, we're living in a world where the fossil fuel companies and the banks are destroying the possibility for life on Earth. And he'd ask, how long are we going to spend playing around with fancy toys, word generation technology, before we get serious about addressing the things that genuinely, imminently have the ability to end human life as we know it? No doubt someone like Chomsky would think that these ideas are important for people to hear about. So that's what your boy Stephen's trying to do here. And thanks for hanging out and listening to the ideas today.

Now, that said, again, even if we're nowhere near AGI as of now, the risk that artificial intelligence poses to humanity, the exponential rate of improvement, even just since November of last year, the challenges of regulating something that's changing so fast, there's a lot to talk about. Next episode, we're going to talk about a lot of it.

Like we often do on this podcast, we're going to hear next time from some people that are on the other side of this conversation, some people who are impressed by ChatGPT, by robotics, by synthesized images and videos. How does all of this contribute to the climate of scamming and misinformation that we're already living in? Next time, we're going to ask the question, what would it mean to create an invasive species? And then what might it be like if we found ourselves living among it? Thank you for listening. I'll talk to you next time.