In this post we are going to talk about a famous argument by American philosopher John Searle. It is called the Chinese room argument, and it tries to prove something deep and fundamental about our human minds.

Searle’s Chinese Room Argument

Alright, so what is the Chinese room argument all about? It basically tries to show that a computer program, no matter how sophisticated, can never develop a mind like a human's. So, no matter how intelligent AI becomes, it can never reach a point where we can say it is conscious. Many of you may have seen the movie I, Robot, or some other movie where a robot suddenly develops consciousness. Searle basically says that this is not real consciousness like the stuff we have. It may look like it, but it's not the real deal. That's what his argument tries to prove.

Alright, so his actual argument takes the form of a thought experiment. For those of you who don't know, a thought experiment is a common way in philosophy to try and prove something. It is like an experiment, but you don't act it out in the real world. Instead you just think about what would happen. It's kind of like a lazy way to do an experiment. All kidding aside, though, thought experiments are a great tool when doing the actual experiment is impossible, at least for now, or just very impractical.

Okay, so the Chinese room argument goes like this. First, let us imagine that at some point in the future a computer has been made that behaves as if it understands Chinese. You feed it some Chinese characters, and it replies with some other Chinese characters. So, for example, if you asked it in Chinese: Hey, how's your day going? It would reply with: Good, just chilling, how about you? Something like that.

The computer does this so well that its responses are indistinguishable from those of an actual human. So, if some Chinese speaker were to chat with this computer program for around ten minutes, they would be convinced that they were interacting with a real live person.

So, we have this computer: you give it Chinese characters and it replies back to you, and it does so in a way that feels like talking to a real person. Okay, now suppose that the philosopher John Searle, who doesn't speak a word of Chinese, is sitting in a room with a pen, paper, and a book containing all the instructions of the computer program.

In this book, you can find exactly how to reply to anything said to you in Chinese. Okay, now suppose that some Chinese speaker is going to interact with John Searle. They do this by sliding notes with Chinese characters on them through a slot in the door. They don't see Searle or the instruction book; all they know is that someone is sitting in there and that they are going to chat in Chinese with them. So, Searle receives a note, looks in his book, writes down the appropriate reply, and slides the note back. The person who sent the note receives it and is happy with the response.

Alright, so Searle can do just as well as the computer program, simply by following the instructions. When people interact with him, they also get the feeling that they are talking to someone who actually speaks Chinese.  
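Just to make the "following instructions" idea a bit more concrete, here is a minimal sketch of what such a rule book might look like as a program. Everything in it is invented for illustration (the phrases, the fallback reply); the only point is that the program maps incoming symbols to outgoing symbols without attaching any meaning to them.

```python
# A toy "rule book": a lookup table from input symbols to output symbols.
# The phrases below are invented placeholders; a real system would need far
# more rules, but the principle is the same.
RULE_BOOK = {
    "你今天过得怎么样？": "挺好的，正在放松，你呢？",  # "How's your day going?" -> "Good, just chilling, you?"
    "你叫什么名字？": "我叫小明。",                     # "What's your name?" -> "My name is Xiao Ming."
}


def reply(note: str) -> str:
    """Look the incoming note up in the rule book and return the listed reply.

    Nothing here "understands" Chinese: the function only matches one string
    of symbols to another, exactly like Searle following his instruction book.
    """
    return RULE_BOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."


if __name__ == "__main__":
    print(reply("你今天过得怎么样？"))
```

Whether the person (or machine) carrying out these lookups thereby understands anything is exactly what the argument is about.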

The Point of This Argument

Okay, now comes the critical point of Searle’s argument. Recall that Searle doesn’t speak a word of Chinese. He doesn’t understand the conversation at all! He just gets a bunch of symbols on a piece of paper and replies by writing down some other symbols. But he is doing exactly the same thing as the computer: just following the instructions. So, he argues, the computer doesn’t understand the conversation either. Even though it looks like the computer is intelligent and capable of understanding the conversation, it actually isn’t. Searle then continues that if the computer doesn’t understand the conversation, we also cannot say that it is thinking in any sense, or that it has a mind of any sort. It is just a machine following instructions, nothing more than that. Searle concludes that the machine merely simulates understanding Chinese; it doesn’t actually understand it the way an ordinary Chinese speaker does.

Two Viewpoints

His viewpoint, that consciousness and understanding cannot be found in machines but only in biological entities, is called biological naturalism. It states that we need actual brains for mental phenomena like thoughts and desires to arise. Without our brains, and perhaps even our specific types of bodies, machines cannot have the same mental life as we do.

This position goes against another philosophical viewpoint called functionalism. This is a position in the philosophy of mind which states that mental phenomena like beliefs and desires are defined completely by their functional roles. So, for example, you might think to yourself one afternoon: I want a cookie! So you go to the kitchen and get one. Now, we could in theory program a robot to also walk into the kitchen every now and then and get a cookie. It would function in the same way as a human in this scenario. According to the functionalist, the desire for a cookie is essentially the same in the human and the robot; there is nothing wrong with saying that the robot also desired a cookie. For the biological naturalist, however, there is something fundamentally different about the human's desire, because it arises from a living brain.
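To make the functionalist idea a little more concrete, here is a toy sketch, entirely invented for illustration, of a "desire" characterized purely by its functional role: what triggers it and what behavior it produces. On the functionalist picture, anything that plays this role, human or robot, counts as having the desire.

```python
# Toy illustration of a mental state defined purely by its functional role:
# what causes it (input) and what behavior it produces (output).
# All names here are invented for illustration.


class CookieDesire:
    """A 'desire for a cookie', characterized only by its causal role."""

    def __init__(self) -> None:
        self.active = False

    def update(self, hungry: bool) -> None:
        # Input side of the role: hunger triggers the desire.
        self.active = hungry

    def act(self) -> str:
        # Output side of the role: the desire causes cookie-seeking behavior.
        return "walk to the kitchen and grab a cookie" if self.active else "do nothing"


# A human and a robot that both instantiate this input-output role are,
# for the functionalist, in the same kind of mental state.
human, robot = CookieDesire(), CookieDesire()
human.update(hungry=True)
robot.update(hungry=True)
print(human.act(), "|", robot.act())
```

The biological naturalist would say this sketch leaves out exactly what matters: the biology doing the desiring.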

Functionalism is also closely related to the philosophical viewpoint known as computationalism. This viewpoint holds that the mind is best described as an information-processing system. So our mind, our consciousness and all that, is basically just a very special kind of computer. This entails that at some point, if AI becomes advanced enough, it might develop a mind or consciousness similar to ours.

This brings us to the heart of the matter. Is our mind unique because it is made out of organic material? Or is it more like any other computer, with some sort of code underneath that allows us to respond appropriately to certain inputs? This is a debate that is still going on today, and it will probably not be resolved for many years to come.

We would love to hear your viewpoint on this matter. Let us know in the comments.
