The Chinese room argument

I love problems with funny names.  In artificial intelligence circles, one such problem is called the Chinese room argument.  Ostensibly it is an argument against so-called Strong Artificial Intelligence (Strong AI).  Strong AI holds that simply running the right program imbues a system with “mentality,” one aspect of which is understanding.  By contrast, I could stare at a story written in Chinese without gaining the slightest understanding of it, since I don’t know any Chinese.  J.R. Searle gave the following argument against Strong AI, known as the Chinese room argument.

Imagine a monolingual English speaker/reader in a room.  The person has on a table an instruction booklet, a pen, and paper.  Notes written in Chinese are then passed into the room.  The instruction booklet tells the person things like, “If you see Chinese character X on a slip of paper and Chinese character Y on another slip of paper, write Chinese character Z on your pad.”  Chinese speakers outside the room label the slips going in ‘stories’ and ‘questions’ and the slips coming out ‘answers to questions.’  The instruction manual can be as sophisticated as you’d like.  The question is: does our English speaker/reader – who only speaks and reads English – understand the Chinese, i.e. the details of the story and the associated questions and answers?  Searle says no.  To Searle, the room together with the English speaker/reader is a computer, and you can run as sophisticated a program as you’d like, but it cannot understand Chinese regardless of the program.  As such, Searle claims that no program can be constitutive of understanding.
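Searle’s booklet is, in effect, a lookup table: match the incoming symbols, copy out the prescribed answer, understand nothing.  A minimal sketch of that idea (the two-entry rule table below is an invented placeholder, not real Chinese processing) might look like:

```python
# Toy model of the room's rule-following. The rule table stands in for
# the instruction booklet: "if you see X and Y, write Z." The entries
# are arbitrary illustrations, not meaningful Chinese.
RULES = {
    ("好", "吗"): "好",
    ("你", "谁"): "我",
}

def room(slip_a, slip_b):
    """Follow the booklet: look up the pair of slips, copy out the answer."""
    # The "person" never interprets the symbols -- only matches shapes.
    return RULES.get((slip_a, slip_b), "?")  # no matching rule -> no answer

print(room("好", "吗"))  # the room emits a symbol it never "understands"
```

The point of the sketch is that the function’s behavior is entirely syntactic: swapping every character for a meaningless token would leave the program’s operation unchanged, which is exactly the intuition Searle’s argument trades on.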

There have been critiques of Searle’s argument, and the first one that comes to mind is adaptability.  In a sense, for example, one might say that Akismet, the WordPress spam filter, “learns” what is spam and what is not and thus “understands” spam.  Likewise, linguistic programs can “learn” language.  Is there not an element of understanding inherent in learning?

In any case, a completely different set of arguments against Searle’s Chinese room was recently published by Mark Sprevak in the British Journal for the Philosophy of Science.  Amazingly enough, BJPS happens to have a considerable amount of freely available content, so you can read all about Sprevak’s arguments here.


6 Responses to “The Chinese room argument”

  1. […]  Notice that the answer to the first part of this question is at least partially related to the Chinese room argument I recently commented […]

  2. […] some of Kelly Neill’s posts. What have I got so far? Well, let’s take a look at the Chinese room argument again from a new […]

  3. Hi
    I noticed your link to me, so I suppose you expect to see my comment.
    But this is much more complex, as there are many languages far more incomprehensible than Chinese.

  4. quantummoxie Says:

    Very true, but it’s not the complexity of the language that matters. It is the fact that Chinese is a non-phonetic language while English is phonetic. Thus it should be entirely indecipherable to someone accustomed to a phonetic language.

  5. What about the systems approach, where one states that while the gentleman is not intelligent, the system (i.e., the entire room) is?  Take for example the human brain: one could argue that none of our neurons is intelligent, yet combined together within the human brain they constitute an intelligent system (i.e., the human).

    • quantummoxie Says:

      The only problem with the systems approach in this particular case is that the room isn’t necessarily an interconnected system.  The person and the papers are, but the actual processing boils down, in the end, to the neurons in the guy’s brain.  So I’m not sure I buy that argument, particularly since it implies that a sufficiently complex robot would be intelligent, which suggests we would likely already have created AI.  It does bring up a very interesting question, though: if the systems approach is the answer, at what point does a system become complex enough to exhibit true intelligence?  I think folks studying emergence are looking at this type of thing.
