How long does it take you to add 3,456,732 and 2,245,678? Ten seconds? Not bad--for a human. The average new PC can perform the calculation in 0.000000018 second. How about your memory? Can you remember a shopping list of 10 items? Maybe 20? Compare that with 125 million items for the PC.

On the other hand, computers are stumped by faces, which people recognize instantly. Machines lack the creativity for novel ideas and have no feelings and no fond memories of their youth. But recent technological advances are narrowing the gap between human brains and circuitry. At Stanford University, bioengineers are replicating the complicated parallel processing of neural networks on microchips. Another development--a robot named Darwin VII--has a camera and a set of metal jaws so that it can interact with its environment and learn, the way juvenile animals do. Researchers at the Neurosciences Institute in La Jolla, Calif., modeled Darwin's brain on rat and ape brains.

The developments raise a natural question: If computer processing eventually apes nature's neural networks, will cold silicon ever be truly able to think? And how will we judge whether it does? More than 50 years ago, British mathematician and philosopher Alan Turing invented an ingenious strategy to address this question, and the pursuit of this strategy has taught science a great deal about designing artificial intelligence, a field now known as AI. At the same time, it has shed some light on human cognition.

Beginnings: Testing Smarts
So what, exactly, is this elusive capacity we call "thinking"? People often use the word to describe processes that involve consciousness, understanding and creativity. In contrast, current computers merely follow the instructions provided by their programming.

In 1950, an era when silicon microchips did not yet exist, Turing realized that as computers got smarter, this question about artificial intelligence would eventually arise. [For more on Turing's life and work, see box on opposite page.] In what is arguably the most famous philosophy paper ever written, "Computing Machinery and Intelligence," Turing simply replaced the question "Can machines think?" with "Can a machine--a computer--pass the imitation game?" That is, can a computer converse so naturally that it could fool a person into thinking that it was a human being?

Turing took his idea from a simple parlor game in which a person, called the interrogator, must determine, by asking a series of questions, whether an unseen person in another room is a man or a woman. In his thought experiment he replaced the person in the other room with a computer. To pass what is now called the Turing Test, the computer must answer any question from an interrogator with the linguistic competency and sophistication of a human being.
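
In programming terms, the setup is simple to state. The sketch below is only an illustration of the protocol, not a real test: machine_reply, human_reply and the interrogator's guessing function are all hypothetical stand-ins.

```python
import random

def machine_reply(question):
    """Hypothetical stand-in for the program being tested."""
    return "That is an interesting question."

def human_reply(question):
    """Stand-in for the hidden person's typed answer."""
    return "Well, it depends on what you mean."

def imitation_game(questions, guess):
    """One round: the interrogator sees only typed answers and must
    decide whether they came from a person or a machine."""
    label, reply = random.choice([("machine", machine_reply),
                                  ("human", human_reply)])
    transcript = [(q, reply(q)) for q in questions]
    verdict = guess(transcript)          # returns "machine" or "human"
    return verdict == label              # True if the interrogator was right

# The machine "passes" to the extent that interrogators guess no better than chance.
print(imitation_game(["Do you like poetry?"], lambda transcript: "human"))
```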

Turing ended his seminal paper with the prediction that in 50 years' time--which is right about now--we would be able to build computers that are so good at playing the imitation game that an average interrogator would have no more than a 70 percent chance of correctly identifying, after five minutes of questioning, whether he or she is speaking to a person or a machine.

So far Turing's prediction has not come true [see box on page 80]. No computer can actually pass the Turing Test. Why does something that comes so easily for people pose such hurdles for machines? To pass the test, computers would have to demonstrate not just one competency (in mathematics, say, or knowledge of fishing) but many of them--as many competencies as the average human being possesses. Yet computers have what is called a restricted design. Their programming enables them to accomplish a specific job, and they have a knowledge base that is relevant to that task alone. A good example is Anna, IKEA's online assistant. You can ask Anna about IKEA's products and services, but she will not be able to tell you about the weather.
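
The contrast is easy to see in code. Here is a toy restricted-design assistant in the spirit of Anna; the keyword table and replies are invented for illustration and have nothing to do with IKEA's actual system.

```python
# Invented knowledge base: the assistant only "knows" about a few store topics.
FAQ = {
    "delivery": "Home delivery can be arranged when you check out.",
    "returns": "Items can be returned with a receipt.",
    "opening hours": "Most stores open at 10 a.m.",
}

def restricted_assistant(question):
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return answer
    # Anything outside the knowledge base--the weather, say--falls through.
    return "Sorry, I can only answer questions about our products and services."

print(restricted_assistant("How do returns work?"))    # in-domain: answered
print(restricted_assistant("Will it rain tomorrow?"))  # out-of-domain: stumped
```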

What else would a computer need to pass the Turing Test? Clearly, it would have to have an excellent command of language, with all its quirks and oddities. Being sensitive to those quirks requires taking account of the context in which things are said. But computers cannot easily recognize context. The word "bank," for instance, can mean "river bank" or "financial institution," depending on the context in which it is used.
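
A crude way to see the difficulty is to have a program guess the sense of "bank" from nearby words. The cue lists below are invented, and real language systems need far richer models of context than this.

```python
# Invented cue words for each sense of "bank".
SENSES = {
    "river bank": {"river", "water", "shore", "fishing"},
    "financial institution": {"money", "account", "loan", "deposit"},
}

def guess_sense(sentence):
    words = set(sentence.lower().split())
    scores = {sense: len(words & cues) for sense, cues in SENSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "not enough context"

print(guess_sense("she sat on the bank and watched the river"))  # river bank
print(guess_sense("he asked the bank for a loan"))               # financial institution
print(guess_sense("the bank was closed"))                        # not enough context
```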

What makes context so important is that it supplies background knowledge. A relevant piece of such knowledge, for example, is who is asking the question: Is it an adult or a child, an expert or a layperson? And for a query such as "Did the Yankees win the World Series?" the year in which the question is asked is important.

Background knowledge, in fact, is useful in all kinds of ways, because it reduces the amount of computational power required. Logic is not enough to correctly answer questions such as "Where is Sue's nose when Sue is in her house?" One also needs to know that noses are generally attached to their owners. To tell the computer simply to respond with "in the house" is insufficient for such a query. The computer might then answer the question "Where is Sue's backpack when Sue is in her house?" with "in the house," when the appropriate response would be "I don't know." And just imagine how complicated matters would be if Sue had recently gotten a nose job. Here the correct answer would have been another question: "Which part of Sue's nose are you talking about?" Trying to write software that accounts for every possibility quickly leads to what computer scientists call combinatorial explosion.
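
A tiny knowledge-base sketch makes the point. The facts and the "body parts travel with their owner" rule below are invented; the interesting part is that this background rule, not logic over the stated facts alone, is what licenses the first answer.

```python
# Invented facts plus one piece of background knowledge.
FACTS = {("Sue", "location"): "in her house"}
BODY_PARTS = {"nose", "hand", "elbow"}   # assumed to stay attached to their owner

def where_is(owner, thing):
    if thing in BODY_PARTS:
        # Background knowledge: a nose is wherever its owner is.
        return FACTS.get((owner, "location"), "I don't know")
    # No such default for ordinary possessions such as a backpack.
    return FACTS.get((thing, "location"), "I don't know")

print(where_is("Sue", "nose"))       # "in her house"
print(where_is("Sue", "backpack"))   # "I don't know"
```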

Human or Just Humanlike?
The Turing Test is not without its critics, however. New York University philosopher Ned Block contends that Turing's imitation game tests only whether a computer behaves in a way that is identical to a human being (we are only talking about verbal and cognitive behavior, of course). Imagine we could program a computer with all possible conversations of a certain finite length. When the interrogator asks a question Q, the computer looks up the conversation in which Q occurred and then types out the answer that followed, A. When the interrogator asks his next question, P, the computer now looks up the string Q, A, P and types out the answer that followed in this conversation, B. Such a computer, Block says, would have the intelligence of a toaster, but it would pass the Turing Test.

One response to Block's challenge is that the problem he raises for computers applies to human beings as well. Setting aside physical characteristics, all the evidence we ever have for whether a human being can think is the behavior that the thought produces. And this means that we can never really know if our conversation partner--our interlocutor--is having a conversation in the ordinary sense of the term. Philosophers call this the "other minds" problem.
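
To see how little machinery Block's scenario requires, here is a toy version of his look-up machine in Python. The two stored conversations stand in for the astronomically large listing his argument imagines.

```python
# Every reply is a pure table look-up keyed on the conversation so far.
CONVERSATIONS = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Hi there.", "How are you?"): "Fine, thanks. And you?",
}

def lookup_reply(history):
    """Return the canned continuation for the conversation so far."""
    return CONVERSATIONS.get(tuple(history), "I'd rather not say.")

history = ["Hello."]
history.append(lookup_reply(history))   # machine answers "Hi there."
history.append("How are you?")          # interrogator's next question
print(lookup_reply(history))            # "Fine, thanks. And you?"
```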

Chinese, Anyone?
A similar line of discussion--the Chinese Room Argument--was developed by philosopher John Searle of the University of California, Berkeley, to show that a computer can pass the Turing Test without ever understanding the meaning of any of the words it uses. To illustrate, Searle asks us to imagine that computer programmers have written a program to simulate the understanding of Chinese.

Imagine that you are a processor in a computer. You are locked in a room (the computer casing) full of baskets containing Chinese symbols (characters that would appear on a computer screen). You do not know Chinese, but you are given a big book (software) that tells you how to manipulate the symbols. The rules in the book do not tell you what the symbols mean, however. When Chinese characters are passed into the room (input), your job is to pass symbols back out of the room (output). For this task, you receive a further set of rules--these rules correspond to the simulation program that is designed to pass the Turing Test. Unbeknownst to you, the symbols that come into the room are questions, and the symbols you push back out are answers. Furthermore, these answers perfectly imitate answers a Chinese speaker might give; so from outside the room it will look exactly as if you understand Chinese. But of course, you do not. Such a computer would pass the Turing Test, but it would not, in fact, think.
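
In code, the room reduces to the same kind of table: strings of symbols in, strings of symbols out, with no meanings attached anywhere. The entries below are placeholder question-answer pairs, not a real conversation program.

```python
# The "rulebook": incoming symbol strings are matched by shape alone and
# mapped to outgoing symbol strings. Nothing in the program represents meaning.
RULEBOOK = {
    "你好吗？": "我很好。",    # "How are you?" -> "I am fine."
    "几点了？": "现在三点。",  # "What time is it?" -> "It is three o'clock."
}

def chinese_room(symbols_in):
    # The rule-follower never consults a dictionary of meanings--only the book.
    return RULEBOOK.get(symbols_in, "对不起。")   # fallback: "Sorry."

print(chinese_room("你好吗？"))  # looks like understanding from outside the room
```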

Could computers ever come to understand what the symbols mean? Computer scientist Stevan Harnad of the University of Southampton in England believes they could, but like people, computers would have to grasp abstractions and their context by first learning how they relate to the real, outside world. People learn the meaning of words by means of a causal connection between themselves and the objects the symbols stand for. We understand the word "tree" because we have had experiences with trees. (Think of the moment the blind and deaf Helen Keller finally understood the meaning of the word "water" that was being signed into her hand; the epiphany occurred when she felt the water that came out of a pump.)

Harnad contends that for a computer to understand the meanings of the symbols it manipulates, it would have to be equipped with a sensory apparatus--a camera, for instance--so that it could actually see the objects represented by the symbols. A project like little Darwin VII--the robot with the camera for eyes and metal mandibles for jaws--is a step in that direction.

In that spirit, Harnad proposes a revised Turing Test, which he calls the Robotic Turing Test. To merit the label "thinking," a machine would have to pass the Turing Test and be connected to the outside world. Interestingly, this addition captures one of Turing's own observations: a machine, he wrote in a 1948 report, should be allowed to "roam the countryside" so that it would be able to "have a chance of finding things out for itself."

Toward Robots
The sensory equipment Harnad considers crucial might give computer scientists a way to supply a machine with the context and background knowledge needed to pass the Turing Test. Rather than having all the relevant data entered by brute force, the robot would learn what it needs to know by interacting with its environment.
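
One could picture the acquisition step like this: the robot pairs a heard label with whatever its sensors report at that moment and gradually distills what the label has in common across encounters. This is only a cartoon of the idea, and the camera features are invented.

```python
from collections import defaultdict

# word -> list of sensory impressions recorded when the word was heard
lexicon = defaultdict(list)

def observe(label, camera_features):
    """Record what the world looked like when the label was heard."""
    lexicon[label].append(camera_features)

observe("tree", {"tall": True, "leafy": True, "bark": True})
observe("tree", {"tall": True, "leafy": False, "bark": True})  # a bare winter tree

# What "tree" has come to mean so far: the features common to every encounter.
common = set(lexicon["tree"][0].items())
for impression in lexicon["tree"][1:]:
    common &= set(impression.items())
print(dict(common))   # {'tall': True, 'bark': True}
```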

Can we be sure that providing sensory access to the outside will ultimately endow a computer with true understanding? This is what Searle wants to know. But before we can answer that question, we may have to wait until a machine actually passes the Robotic Turing Test suggested by Harnad.

In the meantime, the model of intelligence put forth by Turing's test continues to provide an important research strategy for AI. According to Dartmouth College philosopher James H. Moor, the main strength of the test is the vision it offers--that of "constructing a sophisticated general intelligence that learns." This vision sets a valuable goal for AI regardless of whether a machine that passes the Turing Test can think like us in the sense of possessing understanding or consciousness.