Testing for Consciousness in Machines

Asking people and computers what's wrong with manipulated photos may tell us whether there is "anybody home"


HOW WOULD WE KNOW if a machine is conscious? As computers inch closer to human-level performance—witness the victory of IBM's Watson over the all-time champions of the television quiz show Jeopardy—this question is becoming more pressing. So far, though, despite their ability to crunch data at superhuman speed, we suspect that, unlike us, computers do not truly “see” a visual scene full of shapes and colors in front of their cameras; they don't “hear” a question through their microphones; they don't feel anything. Why do we think so, and how could we test whether they experience a scene the way we do?

Consciousness, we have suggested, has two fundamental properties [see the July/August 2009 column by Christof Koch, “A Theory of Consciousness”]. First, every experience is highly informative. Any particular conscious state rules out an immense number of other possible states, from which it differs in its own particular way. Even the simple percept of pitch-blackness implies you do not see a well-lit living room, the intricate canopy of the jungle or any of countless other scenes that could present themselves to the mind: think of all the frames from all the movies you have ever seen.
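The claim that each experience rules out countless alternatives can be made precise in Shannon's terms: singling out one state among N equally likely alternatives conveys log2(N) bits of information. A minimal sketch (the frame counts here are illustrative assumptions, not figures from the column):

```python
import math

def bits_of_information(num_alternatives: int) -> float:
    """Shannon information, in bits, gained by singling out one
    state among `num_alternatives` equally likely possibilities."""
    return math.log2(num_alternatives)

# Even the percept of pitch-blackness is informative: by ruling out,
# say, a billion other possible scenes (all the movie frames you have
# ever seen), it conveys roughly 30 bits.
print(bits_of_information(2))             # one coin flip: 1.0 bit
print(round(bits_of_information(10**9)))  # ~30 bits
```

The point of the toy calculation is only that informativeness grows with the number of distinguishable alternatives, which is what makes even a "simple" conscious percept rich.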

Second, conscious information is integrated. No matter how hard you try, you cannot separate the left half of your field of view from the right or switch to seeing things in black and white. Whatever scene enters your consciousness remains whole and complete: it cannot be subdivided into unrelated components that can be experienced on their own. Each experience, then, is a whole that acquires its meaning by how it can be distinguished from countless others, based on a lot of knowledge about the world. Our brain, with its multitude of specialized but interacting parts, seems optimally adapted to achieving this feat of information integration. Indeed, if the relevant parts of our cerebral cortex become disconnected—as occurs in anesthesia or in deep sleep—consciousness wanes and perhaps disappears.

What's Wrong?

If consciousness requires this ability to generate an integrated picture that incorporates a lot of knowledge about the world, how could we know whether a computer is sentient? What is a practical test?

As we propose in the June 2011 issue of Scientific American, one way to probe for information integration would be to ask the computer to perform a task that any six-year-old child can ace: “What's wrong with this picture?” Solving that simple problem requires having lots of contextual knowledge, vastly more than can be supplied with the algorithms that advanced computers depend on to identify a face or detect credit-card fraud.

Views of objects or natural scenes consist of massively intricate relations among pixels and objects—hence the adage “a picture is worth a thousand words.” Analyzing an image to see that something does not make sense requires far more processing than do linguistic queries of a computer database. Computers may have beaten humans at sophisticated games, but they still lack an ability to answer arbitrary questions about what is going on in a photograph. In contrast, our visual system, thanks to its evolutionary history, its development during childhood and a lifetime of experience, enables us to instantly know whether all the components fit together properly: Do the textures, depths, colors, spatial relations among the parts, and so on, make sense?

Take just one example, a photograph of your workspace. Unless it is specifically programmed for that purpose, a computer analyzing the scene would not know whether, amid the usual clutter on your desk, your iMac computer on the left and your iPad on the right make sense together. It would not know that while the iMac and the iPad go together well, a potted plant instead of the keyboard is simply weird; or that it is impossible for the iPad to float above the table; or that the right side of the photograph fits well with the left side, whereas the right side of a multitude of other photographs would be wrong. But you would know right away: to you an image is meaningful because it is chock-full of relations that make it what it is and different from countless others.
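The probe described above could, in principle, be administered as a battery of manipulated photographs with known anomalies. The sketch below is a hypothetical scoring harness: the image names, anomaly labels, the `candidate` callable, and the naive keyword matching are all illustrative assumptions (a real test would need human judges to grade free-form answers):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Probe:
    image_path: str   # a deliberately manipulated photograph
    anomaly: str      # what a six-year-old would point out

def run_test(candidate: Callable[[str], str], probes: List[Probe]) -> float:
    """Fraction of manipulated images whose anomaly the candidate names.
    Uses a naive substring match as a stand-in for human grading."""
    hits = sum(1 for p in probes
               if p.anomaly.lower() in candidate(p.image_path).lower())
    return hits / len(probes)

probes = [
    Probe("desk_floating_ipad.jpg", "floating"),
    Probe("desk_plant_keyboard.jpg", "plant"),
]

# A trivial "computer" that answers without any integrated knowledge
# of how desks, tablets and plants fit together scores zero:
score = run_test(lambda path: "nothing seems wrong", probes)
print(score)  # 0.0
```

The harness makes the column's point concrete: passing demands broad, integrated contextual knowledge about how scenes hang together, not a special-purpose classifier for any one kind of anomaly.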

This article was originally published with the title "Consciousness Redux: Testing for Consciousness in Machines."





