Therein lies the secret of determining whether a computer is conscious. To do so, pick some images at random from the Web. Black out a strip running vertically down the central third of every one, then shuffle the remaining left and right sides of the pictures. The parts of the composites will not match, except in one case where the left side is evidently from the same picture as the right side. The computer's challenge is to select the one picture that is correct. The black strip in the middle thwarts the simple image-analysis strategies that computers use today—say, matching lines of texture or color across the separated, partial images. Another test inserts objects into several images so that these objects make sense in all images except one, and the computer must detect the odd one out. A keyboard placed in front of an iMac is the right choice, not a potted plant. A variety of dedicated modules looking for specific high-level features, such as whether a face rests on a neck, might manage to defeat one of these tests. But presenting many different image tests, not unlike asking many arbitrary questions about the image, would defeat today's machines.
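For the curious reader, the construction of the first test can be sketched in a few lines of code. The sketch below is only an illustration, not the protocol used by any actual study: it represents each image as a plain grid of pixel values, blacks out the central third, and swaps the right halves among all but one randomly chosen image, so that exactly one composite has a matching left and right side.

```python
import random

def make_composites(images, seed=0):
    """Build the shuffled-halves test set from a list of images.

    Each image is a list of rows of pixel values, all the same size.
    The central third of every image is blacked out (set to 0), then
    every right half except one is swapped with a right half from a
    different image, so exactly one composite has matching sides.
    Requires at least three images (with two, no mismatch is possible
    for the single remaining pair).
    """
    rng = random.Random(seed)
    height, width = len(images[0]), len(images[0][0])
    third = width // 3

    lefts = [[row[:third] for row in img] for img in images]
    rights = [[row[-third:] for row in img] for img in images]

    # Pick the one index whose sides stay matched; derange the rest.
    correct = rng.randrange(len(images))
    others = [i for i in range(len(images)) if i != correct]
    shuffled = others[:]
    while any(a == b for a, b in zip(others, shuffled)):
        rng.shuffle(shuffled)  # retry until no right half stays home

    source = list(range(len(images)))  # which image each right half comes from
    for a, b in zip(others, shuffled):
        source[a] = b

    black = [[0] * third for _ in range(height)]
    composites = []
    for i in range(len(images)):
        rows = [l + b + r for l, b, r in
                zip(lefts[i], black, rights[source[i]])]
        composites.append(rows)
    return composites, correct
```

The deranged shuffle guarantees that every composite except one joins halves from different source images, which is what makes "pick the one correct picture" a well-posed question for the machine.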
Yet a different kind of machine can be envisioned, too—one in which knowledge of the innumerable relations among the things in our world is embodied in a single, highly integrated system. In such a machine, the answer to the question “What's wrong with this picture?” would pop out because whatever is awry would fail to match some of the intrinsic constraints imposed by the way data are integrated within a given system. Such a machine would be good at dealing with things not easily separable into independent tasks. Based on its ability to integrate information, it would consciously perceive a scene.
In the next Consciousness Redux column, we'll tell you about the surprising results of a near-identical test that psychologists devised to probe the extent to which the unconscious can solve such problems.