HOW MANY TIMES have you heard people say that something is “black and white,” meaning it is simple or crystal clear? And because black and white are so obviously distinct, it would be only natural for us to assume that understanding how we see them must be equally straightforward.
We would be wrong. The seeming ease of perceiving the two color extremes hides a formidable challenge confronting the brain every time we look at a surface. For instance, under the same illumination, white reflects much more light to the eye than black does. But a white surface in shadow often reflects less light to the eye than a black surface in sun. Nevertheless, somehow we can accurately discern which is which. How? Clearly, the brain uses the surrounding context to make such judgments. But the specific computation it performs on that context remains a deep mystery to neuroscientists like me.
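The arithmetic behind this puzzle is simple. The light a surface sends to the eye is roughly the product of its reflectance and the illumination falling on it. The sketch below uses illustrative values, assumed for the example rather than taken from any measurement, to show how a black surface in sun can outshine a white surface in shadow:

```python
# Light reaching the eye ≈ reflectance × illumination.
# All numbers below are illustrative assumptions, not measurements.

white_reflectance = 0.90   # white paper reflects roughly 90% of incident light
black_reflectance = 0.05   # black paper reflects roughly 5%

sun_illumination = 10_000  # arbitrary units, direct sunlight
shade_illumination = 100   # assume shadow is about 100x dimmer

white_in_shade = white_reflectance * shade_illumination  # 90 units
black_in_sun = black_reflectance * sun_illumination      # 500 units

# The black surface in sun sends more light to the eye than the
# white surface in shadow -- yet we still see it as black.
print(black_in_sun > white_in_shade)  # True
```

The raw light signal alone cannot tell the brain which surface is white and which is black; only the surrounding context can.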
Recent studies of how we see black and white have provided insights into how the human visual system analyzes the incoming pattern of light and computes object shades correctly. Beyond explaining more about how our own brains work, such research could help us design artificial visual systems for robots. Computers are notoriously horrible at the kind of pattern recognition that comes so naturally to people. If computers could “see” better, they could provide more services: they could recognize our faces for keyless locks, chauffeur us around town, bring us the newspaper or pick up the trash.