When we look at a photograph, we effortlessly identify people and objects—re-creating a three-dimensional scene in our mind from the two-dimensional image. As easy as that task seems, scientists have long puzzled over exactly how our brain does it; even the most powerful computers still struggle to pick 3-D objects out of 2-D images. Until now, most research has focused on the simpler neural representation of 2-D patterns, but a new study shows for the first time that some neurons are also tuned to 3-D details.
The sheer number of possible 3-D shapes has made it hard to study how the brain processes them. A team headed by Charles Connor and Yukako Yamane, neuroscientists at Johns Hopkins University, sidestepped this problem by using a computer program that generated a series of shapes that evolved according to which items provoked the greatest response from certain neurons. They eventually pinpointed several neurons that each responded to specific 3-D configurations.
Object fragments such as projecting points or ridges elicited the greatest response. “Neurons carry very clear information for 3-D parts and for where those parts are relative to each other,” Connor says. The findings support a classical theory that the brain can comprehend objects as spatial combinations of 3-D parts rather than only learning to recognize objects from different 2-D perspectives. Connor notes, however, that the brain may still rely heavily on faster 2-D processing in situations that require rapid recognition.
Note: This article was originally printed with the title, "Seeing In Three Dimensions".