So, with all this frantic movement (not to mention temporary blindness), how is our brain able to piece together a complete, detailed, uninterrupted picture of the world?
The answer could lie in a wrinkle of image processing called "boundary extension," whereby the brain represents a scene not only from the information it takes in, such as a picture, but also from what it extrapolates lies beyond the picture's borders.
"The world surrounds you, but you only have those two eyeballs right in front or your head" with which to take it in, says Helene Intraub, a psychologist at the University of Delaware and a co-author of a new study in Neuron on visual processing.
In 1989 Intraub co-authored the paper that first characterized boundary extension. She found that when people are shown the same picture twice—say, a bicycle in front of a white fence—within several milliseconds they will mistake the second picture for a close-up of the bike rather than an identical image. Interestingly, if people are shown a picture and then a wide-angle version of it—with the bike a little smaller (because more of the fence is in the shot)—they may mistake the second photo for the first. Basically, Intraub says, it appears that "the brain is already planning around the edges," and this may be a way to "help integrate successive eye fixations."
In the current study, Intraub teamed up with psychologist Marvin Chun's lab at Yale University to see, via functional magnetic resonance imaging (fMRI), whether the behavior she had observed was actually taking place in the brain. The team focused on two brain regions known to be associated with scene-specific perception: the parahippocampal place area (PPA), located on the underside of the brain between the two cerebral hemispheres, and the retrosplenial cortex (RSC), in the outermost layer of the cerebrum. Eighteen subjects were shown two successive photos of scenes, such as a fire hydrant on a lawn, in one of four pairings: close-up then wide-angle, wide-angle then close-up, two wide-angle shots, or two close-ups in a row.
In the RSC there was a spike of activity when the first photo was flashed. An equivalent activation was measured when the second picture appeared in every scenario—except when a wide-angle shot followed a close-up. The attenuation of activity when a wide-angle shot was shown after a tighter picture indicates, Intraub says, "the brain [region] is going, 'ho-hum, I've seen that before.'" In the PPA there was decreased activity in every scenario except when a close-up followed a wide shot. This pattern implies that the PPA experiences boundary extension, Intraub explains, but also that it picks up individual features in a scene. "In the PPA," she says, "what we found is if you show a close-up and then the same close-up again, there is still a recognition—it got a little attenuation."
The researchers also tested an area called the lateral occipital cortex, a region located at the back of the brain that is known to be object-oriented. In every trial, its activity attenuated when the second photo was shown, implying it was only noting the presence of the hydrant.
While boundary extension appears to be a coping mechanism for integrating the information the eyes take in across glances, Intraub notes, "We can't make any direct connection," because these regions "are not involved in moving the eyes." Instead, she says, "this part of the brain, we think, is telling us about the mental representation," which it renders from the different elements the eyes have taken in. She adds that more fMRI work is needed to further characterize the exact roles of the PPA and RSC in visual processing.