The brain is an amazingly dynamic organ. Millions of neurons in all corners of our gray matter send out an endless stream of signals. Many of the neurons appear to fire spontaneously, without any recognizable triggers. With the help of techniques such as electroencephalography (EEG) and microelectrode recordings, brain researchers are listening in on the polyphonic concert in our heads. Any mental activity is accompanied by a ceaseless crescendo and diminuendo of background processing. The underlying principle behind this seeming racket is not understood. Nevertheless, as everyone knows, the chaos creates our own unique, continuous stream of consciousness.
And yet it is very difficult to focus our attention on a single object for any extended period. Our awareness jumps constantly from one input to another. No sooner have I written this sentence than my eyes move from the computer screen to the trees outside my window. I can hear a dog barking in the distance. Then I remember the deadline for this article--which isn't going to be extended again. Resolutely, I force myself to type the next line.
How does this stream of impressions come to be? Is our perception really as continuous as it seems, or is it divided into discrete time parcels, similar to frames in a movie? These questions are among the most interesting being investigated by psychologists and neuroscientists. The answers will satisfy more than our curiosity--they will tell us if our experience of reality is accurate or a fiction and if my fiction is different from yours.
Did You See That Animal?
Nothing that we perceive, think or feel falls out of the blue into our inner eye. Each mental feat is grounded in particular processes in the brain. Although scientific research methods are still poorly suited to studying the neuronal processes that accompany our conscious experience, much has been learned concerning the neural basis of subjective experience. My old friend and colleague, the late Francis Crick, and I coined a term for these fascinating processes: neuronal correlates of consciousness, or NCCs--the set of firings among neurons that correlates with each bit of awareness that we experience.
How are we to understand the creation and disappearance of such NCCs? Do they spring--like Athena from the head of Zeus--completely formed from unconscious brain activity, only to dissolve instantly again? Such an all-or-nothing principle would certainly conform to our subjective experience, in which a thought or sensation is suddenly there and then disappears. On the other hand, NCCs might build up over a longer time until they intrude into our awareness and may then only slowly fade until they are so slight that we can no longer perceive them.
Something like this second theory is advanced by psychologist Talis Bachmann of the University of Tartu in Estonia. Bachmann believes that consciousness for any one sensation takes time, comparable to the development of a photograph. Any conscious percept--say, the color red--does not instantly appear; we become aware of it gradually. A large body of experimental work seems to support this hypothesis.
Measuring reaction times is the most obvious approach to studying the temporal structure of consciousness. As early as the 19th century, psychologists exposed test subjects to flashes of light that varied in duration and intensity. They were attempting to discover how long an individual had to be exposed to a stimulus to perceive it consciously and how close in time two stimuli had to be to be perceived as one continuous sensation.
Today researchers flash a small black bar on a computer screen and ask subjects to press a button as soon as they recognize whether the bar is vertical or horizontal. Measured this way, however, the reaction time includes not only the interval it takes for the eye and brain to process the stimulus but also how long it takes for the desired motor response--pressing the button.
To separate these components, researchers such as Simon J. Thorpe of the Brain and Cognition Research Center in Toulouse, France, measure so-called evoked potentials--changes in the electrical activity of neurons. This brain signal can be captured by electrodes attached to the scalp, as in an EEG recording. In one experiment, subjects were asked to decide quickly whether an image that flashed on a screen for fractions of a second contained an animal or not. This task did not prove difficult, even though they had no idea what kind of animal would be projected.
It became evident that the individuals needed less than half a second to give the correct answer. The time was about the same when they were asked to press a button to indicate whether an image showed a car or another means of transportation. The researchers then compared the brain reactions triggered by the animal images with those elicited by scenes containing no animals. In the initial fractions of a second after presentation, the EEG patterns were nearly identical.
It takes approximately 30 to 50 milliseconds for nerve impulses to travel from the eye's retina to the visual centers of the cerebral cortex at the back of the head. By 150 milliseconds, the evoked potential in response to animal images diverged from the electrical brain potential following nonanimal images. In other words, after about one tenth of a second something in the cerebral cortex began to distinguish animal from nonanimal pictures. Given that the processing time of lone neurons is in the millisecond range, this categorization is remarkably swift and can be accomplished only via massive parallel processing.
This result does not mean, however, that the information animal or not animal is consciously accessible within 150 milliseconds. Sight occurs in a flash, but the brain needs more time to create conscious impressions.
Odd things can happen when stimuli follow in rapid succession, and it doesn't matter whether they are visual, acoustic or tactile. For example, registering one image can distort previous or subsequent images or suppress them completely if they are flashed quickly on a monitor. Psychologists refer to this effect as masking.
Masking makes it clear that our perception can deviate significantly from reality. Such systematic distortions of perception teach researchers the rules that the mind uses to construct its view of the world. The most frequently used technique is backward masking, in which the mask follows an initial stimulus. Here both stimuli can fuse completely, as neuropsychologist Robert Efron of the University of California at Davis found out. When Efron flashed a 10-millisecond-long green light immediately after a 10-millisecond-long red light, his subjects reported a single flash. What color did they see? Yellow, rather than a red light that changed into green. Two images in rapid succession sometimes result in a single conscious impression.
Recently Stanislas Dehaene, a cognition researcher at INSERM in Orsay, France, used the masking technique to study word processing. Dehaene presented subjects who were lying in a functional magnetic resonance imaging (fMRI) scanner with a series of slides in rapid succession. On the slides were simple words like lion. These words appeared for barely 30 milliseconds--just long enough for the individuals to decode them correctly. Yet if a series of random images appeared before and after the target word, recognition fell off dramatically.
When the word was seen, the fMRI machine recorded vigorous brain activity in multiple locations, including in vision and speech centers. Masked, however, by the random images immediately preceding and following the word lion on the screen, brain activity was muted and confined to parts of the visual cortex involved in early phases of vision. Masking eliminated conscious recognition of lion; only the input stages of the visual brain were activated.
Researchers have prolonged the interval between stimuli and still achieved masking--up to 100 milliseconds. This means that even an image that strikes the retina one tenth of a second after a prior image can cancel out conscious perception of the first image. And yet, although the masking thwarts the development of a visual impression, it cannot prevent unconscious processing: test subjects who were encouraged to guess often correctly identified the initial images, even though they had been masked from conscious perception.
How Long Is a Moment?
How can we explain such aberrations? How is it even possible for a second stimulus to alter the perception of one that has already arrived? Think of two waves approaching a beach: if they move at the same speed, the second should never be able to catch up with the first. But neural processing involves feedback. As soon as neuronal signals start shuttling back and forth within the visual cortex, or between the cortex and deeper brain regions, subsequent information can distort the processing of earlier information.
How far back in time masking can extend tells us something about temporal delay in the brain's feedback loops. If we add the experimentally derived maximum masking span of approximately 100 milliseconds to the 150 milliseconds that are required to discern a visual signal, this means that a minimum of about a quarter of a second is needed to consciously see a stimulus. Depending on its characteristics, the time span can be even longer but hardly ever shorter. Our perceptions, it seems, lag considerably behind reality--and we don't notice that.
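The arithmetic above can be made explicit in a short sketch. The timing values are those cited in the text; the function name and structure are my own, purely for illustration:

```python
# Rough lower bound on the time needed to consciously see a stimulus,
# combining two experimentally derived values cited in the text.

DISCERN_MS = 150   # time for the cortex to begin distinguishing image categories
MAX_MASK_MS = 100  # longest interval over which backward masking still works

def minimal_perception_ms(discern_ms=DISCERN_MS, mask_ms=MAX_MASK_MS):
    """Minimum delay, in milliseconds, before a stimulus reaches awareness."""
    return discern_ms + mask_ms

print(minimal_perception_ms())  # 250 -- about a quarter of a second
```

The sum is a lower bound only: depending on the stimulus, the actual lag can be longer, but hardly ever shorter.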
Neuronal correlates of consciousness have a kind of minimum life span, and this existence corresponds in our experience more or less to what can be called the minimal perceptual moment. In all probability, subsequent brain activity during backward masking disturbs precisely those processes that signal the onset and disappearance of a target stimulus. Looked at the other way around, remnants of previous activity remain for a short time and may momentarily prevent the development of new NCCs. This competition among overlapping neural coalitions may be a significant feature of consciousness.
Sensory impressions come and go for various reasons: eye movements, a change in attention, or simply sensory cells becoming fatigued. With increasing visual input, for example, the firing activity of the visual cortex rises steadily and may shoot up precipitously once a certain threshold has been reached. This is why, for example, a light that is flashed briefly appears to be brighter than a steady beam of the same intensity. After the initial rapid increase, the perceived brightness of the steady beam gradually begins to drift to a lower value.
If sensing such a simple input can be so variable, imagine how complicated it must be for the brain to assess the actual world. One of the significant issues facing consciousness research is the fact that the world around us is so incredibly complex and multifaceted. Objects can only rarely be reduced to qualities that are as easily measured as simple brightness or color. A face, for example, is characterized by unique shapes, contours, colors and textures. The position and gaze of the eyes, the play of the mouth, the form of the nose, skin folds and blemishes--how do we integrate all these details into a unified image that conveys a person's identity, gender and emotional state?
This question goes to the core of the so-called binding problem. If NCCs arise within the various processing centers in the brain at different times, shouldn't each of the attributes be perceived with a time lag? How is the brain able to integrate all these individual activities?
Neurobiologist Semir Zeki of University College London has been researching this problem for many years. By measuring how subjects perceive squares that can randomly change color as they move on a screen, he has shown that a change in color of such an object is seen 60 to 80 milliseconds faster than a change in the direction of that object's movement. That is, one attribute is registered at a different time than another attribute of the same object. This finding suggests that there may not be much truth to the presumed unity of consciousness--at least not when we are looking at extremely short time spans.
Such discrepancies rarely make themselves felt in our everyday lives, however. When a car races past me, its form does not seem to lag behind its color, even though each processing step--awareness of form, color, sound, speed and direction of movement--requires separate assessments by different regions of my brain, each with its own dynamic and delay. A unified impression is rapidly reached because the brain has no mechanism for registering the asynchrony. We are almost never aware of the differing time lags. We simply perceive all the qualities of an object simultaneously--as incoherent as that composite image might be.
Snapshots in Time
A common metaphor for consciousness is that we live and experience things in a river of time. This implies that perception proceeds smoothly from our first waking moment of the day until we sink our heads onto the pillow at night. But this continuity of consciousness may be yet another illusion. Consider patients who experience cinematographic vision resulting from severe migraine headaches. According to Oliver Sacks, the neurologist and noted author who coined the term, these men and women occasionally lose their sense of visual continuity and instead see a flickering series of still images. The images do not overlap or seem superimposed; they just last too long, like a movie that has been stuck on freeze-frame and then suddenly jumps ahead to catch up to a real-time moving scene.
Sacks describes one woman on a hospital ward who had started to run water into a tub for a bath. She stepped up to the tub when the water had risen to an inch deep and then stood there, transfixed by the spigot, while the tub filled to overflowing, running onto the floor. Sacks came upon her, touched her, and she suddenly saw the overflow. She told him later that the image in her mind was of the water coming from the faucet into the inch of water and that no further visual change had occurred until he had touched her. Sacks himself has experienced cinematographic vision following the drinking of sakau, a popular intoxicant in Micronesia, describing a swaying palm as a succession of stills, like a film run too slow, its continuity no longer maintained.
These clinical observations demonstrate that under normal circumstances, temporal splitting of sensations is barely, if ever, noticeable to us. Our perception seems to be the result of a sequence of individual snapshots, a sequence of moments--like discrete movie frames that, scrolling past quickly, we experience as continuous motion. The important point is that we experience events that occur more or less at the same moment as synchronous. And events that reach us sequentially are perceived in that order.
Depending on the study, the duration of such snapshots is between 20 and 200 milliseconds. We do not know yet whether this discrepancy reflects the crudeness of our instruments or some fundamental quality of neurons. Still, such discrete perceptual snapshots may explain the common observation that time sometimes seems to pass more slowly or quickly.
Assume that each perceptual snapshot lasts longer for some reason, so that fewer snapshots are taken per second. In this case, an external event would appear shorter and time would seem to race by. But if the individual snapshots were shorter in duration--there were more of them per unit of time--then time would appear to pass more slowly.
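A toy model makes the snapshot idea concrete: a fixed external event gets sampled into discrete perceptual frames, and the frame duration determines how many snapshots the event occupies. All numbers here are illustrative, not measured values:

```python
# Toy model of the snapshot hypothesis: longer frames mean fewer snapshots,
# so an event seems briefer and time races by; shorter frames mean more
# snapshots, so the same event seems stretched out and time crawls.

def snapshots(event_ms, frame_ms):
    """Number of perceptual frames a fixed external event occupies."""
    return event_ms // frame_ms

event = 1000                   # a one-second external event
print(snapshots(event, 200))   # 5 long frames: time seems to race by
print(snapshots(event, 20))    # 50 short frames: time seems to crawl
```

The 20-to-200-millisecond frame durations used here simply span the range of snapshot estimates reported in the studies mentioned above.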
People who have been in automobile accidents, natural catastrophes and other traumatic events often report that at the height of the drama, everything seemed to go in slow motion. At present, we know little about how the brain mediates our sense of time.
If, in fact, changing coalitions of larger neuron groups are the neuronal correlates of consciousness, our state-of-the-art research techniques are inadequate to follow this process. Our methods either cover large regions of the brain at a crude temporal resolution (such as fMRI, which tracks sluggish power consumption at timescales of seconds), or we register precisely (within one thousandth of a second) the firing rate of one or a handful of neurons out of billions (microelectrode recording). We need fine-grained instruments that cover all of the brain to get a picture of how widely scattered groups of thousands of neurons work together. Eventually this level of interrogation may enable us to manipulate our flow of consciousness with technology. As things stand now, this is only a dream.