AT THE HEART OF SCIENCE are judicious observations and measurements. This enterprise presupposes that whatever is under study can be measured. But how can consciousness—the notoriously ineffable and ethereal stuff that cannot even be rigorously defined—be measured? Recent progress makes me optimistic.
Consider a problem of great clinical, ethical and legal relevance, that of inferring the presence of consciousness in severely brain-damaged patients. Often the victims of traffic accidents, cardiac arrests or drug overdoses, such patients have periods when they are awake, and they may spontaneously open their eyes. On occasion, their heads turn in response to a loud noise, or their eyes might briefly track an object, but never for long. They might grind their teeth, swallow or smile, but such activities occur sporadically, not on command. These fragmentary acts appear reflexlike, generated by an intact brain stem.
As many as 25,000 such “vegetative” patients in hospices and nursing homes hover for years in this limbo, at a steep emotional and financial cost. The extent of the damage and the persistent absence of purposeful behavior usually leave little doubt that consciousness has fled the body for good. Terri Schiavo was such a case, alive but unconscious for 15 years before her court-ordered death in 2005 in Florida.
Even worse, though, is the possibility that some of these patients may experience some remnants of consciousness, unable to communicate their feelings of discomfort or pain, agonizing thoughts or poignant memories to the outside world. Until recently, nothing could be done to diagnose when an awake mind was entombed inside a damaged brain.
Technology has come to the rescue with the demonstration—by Adrian M. Owen and his research group at the University of Cambridge—of awareness in an unresponsive patient with the aid of functional brain imaging. The patient, a young woman who sustained a massive head injury in a car accident, fulfilled all criteria for the vegetative state. In particular, she was unable to signal with her eyes or hands in response to commands. Owen placed the noncommunicative patient in a magnetic scanner and asked her to imagine playing tennis or to imagine visiting the rooms in her house. You and I have no trouble doing these tasks. In healthy volunteers given these instructions, regions of the brain involved in motor planning, spatial navigation and imagery light up. They did likewise in the unfortunate woman. Her brain activity in these regions far outlasted the briefly spoken words and in its specificity could not be attributed to a brain reflex. The pattern of activity appeared quite willful, indicating that the patient was, at least occasionally, conscious but unable to signal this fact, more effectively cut off from her loved ones than any prisoner in solitary confinement. It may be possible to develop this technique into a kind of two-way radio between the patient and the rest of humankind.
It remains an open question how prevalent this tragic condition—aware yet utterly uncommunicative—is. Brain scans of 17 vegetative patients have turned up only one other unresponsive patient with such a voluntary brain signal. Keep in mind, however, that absence of evidence is not evidence of absence and that the presence of consciousness will depend on the exact nature of the brain injury. The point I want to emphasize is that Owen and other researchers like him are developing scanning tools to spot consciousness without any external behavior.
Betting on Consciousness
The ultimate judge of any conscious feeling is the subject herself. This truism is used in everyday life: Can you see the angry face? Well, if you can’t, then you’re not conscious of it. This seductively simple strategy has drawbacks; in particular, people disagree on what exactly “consciously seeing” means when the face was only briefly flashed on a computer screen. (Did you see any part of a face? Did you think you saw something like a face?) To get around this problem, neuropsychologists Navindra Persaud, Peter McLeod and Alan Cowey of the University of Oxford exploit gambling.
Their research is based on the insight, backed up by a philosophical theory of consciousness called higher-order thought, that when you are conscious of something, you can confidently judge what you saw. Say you come to my lab and I show you a number of fake six-letter words such as XTNVMT and ask you to remember as much about them as possible. After you have seen these training words, I tell you that they are actually generated by some fixed rules (for example, that an X is always followed by a T). Next, I show you similar nonsense words you have not seen before, and you have to judge whether you think each test word obeys the same unknown rules as do the training words you have just seen. It is well known that you will do much better than chance even though you feel that you are guessing. You are not conscious of the grammatical rules, yet something in your brain knows whether or not the test words follow the rules, without you feeling confident about this knowledge.
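The grammar task described above can be sketched in a few lines of code. This is my own illustration, not the actual Oxford stimuli: the six-letter words, the alphabet and the single rule (every X must be immediately followed by a T) are taken from the essay's example, while the letter set and the generation method are assumptions.

```python
import random

def follows_rule(word: str) -> bool:
    """Return True if every 'X' in the word is immediately followed by a 'T'
    (the example rule from the text)."""
    for i, ch in enumerate(word):
        if ch == "X" and (i == len(word) - 1 or word[i + 1] != "T"):
            return False
    return True

def make_word(grammatical: bool, length: int = 6) -> str:
    """Generate a random six-letter nonsense word that obeys (or violates)
    the hidden rule, for use as a training or test item."""
    letters = "MNTVX"  # assumed alphabet, chosen to resemble XTNVMT
    while True:
        word = "".join(random.choice(letters) for _ in range(length))
        if follows_rule(word) == grammatical:
            return word
```

In the experiment, of course, only the experimenter runs the equivalent of `follows_rule`; the subject must classify each test word without conscious access to the rule.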
Persaud and his colleagues varied this game in a very clever way, relying on people’s instinct to make money. In this variant, every time you decide whether or not the word follows the unknown rule, you bet either $1 or $2 on your decision. If you’re right, you get to keep the money; if you’re wrong, you lose it. You clearly should wager high whenever you are confident in your judgment about the six-letter word. The Oxford volunteers confounded these expectations. In most trials they made the correct choices, but they placed low wagers. The volunteers thus failed to convert their above-chance performance on the yes-no decisions into money. Their failure to reap a profit despite performing better than expected by pure guessing indicates that the subjects were using unconscious processing. One advantage of the wagering measure is that it does not force subjects to reflect on what they are conscious of, a reflection that would perturb the very phenomenon scientists wish to measure.
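The payoff logic of post-decision wagering is simple enough to spell out. The following sketch uses the $1/$2 stakes from the text; the accuracy figure and trial count are illustrative assumptions, not the study's data. It shows why low wagers leave money on the table even when the yes-no decisions are better than chance.

```python
def winnings(trials):
    """trials: list of (correct, wager) pairs, wager being 1 or 2 dollars.
    A correct answer keeps the wager; a wrong one loses it."""
    return sum(wager if correct else -wager for correct, wager in trials)

# A subject who is right on 60 of 100 trials but, lacking confidence,
# always bets low ($1):
low = winnings([(i < 60, 1) for i in range(100)])    # 60*1 - 40*1 = $20

# The same 60% accuracy with confident $2 wagers on the correct trials:
high = winnings([(i < 60, 2 if i < 60 else 1) for i in range(100)])  # 120 - 40 = $80
```

The gap between the two totals is what confident, conscious knowledge is worth in this game; the Oxford volunteers' earnings looked like the first case, not the second.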
Ironically, the leitmotif of Western philosophy since the days of Apollo’s temple at Delphi, “know thyself,” could have been put to pecuniary use had subjects learned to trust their gut instincts and bet high on something of which they were not yet conscious. I leave it to others to figure out whether such unconscious thought patterns have contributed to the abysmal state of the financial markets and our retirement accounts.
Instead of arguing with people about whether or not they are conscious of grammatical rules, researchers can use wagering to study consciousness without an agreed-on formal definition of it.
Both the brain-based measure and the wagering technique are far from ideal instruments to infer the presence or absence of feelings in any creature, whether healthy human adult or baby, monkey or bee. The situation is a bit analogous to detecting a black hole. You can’t see it directly, as it sucks up all matter and all radiation. Yet its position can be inferred by the gravitational effect it exerts on nearby stars. I have no doubt that science will develop better consciousness meters. And herein lies progress, for what can be measured has a much better chance of being understood by us than does something that can only be argued about. Hence the motto of this essay.
Note: This article was originally printed with the title, "Measure More, Argue Less".