Can a lobster ever truly have any emotions? What about a beetle? Or a sophisticated computer?

The only way to resolve these questions conclusively would be to engage in serious scientific inquiry—but even before studying the literature, many people have pretty clear intuitions about what the answers are going to be. A person might just look at a computer and feel certain that it could not possibly be feeling pleasure, pain or anything at all. That is why we do not mind throwing a broken computer in the trash. Likewise, someone putting a lobster in a pot of boiling water does not worry too much about the crustacean feeling angst about its impending doom. In the jargon of philosophy, these intuitions we have about whether a creature or thing is capable of feelings or subjective experiences are called “intuitions about phenomenal consciousness.”

The study of intuitions about consciousness has long played a crucial role in the discipline of philosophy, in which facts about intuitions such as these form the basis for some complex and influential arguments [see “The Movie in Your Head,” by Christof Koch; Scientific American Mind, Vol. 16, No. 3, 2005]. But traditionally, the examination of these intuitions has employed a somewhat peculiar method. Philosophers did not actually ask people what intuitions they had. Instead, each philosopher would simply think the matter over for himself or herself and then write something like: “In a case such as this, it would surely be intuitive to say...”

The new field of experimental philosophy introduces a novel twist on this traditional approach. Experimental philosophers continue the quest to understand people's ordinary intuitions, but they do so using the methods of contemporary cognitive science—experiments, statistical analyses, cognitive models, and so forth. Just in the past year or so, a number of researchers have been applying this new approach to the study of intuitions about consciousness. By analyzing how people think about three very different kinds of entities—a corporation, a robot and God—we can better understand how people think about the mind.
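To make that method concrete, here is a minimal sketch of the sort of analysis such an experiment might involve, assuming a design in which subjects rate how natural each sentence sounds on a seven-point scale. The ratings below are invented purely for illustration, and the paired t-test (via SciPy) is just one reasonable analysis choice; none of this is code or data from the studies discussed in this article.

```python
# A minimal, hypothetical sketch of an experimental-philosophy analysis.
# The ratings are invented: ten imaginary subjects each rate how natural
# a sentence sounds, from 1 (very unnatural) to 7 (very natural).
from scipy import stats

# Ratings for a sentence that attributes no feeling ("Acme believes...")
# and for one that attributes an experience ("Acme feels great joy...").
nonphenomenal = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]
phenomenal = [2, 1, 3, 2, 1, 2, 3, 1, 2, 2]

# Paired t-test: did the same subjects rate the two sentence types differently?
t_stat, p_value = stats.ttest_rel(nonphenomenal, phenomenal)

print(f"mean non-phenomenal rating: {sum(nonphenomenal) / len(nonphenomenal):.1f}")
print(f"mean phenomenal rating:     {sum(phenomenal) / len(phenomenal):.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is the natural choice under this assumed design because each subject rates both sentence types, so the comparison is made within subjects rather than between separate groups.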

The Mental Bottom Line on Corporations

In one recent study, philosopher Jesse Prinz of the University of North Carolina at Chapel Hill and I looked at intuitions about the application of psychological concepts to organizations composed of groups of people. Consider Microsoft Corporation, for example. One might say that Microsoft “intends to adopt a new sales strategy” or that it “believes Google is one of its main competitors.” In sentences such as these, people seem to be applying certain psychological concepts to a corporation.

But which psychological concepts are people willing to use in this way? The study revealed an interesting asymmetry. Subjects were content to apply concepts that did not attribute any feeling or experience. For example, they indicated that it would be acceptable to use sentences such as:

  • Acme Corporation believes that its profit margin will soon increase.
  • Acme Corporation intends to release a new product this January.
  • Acme Corporation wants to change its corporate image.

But they balked at all the sentences that attributed feelings or subjective experiences to corporations:

  • Acme Corporation is now experiencing great joy.
  • Acme Corporation is getting depressed.
  • Acme Corporation is experiencing a sudden urge to pursue Internet advertising.

These results seem to indicate that people are willing to apply some psychological concepts to corporations but that they are not willing to suppose that corporations might be capable of phenomenal consciousness.

Bots and Bodies

Perhaps the issue here is that people attribute phenomenal consciousness only to creatures that have the right type of body. To test this hypothesis, we can look to other kinds of entities that might have mental states but whose bodies look nothing like a human being's.

One promising approach in this case would be to look at people's intuitions about the mental states of robots. Robots are physically quite unlike humans, but we can easily imagine a robot that acts very much as a human does. Studies could then determine what kinds of mental states people are willing to attribute to a robot under these conditions. This approach was taken up in work by experimental philosophers Justin Sytsma and Edouard Machery of the University of Pittsburgh and in separate work by Larry (Bryce) Huebner of Tufts University. All the experiments arrived at the same basic answer.

In one of Huebner's studies, for example, subjects were told about a robot that acted exactly like a human and asked what mental states that robot might be capable of having. Strikingly, the study revealed the same asymmetry observed in the case of corporations. Subjects were willing to say, for instance:

  • It believes that triangles have three sides.

But they were not willing to say:

  • It feels happy when it gets what it wants.

Here again we see a willingness to ascribe certain kinds of mental states but not those that require phenomenal consciousness. Interestingly enough, this tendency does not seem to result entirely from the fact that the robot has a central processing unit in place of an ordinary human brain. Even when the researchers controlled for whether the creature had a CPU or a brain, subjects were more likely to ascribe phenomenal consciousness when it had a body that made it look like a human being.
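The controlled comparison just described has the shape of a two-by-two design: processor (brain versus CPU) crossed with body (human-like versus robotic). As a purely hypothetical sketch, the snippet below summarizes such a design by computing the average effect of each factor; the ascription rates are invented and merely mimic the qualitative pattern the researchers report, not any study's actual numbers.

```python
# Hypothetical 2x2 summary: processor (brain vs. CPU) crossed with body
# (human-like vs. robotic). Rates are invented for illustration only.

# Fraction of (imaginary) subjects willing to say the agent "feels pain",
# keyed by (processor, body).
ascription_rate = {
    ("brain", "human-like"): 0.80,
    ("brain", "robotic"): 0.45,
    ("cpu", "human-like"): 0.70,
    ("cpu", "robotic"): 0.35,
}

def main_effect(index, level_a, level_b):
    """Average ascription difference between two levels of one factor."""
    a = [v for k, v in ascription_rate.items() if k[index] == level_a]
    b = [v for k, v in ascription_rate.items() if k[index] == level_b]
    return sum(a) / len(a) - sum(b) / len(b)

# If the pattern in the text holds, body should matter more than processor.
print(f"body effect (human-like - robotic): {main_effect(1, 'human-like', 'robotic'):+.2f}")
print(f"processor effect (brain - CPU):     {main_effect(0, 'brain', 'cpu'):+.2f}")
```

On these made-up numbers, the body factor shifts ascriptions far more than the processor factor does, which is the asymmetry the studies found.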

God in the Machine

What if an entity has no body at all? How does that change our intuitions about whether it is capable of conscious experience? To address this question, we can turn to intuitions about the ultimate disembodied being: God. A study published in 2007 by Harvard University psychologists Heather M. Gray, Kurt Gray and Daniel M. Wegner looked at people's intuitions about which kinds of mental states God could have. By now you have probably guessed the result. People were content to say that God could have psychological properties such as:

  • Thought.
  • Memory.
  • Planning.

But they did not think God could have states that involved feelings or experiences, such as:

  • Pleasure.
  • Pain.
  • Fear.

In subsequent work, the researchers directly compared attributions of mental states to God with attributions of mental states to Google Corporation. These two entities—different though they are in so many respects—elicited exactly the same pattern of responses.

If we look at the results from these various studies, it is hard to avoid the sense that scientists should be able to construct a single unified theory that explains the whole pattern of people's intuitions. Such a theory would describe the underlying cognitive processes that lead people to think that certain entities are capable of a wide range of psychological states but are not capable of truly feeling or experiencing anything. So far, however, no one has proposed such a theory. Further theoretical work is badly needed.