
Will Your Smartphone Ever Love You?

The movie Her makes a compelling case for a computer program with feelings. Is that actually possible?

But Would They Be Conscious?
Just because an AI program can talk like a smart and hyperefficient woman with a seductive voice does not imply that the program feels anything or is conscious. That's not to say that people wouldn't react, as Theodore did, by behaving as if the program had actual feelings for them. We have an innate tendency to impute feelings to many things, from our canine and feline companions to teddy bears, dolls, cars and other inanimate objects. That is the psychological reality of the human condition, which is why Samantha and her male-voiced counterpart would be a huge commercial success if they eventually make their way into the marketplace.

But that leaves the ontological question untouched: Is simulating the relevant behavior—attraction, passion, desire, betrayal, angst, and so on—the same as having these feelings? The traditional answer is no. God endowed us humans, and only us, with an immortal living soul—without a soul, there is no consciousness.

Of course, we children of the Enlightenment know better. Consciousness is a product of the most highly organized chunk of matter in the universe, the central nervous system. And once the brain ceases to function, the conscious mind likewise dissolves. To put it as succinctly as a Zen koan: no brain, never mind.

On the bright side, this contemporary view also implies that if all the relevant neural mechanisms that underlie consciousness were to be faithfully replicated in an artificial brain, then this construct would be conscious. Function follows from mechanism as long as all the interactions involved in biological cause and effect are present.

Consider Scarlett Johansson, the actress who voiced Samantha, and a yet-to-be-invented technique that could somehow scan her brain without harming it and map its 100 billion neurons and quadrillion synapses. From this scan, future neuroengineers might construct a gigantic software package, SimSamantha. It would need to run on a supercomputer that mathematically simulated the biochemical and biophysical activity of Johansson's brain.
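To give a concrete, if absurdly scaled-down, sense of what "mathematically simulating the biochemical and biophysical activity" of a brain involves, here is a minimal sketch in Python that steps a three-neuron leaky integrate-and-fire network forward in time. Every parameter and connection is invented for illustration; a real whole-brain simulation would have to do something of this kind for tens of billions of cells at far greater biophysical detail.

```python
import numpy as np

# A toy leaky integrate-and-fire network: three neurons with made-up parameters.
# This only illustrates what "simulating neural activity" means in miniature;
# nothing here corresponds to any real brain.

np.random.seed(0)

n_neurons = 3
dt = 1.0                                   # time step in ms
tau = 20.0                                 # membrane time constant in ms
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane potentials in mV

# weights[i, j] is the jump (in mV) that a spike of neuron j causes in neuron i
weights = np.array([[0.0, 2.0, 0.0],
                    [0.0, 0.0, 3.0],
                    [2.5, 0.0, 0.0]])

v = np.full(n_neurons, v_rest)             # current membrane potentials
spikes = np.zeros(n_neurons)               # spikes emitted on the previous step

for t in range(200):                       # simulate 200 ms
    drive = np.random.normal(16.0, 4.0, n_neurons)      # noisy external input, mV
    synaptic = weights @ spikes                          # input from last step's spikes
    v += (-(v - v_rest) + drive + synaptic) * dt / tau   # leaky integration
    spikes = (v >= v_thresh).astype(float)               # which neurons fire now
    v[spikes == 1] = v_reset                             # reset the neurons that fired
    if spikes.any():
        print(f"t = {t} ms: neuron(s) {np.flatnonzero(spikes)} spiked")
```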

If the neuroscientists had accurately captured all of her brain's bioelectric activity, SimSamantha would replicate the behavior of Johansson. But that's not all. According to the commonly held view, if this simulation were to capture all aspects of brain processing relevant to consciousness, then the computer program would experience the infatuation and rapture of being entranced by somebody else. SimSamantha would know real love.

In this line of argument, known as functionalism, the conscious mind is nothing but the brain at work—brain circuits pulsing on and off give rise to perception and thought. So functionalists, just like Theodore, believe that SimSamantha is capable of feeling, of loving him. (Whether she could simultaneously love 641 other people, as she claims, is a thornier matter.)

Functionalism is part of the miasma that hovers over and sometimes obscures the thinking of computer scientists and software engineers in Silicon Valley. The belief that sooner or later computers will become conscious is widespread among the digerati and engineers. But in thinking about one of the most difficult problems in all of science, brain researchers have come up with theories of consciousness that break with the functionalist tradition. The integrated information theory (IIT) of psychiatrist and neuroscientist Giulio Tononi offers a strikingly different perspective. (I previously described IIT in “Ubiquitous Minds,” Scientific American Mind, January/February 2014.)

I am partial to IIT as the most plausible theory of consciousness and have worked with Tononi on aspects of it. The theory postulates that conscious experience arises from what Tononi terms “integrated information”—the multitude of sensory, motor and cognitive processes that are tied together to form the basis of any one subjective experience.

Any system that possesses some integrated information experiences something. This emphasis on integration reflects a fundamental characteristic of conscious experiences. Each one is highly integrated, holistic. As you watch the colorful minimalist furniture and futuristic architecture in Her, you can't suddenly force yourself to see the scene in black-and-white. Its color is an integrated part of your experience. Whatever information you are conscious of is presented to you wholly and completely; it cannot be subdivided. Underlying this unity of consciousness is a multitude of cause-and-effect interactions among the relevant parts of your brain.

Phi Meters
Integrated information can be calculated by considering your brain in a particular state. Taking into account the brain's immediate past and future, the theory computes a number that indicates how irreducible the brain is, that is, how much it resists being broken down into component parts. The bigger this number, denoted by the Greek letter Φ, or phi (pronounced "fi"), the broader and more sophisticated the conscious experience of the brain. If the organism has many neurons that are amply endowed with synaptic connections, Φ will be high. If the system is reducible to smaller, independent, noninteracting parts, Φ is zero. It has no experience at all. Nada, rien, nothing.
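The full Φ of IIT 3.0 is defined over a system's entire cause-and-effect structure and is notoriously expensive to compute, so the following is only a toy stand-in simplified for illustration: a two-node binary network in which each node copies the other's previous state, scored by how much better the intact system constrains its own next state than its two nodes do when treated as independent parts. The network, the measure and the function names are illustrative assumptions, not Tononi's formalism.

```python
from itertools import product
import numpy as np

# A toy stand-in for "integrated information": how much better the whole two-node
# network constrains its next state than its two nodes do when each is treated
# as an independent part. A drastic simplification, not the Phi of IIT 3.0.

def step(state):
    """Deterministic update rule: each node copies the other's previous value."""
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))

def whole_next_distribution(current):
    """p(next state | current state) for the intact system (deterministic here)."""
    dist = {s: 0.0 for s in states}
    dist[step(current)] = 1.0
    return dist

def part_next_distribution(current, node):
    """One node's next value predicted from its own past alone,
    marginalizing uniformly over the state of the other node."""
    dist = {0: 0.0, 1: 0.0}
    for other in (0, 1):
        full = list(current)
        full[1 - node] = other
        dist[step(tuple(full))[node]] += 0.5
    return dist

def kl(p, q):
    """Kullback-Leibler divergence in bits between distributions over the same states."""
    return sum(p[s] * np.log2(p[s] / q[s]) for s in p if p[s] > 0)

def toy_integration():
    """Average, over current states, of how irreducible the whole's prediction is
    to the product of its parts' predictions."""
    total = 0.0
    for current in states:
        whole = whole_next_distribution(current)
        parts = {s: part_next_distribution(current, 0)[s[0]] *
                    part_next_distribution(current, 1)[s[1]] for s in states}
        total += kl(whole, parts) / len(states)
    return total

print(f"toy integration of the coupled network: {toy_integration():.2f} bits")  # 2.00 bits
```

For this maximally coupled pair, the whole predicts its own future perfectly while each node in isolation predicts nothing, so the toy measure comes out at 2 bits.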

The brain of a patient in whom the entire corpus callosum—the 200 million fibers that connect the left cerebral hemisphere with the right—has been surgically cut to prevent epileptic seizures from spreading can be reduced to two independent hemispheres, each of which is conscious by itself. The once whole brain of the split-brain patient is now reduced to zero Φ because the shared contents of the two halves have been sundered. Meanwhile the two hemispheres are now endowed with nonzero Φ, a measure of consciousness present in each half.
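Continuing the toy sketch above, cutting the "corpus callosum" of the two-node network, so that each node copies only its own past, makes the whole fully reducible to its parts and drives the measure to zero. (A single node in this caricature has no internal structure of its own, so unlike a real hemisphere there is nothing left worth calling consciousness in each half; the sketch only illustrates the whole losing its irreducibility.)

```python
# Continuing the sketch above: "cut the corpus callosum" of the toy network by
# making each node copy only its own previous state, so the two nodes no longer
# interact. The whole is now fully reducible to its parts.

def step(state):
    """Update rule after the cut: each node keeps its own previous value."""
    a, b = state
    return (a, b)

print(f"toy integration after the cut: {toy_integration():.2f} bits")  # 0.00 bits
```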

To capture the experience of Samantha, it is essential to replicate the entire repertoire of interactions within her brain—what philosophers call its intrinsic causal properties—not just its input-output behavior, such as hearing and speech. This task can be accomplished only by building a faithful copy of her brain out of wires, transistors and other devices whose cause-and-effect relations among all components exactly match those in the real brain. A hypothetical artificial organ—call it BrainSamantha—that reflected the physical interactions among neurons, with one nerve cell changing the way another functions, would reproduce the same experiences as Samantha's brain does.

The situation for SimSamantha software running on a digital computer would be quite different, however. The intrinsic causal properties of this program—how any circuit element in the computer switches on or off—are not the same as those of the biological brain being imitated. Ultimately what the computer does is shuffle binary charges from one transistor to a handful of others rather than sending electrical activity from one neuron to thousands of others. Paradoxically, SimSamantha would have the same ability to hear and speak as BrainSamantha, but without any feelings at all. Simulating a brain is not the same as physically replicating it. IIT stipulates that consciousness is an inherent feature of a highly complex set of interactions, taking into account the changes occurring within the system itself, not just the output of its processing. Consciousness cannot be reduced to something more elemental.

The causal properties of digital simulations are very different from BrainSamantha's. A computer program that simulates the weather illustrates what is missing. Although it can accurately forecast an approaching rainstorm, it will never be soaked with rain inside the computer (fortunately). And so it is with consciousness. While Theodore could not tell SimSamantha from BrainSamantha, only the latter could truly love him. Only the latter is endowed with consciousness, with true human feelings.

FURTHER READING

From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. Masafumi Oizumi, Larissa Albantakis and Giulio Tononi in PLOS Computational Biology (in press).

This article was originally published with the title "Does My Smartphone Really Love Me?"
