Recently, at a wedding reception, I polled some friends about immortality. Suppose you could upload your brain tomorrow and live forever as a human-machine hybrid, I asked an overeducated couple from San Francisco, parents of two young daughters. Would you do it? The husband, a 42-year-old M.D.-Ph.D., didn't hesitate before answering yes. His current research, he said, would likely bear fruit over the next several centuries, and he wanted to see what would come of it. “Plus, I want to see what the world is like 10,000 years from now.” The wife, a 39-year-old with an art history doctorate, was also unequivocal. “No way,” she said. “Death is part of life. I want to know what dying is like.”
I wondered whether her answer might give her husband pause, but I diplomatically decided to drop it. Still, the whole thing was more than simply dinner-party fodder. If you believe the claims of some futurists, we'll sooner or later need to grapple with these types of questions because, according to them, we are heading toward a postbiological world in which death is passé—or at least very much under our control.
The most well-imagined version of this transcendent future is Ray Kurzweil's. In his 2005 best-selling book The Singularity Is Near, Kurzweil predicted that artificial intelligence would soon “encompass all human knowledge and proficiency.” Nanoscale brain-scanning technology will ultimately enable “our gradual transfer of our intelligence, personality, and skills to the nonbiological portion of our intelligence.” Meanwhile billions of nanobots inside our bodies will “destroy pathogens, correct DNA errors, eliminate toxins, and perform many other tasks to enhance our physical well-being. As a result, we will be able to live indefinitely without aging.” These nanobots will create “virtual reality from within the nervous system.” Increasingly, we will live in the virtual realm, which will be indistinguishable from that anemic universe we might call “real reality.”
Based on progress in genetics, nanotechnology and robotics and on the exponential rate of technological change, Kurzweil set the date for the singularity—when nonbiological intelligence so far exceeds all human intelligence that there is “a profound and disruptive transformation in human capability”—at 2045. Today a handful of singularitarians still hold to that date, and progress in an aspect of artificial intelligence known as deep learning has only encouraged them.
Most scientists, however, think that any manifestation of our cyborg destiny is much, much farther away. Sebastian Seung, a professor at the Princeton Neuroscience Institute, has argued that uploading the brain may never be possible. Brains are made up of 100 billion neurons, connected by synapses; the entirety of those connections makes up the connectome, which some neuroscientists believe holds the key to our identities. Even by Kurzweilian standards of technological progress, that is a whole lot of connections to map and upload. And the connectome might be only the beginning: neurons can also interact with one another outside of synapses, and such “extrasynaptic interactions” could turn out to be essential to brain function. If so, as Seung argued in his 2012 book Connectome: How the Brain's Wiring Makes Us Who We Are, a brain upload might have to include not just every connection or every neuron but every atom. The computational power required for that, he wrote, “is completely out of the question unless your remote descendants survive for galactic timescales.”
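A rough back-of-envelope calculation suggests the scale Seung is gesturing at. The sketch below takes the article's figure of 100 billion neurons and assumes, purely for illustration, an average of 10,000 synapses per neuron and a bare-minimum record of 8 bytes per connection; neither assumption comes from the text, and real estimates vary widely.

```python
# Back-of-envelope estimate of connectome data scale.
# Assumed, not from the article: synapses per neuron and bytes per record.

neurons = 100_000_000_000        # 100 billion neurons (figure cited above)
synapses_per_neuron = 10_000     # illustrative average; estimates vary widely

total_synapses = neurons * synapses_per_neuron
print(f"Total connections: {total_synapses:.1e}")  # on the order of 1e15

bytes_per_synapse = 8            # minimal record: two 4-byte neuron IDs
petabytes = total_synapses * bytes_per_synapse / 1e15
print(f"Raw storage at {bytes_per_synapse} bytes/connection: ~{petabytes:.0f} PB")
```

Even this crude tally lands at roughly a quadrillion connections and petabytes of raw data just to list who connects to whom, before recording synapse strengths, extrasynaptic interactions or, as Seung's argument would demand, individual atoms.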
Still, the very possibility of a cyborg future, however remote or implausible, raises concerns important enough that legitimate philosophers are debating it in earnest. Even if our technology fails to achieve the full Kurzweilian vision, augmentation of our minds and our bodies may take us part of the way there—raising questions about what makes us human.
I ask David Chalmers, a philosopher and co-director of the Center for Mind, Brain and Consciousness at New York University who has written about the best way to upload your brain to preserve your self-identity, whether he expects he will have the opportunity to live forever. Chalmers, who is 50, says he doesn't think so—but that “absolutely these issues are going to become practical possibilities sometime in the next century or so.”
Ronald Sandler, an environmental ethicist and chair of the department of philosophy and religion at Northeastern University, says talking about our cyborg future “puts a lot of issues in sharp relief. Thinking about the limit case can teach you about the near-term case.”
And, of course, if there is even the remote possibility that those of us alive today might ultimately get to choose between death or immortality as a cyborg, maybe it's best to start mulling it over now. So putting aside the question of feasibility, it is worth pausing to consider more fundamental questions. Is it desirable? If my brain and my consciousness were uploaded into a cyborg, who would I be? Would I still love my family and friends? Would they still love me? Would I, ultimately, still be human?
One of the issues philosophers think about is how we treat one another. Would we still have the Golden Rule in a posthuman world? A few years ago Sandler co-authored a paper, “Transhumanism, Human Dignity, and Moral Status,” arguing that “enhanced” humans would retain a moral obligation to regular humans. “Even if you become enhanced in some way, you still have to care about me,” he tells me. Which seems hard to argue with—and harder still to believe would come to pass.
Other philosophers make a case for “moral enhancement”—using biomedical means to give our principles an upgrade. If we're going to have massive intelligence and power at our disposal, we need to ensure Dr. Evil won't be at the controls. Our scientific knowledge “is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process,” philosophers Julian Savulescu and Ingmar Persson wrote recently. “We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.”
In an op-ed this past May in the Washington Post entitled “Soon We'll Use Science to Make People More Moral,” James Hughes, a bioethicist and associate provost at the University of Massachusetts Boston, argued for moral enhancement, saying it needs to be voluntary rather than coercive. “With the aid of science, we will all be able to discover our own paths to technologically enabled happiness and virtue,” wrote Hughes, who directs the Institute for Ethics and Emerging Technologies, a progressive transhumanist think tank. (For his part, Hughes, 55, a former Buddhist monk, tells me that he would like to stay alive long enough to achieve enlightenment.)
There is also the question of how we might treat the planet. Living forever, in whatever capacity, would change our relationship not just to one another but to the world around us. Would it make us more or less concerned about the environment? Would the natural world be better or worse for it?
The singularity, Sandler points out to me, describes an end state. To get there would involve a huge amount of technological change, and “nothing changes our relationship with nature more quickly and robustly than technology.” If we are at the point where we can upload human consciousness and move seamlessly between virtual and non–virtual reality, we will already be engineering nearly everything else in significant ways. “By the time the singularity would occur, our relationship with nature would be radically transformed already,” Sandler says.
Although we would like to believe otherwise, in our current mere mortal state we remain hugely dependent on—and vulnerable to—natural systems. But in this future world, those dependencies would change. If we didn't need to breathe through lungs, why would we care about air pollution? If we didn't need to grow food, we would become fundamentally disconnected from the land around us.
Similarly, in a world where the real was indistinguishable from the virtual, we might derive as much benefit from digitally created nature as from the great outdoors. Our relationship to real nature would be altered. It would no longer be sensory, physical. That shift could have profound impacts on our brains, perhaps even their silicon versions. Research shows that interacting with nature affects us deeply—for the better. A connection to nature, even at an unconscious level, may be a fundamental quality of being human.
If our dependence on nature falls away, and our physical ability to commune with nature diminishes, then “the basis for environmental concern will shift much more strongly to these responsibilities to nature for its own sake,” Sandler says. Our capacity for solving environmental problems—engineering the climate, say—will be beyond what we can imagine today. But will we still feel that nature has intrinsic value? If so, ecosystems might fare better. If not, other species and the ecosystems they would still rely on might be in trouble.
Our relationship to the environment also depends on the question of timescales. From a geologic perspective, the extinction crisis we are witnessing today might not matter. But it does matter from the timeline of a current human life. How might vastly extended life spans “change the perspective from which we ask questions and think about the nonhuman environment?” Sandler asks. “The timescales really matter to what reasonable answers are.” Will we become more concerned about the environment because we will be around for so long? Or will we care less because we will take a broader, more geologic view?
“It's almost impossible to imagine what it will be like,” Sandler says, “but we can know that the perspective will be very, very different.”
Talk to experts about this stuff for long enough, and you fall down a rabbit hole; you find yourself having seemingly normal conversations about absurd things. “If there were something like an X-Men gene therapy, where they can shoot lasers out of their eyes or take over your mind,” Hughes says to me at one point, then people who want those traits should have to complete special training and obtain a license.
“Are you using those examples to make a point, or are they actual things you believe are coming?” I ask.
“In terms of how much transhumanists talk about these things, most of us try not to freak out newbies too much,” he replies obliquely. “But once you're past shock level 4, you can start talking about when we're all just nanobots.”
When we're all just nanobots, what will we worry about? Angst, after all, is arguably one of our defining qualities as humans. Does immortality render angst obsolete? If I no longer had to stress about staying healthy, paying the bills, and how I'll support myself when I'm too old and frail to travel around writing articles, would I still be me? Or would I simply be a placid, overly contented ... robot? For that matter, what would I daydream about? Would I lose my ambition, such as it is? I mean, if I live forever, surely that Great American Novel can wait until next century, right?
Would I still be me? Chalmers believes this “is going to become an extremely pressing practical, not just philosophical, question.”
On a gut level, it seems implausible that I would remain myself if my brain were uploaded—even if, as Chalmers has prescribed, I did it neuron by neuron, staying conscious throughout, becoming gradually 1 percent silicon, then 5, then 10 and onward to 100. It's the old saw about Theseus's ship—replaced board by board with newer, stronger wood. Is it or isn't it the same ship afterward? If it's not the same, at what point does the balance tip?
“A big problem,” Hughes says, “is you live long enough and you'll go through so many changes that there's no longer any meaning to you having lived longer. Am I really the same person I was when I was five? If I live for another 5,000 years, am I really the same as I am now? In the future, we will be able to share our memories, so there will be an erosion of the importance of personal identity and continuity.” That sounds like kind of a drag.
Despite the singularity's utopian rhetoric, it carries a tinge of fatalism. This, we are told, is the only route available to us: merge with machines or fade away—or worse. What if I don't want to become a cyborg? Kurzweil might say that it's only my currently flawed and limited biological brain that prevents me from seeing the true allure and potential of this future. And that the choices available to me—any type of body, any experience in virtual reality, limitless possibilities for creative expression, the chance to colonize space—will make my current biological existence seem almost comically trivial. And anyway, what's more fatalistic than certain death?
Nevertheless, I really like being human. I like knowing that I'm fundamentally made of the same stuff as all the other life on Earth. I'm even sort of attached to my human frailty. I like being warm and cuddly and not hard and indestructible like some action-film super-robot. I like the warm blood that runs through my veins, and I'm not sure I really want it replaced by nanobots.
Some ethicists argue that human happiness relies on the fact that our lives are fleeting, that we are vulnerable, interdependent creatures. How, in a human-machine future, would we find value and meaning in life?
“To me, the essence of being human is not our limitations ... it's our ability to reach beyond our limitations,” Kurzweil writes. It's an appealing point of view. Death has always fundamentally been one of those limitations, so perhaps reaching beyond death makes us deeply human?
But once we transcend it, I'm not convinced our humanity remains. Death itself doesn't define us, of course—all living things die—but our awareness and understanding of death, and our quest to make meaning of life in the interim, are surely part of the human spirit.