In early March—the Before Time—I sat in a packed auditorium at my school and listened to a philosopher talk about things utterly unrelated to, well, you know. The speaker, Susan Schneider, considered how artificial intelligence and other technologies might alter our bodies and minds, for good or ill. She also investigates these topics in her lively new book Artificial You: AI and the Future of Your Mind. Recent events have distracted us from pondering technological enhancement, the Singularity and all that jazz, but these issues still matter, and Schneider has provocative takes on them. Schneider, who holds the Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, has been busy lately. She recently became the William F. Dietrich Distinguished Professor of Philosophy at Florida Atlantic University, with a joint appointment in the Brain Institute; she is also founding a center on the future of intelligence. She nonetheless found the time to answer a few questions. — John Horgan

An edited transcript of the interview follows.

Horgan: Neuroscientist Christof Koch has suggested that we get brain implants to keep up with machines. Does that strike you as a good idea?

Schneider: It depends upon the larger social and political setting. Several large research projects are currently trying to put AI inside the brain and peripheral nervous system. They aim to hook your brain to the cloud without the intermediary of a keyboard. For corporations doing this, such as Neuralink, Facebook and Kernel, your brain and body are an arena for future profit. Without proper legislative guardrails, your thoughts and biometric data could be sold to the highest bidder, and authoritarian dictatorships would have the ultimate mind-control device. So privacy safeguards are essential.

I have other worries as well. I worry that people will feel pressured to use brain implants to stay competitive in their jobs or in college. Enhancement should be truly optional. In addition, I’m concerned that if only an elite few are enhanced, there will be a vast intellectual gulf between the haves and have-nots. Today’s digital divide could morph into a digital divide of the mind! I’m sure Christof wouldn’t want this sort of future, and he was assuming an egalitarian backdrop for these enhancements. But the problem is how to get there from here.

Horgan: Are philosophical investigations of consciousness relevant to the debate over enhancement?

Schneider: Yes. Here’s one hypothetical scenario I explore in Artificial You. Suppose it is 2040, and you are out shopping. You stroll into a store that engages in “cosmetic neurology”: the Center for Mind Design. There, customers can choose from a variety of brain enhancements. For example, “Human Calculator” will give you savant-level mathematical abilities; “Zen Garden” can give you the meditative states of a Zen master. It is also rumored that if clinical trials go as planned, customers will be able to buy an enhancement bundle called “Merge.” Merge is a series of enhancements allowing customers to gradually augment and transfer all of their mental functions to the cloud over a period of five years. 

Assuming these enhancements are medically safe, what would you buy, if anything?

Here are two philosophical concerns to bear in mind in making your decision, in addition to the concerns I just raised. First, to understand whether it is wise to enhance in these radical ways, you must first understand what and who you are. Philosophers have long debated the nature of the self and mind, and suffice it to say there is a good deal of philosophical disagreement. For instance, if the self or mind intimately depends upon your having a well-functioning biological brain, then replacing too much of your brain with microchips will, at some point, kill you! That’s hardly an enhancement. I call this phenomenon “brain drain.”

A second concern involves the nature of consciousness. Notice that throughout your waking life, and even when you dream, it always feels like something to be you. At this moment you are having bodily sensations, hearing background noise, seeing the words on the page. You are having conscious experience.

Philosophers have long viewed the nature of consciousness as a mystery. They point out that we don’t fully understand why all the information processing in the brain feels like something. They also believe that we still don’t understand whether consciousness is unique to our biological substrate or whether other substrates, such as silicon microchips, could also underlie conscious experience.

For the sake of argument, assume microchips are not the right substrate for consciousness. In this case, if you replace one or more parts of your brain with microchips, you would diminish or end your life as a conscious being!

If this is true, then you would not want to buy Merge, at least if you cared about surviving. Indeed, you would want to be very careful in adding even a single brain chip, for fear that doing so would diminish your consciousness or alter who you are too radically. Otherwise, you could be paying money to end your very existence!

If chips are the wrong substrate for consciousness, consciousness may be a sort of design ceiling on human intelligence augmentation. AIs, in contrast, wouldn’t have this limit, though they would be incapable of consciousness. They may still outthink us, and if they did, the most intelligent entities on Earth wouldn’t even be conscious. This issue makes it rather important to determine whether machines are conscious. Fortunately, I suspect we can eventually learn whether microchips are a substrate for consciousness. I advance two tests for artificial consciousness in the book.

Horgan: In Artificial You, you describe yourself as a transhumanist, and yet you express doubt about transhumanist goals, such as uploading as a route to immortality. In what way are you a transhumanist?

Schneider: As a college student, I was enthralled by the transhumanist vision of a technotopia on Earth. Transhumanists believe that augmented human intelligence and radical longevity are desirable, both from the standpoint of one’s own personal development and for the development of our species as a whole. All around us, the transhumanist vision is becoming more real. For example, Ted Berger’s lab has created an artificial hippocampus to replace lost hippocampal functioning in those who are unable to lay down new memories. The U.S. Defense Department funded a program, called “SyNAPSE,” that aimed to develop a computer resembling the brain in form and function. And the futurist Ray Kurzweil, who is now a director of engineering at Google, has even discussed the potential advantages of forming friendships, Her-style, with personalized AI systems.

I still consider myself a transhumanist because it is my hope that emerging technologies will provide us with disease cures, radical life extension, and even enhance our mental lives, should we wish to enhance. However, I believe that certain transhumanist goals are ill-conceived, such as mind uploading. My worries are akin to those I discussed concerning your hypothetical visit to a Center for Mind Design.

Consider, for example, the popular Amazon show Upload. It is 2033, and Nathan Brown has a car accident; in a last-ditch effort to survive, and egged on in the emergency room by a controlling girlfriend eager to get her claws into his soon-to-be avatar, he uploads his brain onto a computer. He wakes up in “digital heaven,” sort of. (This is a comedy.) Heaven is, in fact, a cheap corporate knockoff of the ultimate mountain lodge vacation, full of technicolor fall foliage yet rife with algorithmic glitches, which sometimes make him long for authentic death. (This show is fun to watch during a pandemic; Nathan’s home isolation was worse than mine.) Artificial You argues that even if the technology works, people like Nathan wouldn’t survive. At best, the attempted scan creates a digital doppelganger. Perhaps this being will be conscious, perhaps not, but it wouldn’t be him in any case. Sadly, the true Nathan died on the scanning table, when his biological brain was destroyed.

Horgan: Is there any kind of cognitive enhancement that you would eagerly embrace? Would you like to be immortal?

Schneider: I’m torn. On the one hand, I’m drawn to the idea of enhancing consciousness, the emotions, and cognitive and perceptual abilities. Bring it on! On the other hand, I’m afraid that, given the metaphysical uncertainty about the nature of the person, we may face enhancement decisions before we have a clear, uncontroversial answer to the question “What is the nature of the self or person?” Of course, I may have little to lose in trying. Suppose, for instance, I learned I would die in a month from a progressive brain tumor. I’m convinced uploading would just create a doppelganger, but I might make a leap of faith and opt for some brain chips, given that grim situation. Maybe I’d get lucky, and brain drain wouldn’t kick in.

Of course, true immortality would require a soul or immaterial mind. Those longing for immortality through radical enhancement are seeking something different: a technological setup that allows them to be around until the big crunch, or even to be a spectator at the heat death of the universe. I call this “functional immortality” to distinguish it from true immortality.

My former teacher Bernard Williams famously found immortality unappealing. He worried that, lacking new and challenging tasks, we would find immortality tedious. Maybe we would feel bored, as Williams suspected, but I doubt we can anticipate how we’d feel in this situation. I, for one, would love the option of finding out how it feels, 2,000, 200,000 or two million years in, to know I could live until the end of the universe!

There are all sorts of cognitive enhancements I would gladly embrace, if only I knew that I would still be me. To be safe, I’d stick to slow, biological enhancements, so as not to damage the biological brain or cause radical shifts in my abilities over a short period of time. Perhaps, after hundreds of years of being myself, I would long for a change, and I would enhance even knowing that I might be ending my own existence. In that case, I wouldn’t really be immortal, even in the sense of functional immortality, but my “descendant” might live until the end of the universe.

Horgan: Some experts, like Gary Marcus, are suggesting that AI has been overhyped and might be headed for another “AI winter.” But in Artificial You, you predict that the long-sought goal of artificial “general” intelligence, or AGI, might be achieved “within the next several decades.” What makes you so optimistic?

Schneider: I was trying to keep things open by using the word “several.” I’m happy to try to be more specific, but first, let me say that I think current discussions of AGI run a bunch of issues together and that “human-level AI” is a misleading benchmark. Today’s AIs are domain-specific systems—systems that excel in a single domain, such as chess or facial recognition. General intelligences go beyond mere domain-specific processing. Humans, as well as nonhuman animals (mice, cats, cuttlefish, etc.), are general intelligences; they integrate material across sensory domains and exhibit cognitive flexibility.

Generality is a matter of degree. Indeed, some claim that the human brain is more like a Swiss Army knife, having a variety of domain-specific modules that are not integrated by a central executive, or CPU-like structure. Perhaps the AI systems of the far future will be more general than we are, exhibiting more flexible, domain-general processing and greater integration across sensory modalities. Perhaps there are alien intelligences with brains that exhibit far more integration than the human brain. So maybe we aren’t very “general” cognizers at all!

This being said, I suspect that the first few generations of synthetic general intelligences will be deficient in ways normal adult humans are not. They will be savantlike, surpassing us in certain ways that involve sophisticated memory databases, pattern recognition, mathematical processing and so on. I call these hypothetical general intelligences “savant systems” because they have all sorts of deficits relative to normal humans.

By the way, this has important implications for understanding debates involving superintelligent systems. A superintelligence is, by definition, a hypothetical machine that outthinks us in every respect: scientific reasoning, social skills and more. Nick Bostrom, Bill Gates, the late Stephen Hawking and many others have stressed that such systems present dangers because they would be hard to control. If superintelligent AI outthinks us, why believe it will adhere to our ethical values? It may formulate its own goals or interpret our goals in perversely literal ways that turn out to be harmful to us. This problem is called “the control problem.”

Upon reflection, savant systems present a control problem as well, even if they are not superintelligences. Indeed, the very fact that they exhibit deficits in certain areas, such as moral judgment, could make them far more dangerous than superintelligent systems.

As a result, I worry that the issues that Gates and others have raised may actually kick in earlier, before superintelligence is developed.

All this said, you may still want me to benchmark human-level AGI. This is tricky, but here are some factors that inform benchmarking efforts. First, one thing that may accelerate the creation of AGI is the possibility that the neocortex turns out to be fairly uniform, with different regions all running the same algorithm. If so, once we discover the precise algorithm run by one small area, we will have learned a good deal about how to build an artificial neocortex.
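To make the uniform-cortex idea concrete, here is a toy Python sketch, offered under loud assumptions: it is not Schneider’s proposal and not a real cortical model, and its learning rule (nudging a prediction toward each new observation) is invented purely for illustration. The point is architectural: one module definition, written once, is instantiated many times, with each copy wired to a different patch of input.

```python
import numpy as np

class CorticalModule:
    """One copy of a hypothetical 'universal' cortical algorithm.

    The learning rule below (move a running prediction toward each
    new observation) is fabricated for illustration only.
    """
    def __init__(self, input_size, learning_rate=0.1):
        self.prediction = np.zeros(input_size)
        self.learning_rate = learning_rate

    def step(self, observation):
        error = observation - self.prediction          # prediction error
        self.prediction += self.learning_rate * error  # move toward input
        return error

# The key idea: every "region" is the *same* module,
# just wired to a different patch of the incoming signal.
signal = np.random.rand(100, 16)                  # 100 time steps, 16 features
modules = [CorticalModule(input_size=4) for _ in range(4)]

for t in range(100):
    for i, module in enumerate(modules):
        module.step(signal[t, i * 4:(i + 1) * 4])  # each module sees its patch

for i, module in enumerate(modules):
    print(f"module {i} prediction: {np.round(module.prediction, 2)}")
```

If something like this were true of the neocortex, scaling up to an artificial one would be closer to copy-and-paste than to solving a thousand separate design problems.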

Now here’s something that may hold AGI back, and it pertains to the current emphasis in AI research on deep-learning systems in particular. As my dissertation supervisor, the famous AI skeptic Jerry Fodor, liked to ask: Is cognition really a species of pattern recognition? Today’s deep-learning systems are sophisticated association engines that take in gargantuan volumes of data and, over time, develop algorithms that recognize patterns with an impressive degree of accuracy. Consider, for instance, facial-recognition systems. Will we see similarly impressive feats from deep-learning systems across the board? That is, is cognition just a species of pattern recognition? I doubt it.
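For readers who want the “association engine” idea in miniature, here is a minimal sketch, assuming nothing beyond NumPy; the dataset and every parameter are fabricated for illustration. A tiny classifier sees many labeled examples and gradually adjusts its weights until it recognizes a statistical pattern, and that is all it ever does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated "pattern": class-1 examples have a higher mean than class-0.
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)),
               rng.normal(1.5, 1.0, (200, 5))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(5), 0.0
for _ in range(500):                          # gradient descent, logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"training accuracy: {np.mean((p > 0.5) == y):.2f}")
```

What the sketch lacks is Fodor’s worry in miniature: nothing here reasons, plans or composes concepts; the system only fits a statistical regularity.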

So why did I say “several,” if I have this doubt? First, I am far more optimistic than Fodor, because machine learning involves a variety of programming techniques that move beyond deep learning alone. And as cognitive scientists uncover the neural algorithms computed by different areas of the brain, we can reverse-engineer these features into AIs.

Second, while uncovering neural algorithms is a complex and lengthy research enterprise, we don’t have to wait for a full understanding of every part of the brain. Again, I wouldn’t expect a synthetic general intelligence to be a lot like us. Just as AlphaGo beat the world Go champion through algorithmic techniques that were surprising and not humanlike, so too the synthetic general intelligences of the future will achieve many cognitive and perceptual tasks in ways we do not. This is why I do not really like the expression “AGI”: it often suggests that a synthetic general intelligence computes the same kinds of algorithms that the brain uses and will behave much as we do. Such systems may be, instead, what I’ve called “savant systems.”

Horgan: In Artificial You, you ponder the possibility of superintelligent extraterrestrials. Deep down, do you think they’re out there?

Schneider: If I had to guess, I’d say microbial life abounds, and that there is highly complex life out there too, even technologically sophisticated civilizations. Astrobiologists claim that there are many exoplanets out there that are, in principle, habitable. Earth doesn’t seem to be all that special, so life should be springing up throughout our galaxy. Indeed, because Earth is a relatively young planet, there should be many planets with civilizations older than ours.

Think about what all this means. There may be civilizations that have created, or will create, their own AIs, and that are augmenting their own brains and bodies. In Artificial You, I argue that the greatest intelligences in the universe may be synthetic, having developed from civilizations that were once biological, like our own.

So, the same issues I raised, about whether radical enhancement is compatible with the continuation of one’s consciousness and selfhood, are germane to discussions about the evolution of intelligent life throughout the universe. That’s both sobering and amazing!

Further Reading:

Do We Need Brain Implants to Keep Up with Robots?

The Singularity and the Neural Code

Can Integrated Information Theory Explain Consciousness?

How Would AI Cover an AI Conference?

See also Q&As with Scott Aaronson, David Albert, David Chalmers, Noam Chomsky, David Deutsch, George Ellis, Marcelo Gleiser, Robin Hanson, Rick Heede, Nick Herbert, Jim Holt, Sabine Hossenfelder, Sheila Jasanoff, Stuart Kauffman, Leslie Kean, Christof Koch, Garrett Lisi, Christian List, Tim Maudlin, James McClellan, Priyamvada Natarajan, Naomi Oreskes, Martin Rees, Carlo Rovelli, Chris Search, Rupert Sheldrake, Peter Shor, Lee Smolin, Sheldon Solomon, Amia Srinivasan, Paul Steinhardt, Philip Tetlock, Tyler Volk, Steven Weinberg, Edward Witten, Peter Woit, Stephen Wolfram, Eliezer Yudkowsky and Carl Zimmer.

See also my profile of neuroscientist Christof Koch in my free, online book Mind-Body Problems.