It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”
Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways to discern whether we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were ever to develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)
In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.
Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.
“Musk is right if you assume [propositions] one and two of the trilemma are false,” says astronomer David Kipping of Columbia University. “How can you assume that?”
To get a better handle on Bostrom’s simulation argument, Kipping decided to resort to Bayesian reasoning. This type of analysis uses Bayes’s theorem, named after Thomas Bayes, an 18th-century English statistician and minister. Bayesian analysis allows one to calculate the odds of something happening (called the “posterior” probability) by first making assumptions about the thing being analyzed (assigning it a “prior” probability).
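The mechanics of Bayes’s theorem can be sketched in a few lines of code. Here is a minimal illustration in Python; the coin-flip scenario and the specific numbers are illustrative examples, not anything from Kipping’s analysis:

```python
# Bayes's theorem: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = sum over all hypotheses of P(E | H_i) * P(H_i).

def posterior(priors, likelihoods):
    """Return posterior probabilities for competing hypotheses.

    priors      -- dict mapping hypothesis name to prior P(H)
    likelihoods -- dict mapping hypothesis name to likelihood P(E | H)
    """
    evidence = sum(priors[h] * likelihoods[h] for h in priors)
    return {h: priors[h] * likelihoods[h] / evidence for h in priors}

# Illustrative example: a coin is either fair or double-headed, with
# 50-50 priors (the principle of indifference). We then observe one
# flip landing heads and update our beliefs.
result = posterior({"fair": 0.5, "two-headed": 0.5},
                   {"fair": 0.5, "two-headed": 1.0})
print(result)  # fair: 1/3, two-headed: 2/3
```

A single observation of heads shifts the odds toward the double-headed coin, because that hypothesis predicted the observation with higher probability.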
Kipping began by turning the trilemma into a dilemma. He collapsed propositions one and two into a single statement, because in both cases, the final outcome is that there are no simulations. Thus, the dilemma pits a physical hypothesis (there are no simulations) against the simulation hypothesis (there is a base reality—and there are simulations, too). “You just assign a prior probability to each of these models,” Kipping says. “We just assume the principle of indifference, which is the default assumption when you don’t have any data or leanings either way.”
So each hypothesis gets a prior probability of one half, much as if one were to flip a coin to decide a wager.
The next stage of the analysis required thinking about “parous” realities—those that can generate other realities—and “nulliparous” realities—those that cannot simulate offspring realities. If the physical hypothesis was true, then the probability that we were living in a nulliparous universe would be easy to calculate: it would be 100 percent. Kipping then showed that even in the simulation hypothesis, most of the simulated realities would be nulliparous. That is because as simulations spawn more simulations, the computing resources available to each subsequent generation dwindle to the point where the vast majority of realities will be those that do not have the computing power necessary to simulate offspring realities that are capable of hosting conscious beings.
Plug all these into a Bayesian formula, and out comes the answer: the posterior probability that we are living in base reality is almost the same as the posterior probability that we are a simulation—with the odds tilting in favor of base reality by just a smidgen.
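The reasoning above can be made concrete with a toy calculation. This is a sketch of the logic as described here, not Kipping’s actual model, and the fraction of parous realities (`parous_fraction`) is an assumed illustrative value:

```python
# Toy Bayesian comparison of the physical and simulation hypotheses.
# The evidence E: our reality is nulliparous (we have not simulated
# conscious beings).

prior_physical = 0.5        # principle of indifference
prior_simulation = 0.5

# Under the physical hypothesis, every reality is nulliparous.
p_null_given_physical = 1.0

# Under the simulation hypothesis, computing resources dwindle with
# each generation of nested simulations, so the vast majority of
# realities are nulliparous too. The exact fraction is an assumption.
parous_fraction = 0.01      # illustrative: 1% of realities can simulate
p_null_given_simulation = 1.0 - parous_fraction

evidence = (prior_physical * p_null_given_physical
            + prior_simulation * p_null_given_simulation)

posterior_physical = prior_physical * p_null_given_physical / evidence
print(f"P(base reality | evidence) = {posterior_physical:.4f}")
```

With these numbers the posterior comes out just above one half—base reality is favored by a smidgen, exactly because both hypotheses predict a nulliparous reality with nearly the same probability. And if we ever built a conscious simulation, the physical hypothesis would be ruled out (its prior drops to zero), flipping the posterior decisively toward the simulation hypothesis.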
These probabilities would change dramatically if humans created a simulation with conscious beings inside it, because such an event would change the chances that we previously assigned to the physical hypothesis. “You can just exclude that [hypothesis] right off the bat. Then you are only left with the simulation hypothesis,” Kipping says. “The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations. It’d be a very strange celebration of our genius that day.”
The upshot of Kipping’s analysis is that, given current evidence, Musk is wrong about the one-in-billions odds that he ascribes to us living in base reality. Bostrom agrees with the result—with some caveats. “This does not conflict with the simulation argument, which only asserts something about the disjunction,” the idea that one of the three propositions of the trilemma is true, he says.
But Bostrom takes issue with Kipping’s choice to assign equal prior probabilities to the physical and simulation hypotheses at the start of the analysis. “The invocation of the principle of indifference here is rather shaky,” he says. “One could equally well invoke it over my original three alternatives, which would then give them one-third chance each. Or one could carve up the possibility space in some other manner and get any result one wishes.”
Such quibbles are valid because there is no evidence to back one claim over the others. That situation would change if we could find evidence of a simulation. So could you detect a glitch in the Matrix?
Houman Owhadi, an expert on computational mathematics at the California Institute of Technology, has thought about the question. “If the simulation has infinite computing power, there is no way you’re going to see that you’re living in a virtual reality, because it could compute whatever you want to the degree of realism you want,” he says. “If this thing can be detected, you have to start from the principle that [it has] limited computational resources.” Think again of video games, many of which rely on clever programming to minimize the computation required to construct a virtual world.
For Owhadi, the most promising way to look for potential paradoxes created by such computing shortcuts is through quantum physics experiments. Quantum systems can exist in a superposition of states, and this superposition is described by a mathematical abstraction called the wave function. In standard quantum mechanics, the act of observation causes this wave function to randomly collapse to one of many possible states. Physicists are divided over whether the process of collapse is something real or just reflects a change in our knowledge about the system. “If it is just a pure simulation, there is no collapse,” Owhadi says. “Everything is decided when you look at it. The rest is just simulation, like when you’re playing these video games.”
To this end, Owhadi and his colleagues have worked on five conceptual variations of the double-slit experiment, each designed to trip up a simulation. But he acknowledges that it is impossible to know, at this stage, if such experiments could work. “Those five experiments are just conjectures,” Owhadi says.
Zohreh Davoudi, a physicist at the University of Maryland, College Park, has also entertained the idea that a simulation with finite computing resources could reveal itself. Her work focuses on strong interactions, or the strong nuclear force—one of nature’s four fundamental forces. The equations describing strong interactions, which hold together quarks to form protons and neutrons, are so complex that they cannot be solved analytically. To understand strong interactions, physicists are forced to do numerical simulations. And unlike any putative supercivilizations possessing limitless computing power, they must rely on shortcuts to make those simulations computationally viable—usually by considering spacetime to be discrete rather than continuous. The most advanced result researchers have managed to coax from this approach so far is the simulation of a single nucleus of helium that is composed of two protons and two neutrons.
“Naturally, you start to ask, if you simulated an atomic nucleus today, maybe in 10 years, we could do a larger nucleus; maybe in 20 or 30 years, we could do a molecule,” Davoudi says. “In 50 years, who knows, maybe you can do something the size of a few inches of matter. Maybe in 100 years or so, we can do the [human] brain.”
Davoudi thinks that classical computers will soon hit a wall, however. “In the next maybe 10 to 20 years, we will actually see the limits of our classical simulations of the physical systems,” she says. Thus, she is turning her sights to quantum computation, which relies on superpositions and other quantum effects to make tractable certain computational problems that would be impossible through classical approaches. “If quantum computing actually materializes, in the sense that it’s a large scale, reliable computing option for us, then we’re going to enter a completely different era of simulation,” Davoudi says. “I am starting to think about how to perform my simulations of strong interaction physics and atomic nuclei if I had a quantum computer that was viable.”
All of these factors have led Davoudi to speculate about the simulation hypothesis. If our reality is a simulation, then the simulator is likely also discretizing spacetime to save on computing resources (assuming, of course, that it is using the same mechanisms as our physicists for that simulation). Signatures of such discrete spacetime could potentially be seen in the directions high-energy cosmic rays arrive from: they would have a preferred direction in the sky because of the breaking of so-called rotational symmetry.
Telescopes “haven’t observed any deviation from that rotational invariance yet,” Davoudi says. And even if such an effect were to be seen, it would not constitute unequivocal evidence that we live in a simulation. Base reality itself could have similar properties.
Kipping, despite his own study, worries that further work on the simulation hypothesis is on thin ice. “It’s arguably not testable as to whether we live in a simulation or not,” he says. “If it’s not falsifiable, then how can you claim it’s really science?”
For him, there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.