The problems with decoherence and the many-worlds idea have led a sizable minority to support a view called GRW theory, according to Leggett. The concept was put forward in 1986 by GianCarlo Ghirardi and Tullio Weber of the University of Trieste and Alberto Rimini of the University of Pavia.
In the GRW scheme, the wave function of a particle spreads out over time. But there is a small probability that the spreading wave “hits” a mysterious “something” in the background. The wave function suddenly becomes localized. Individual particles have only a small chance of a hit, about once every 100 million years. But for a macroscopic cat, the chance that at least one of its roughly 10^27 particles makes a hit is high, at least once every 100 picoseconds. The cat never really has a chance to enter any kind of superposition. Hence, there is no need for decoherence: the macroscopic state of the cat results from spontaneous microscopic collapses.
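The scaling behind those numbers is simple: independent hit rates add, so the waiting time for the first hit anywhere in the cat shrinks in proportion to the particle count. A quick back-of-the-envelope sketch, using the figures quoted above (one hit per particle per 100 million years, roughly 10^27 particles in a cat):

```python
# Back-of-the-envelope check of the GRW collapse-rate scaling.
# Figures taken from the article: one "hit" per particle roughly
# every 100 million years; a cat contains about 10^27 particles.

SECONDS_PER_YEAR = 3.156e7

tau_single = 1e8 * SECONDS_PER_YEAR  # mean time between hits for one particle, in seconds
n_particles = 1e27                   # rough particle count for a macroscopic cat

# Independent Poisson-like hit rates add across particles, so the mean
# waiting time for the first hit in the whole cat is divided by n.
tau_cat = tau_single / n_particles

print(f"one particle: {tau_single:.2e} s between hits")
print(f"whole cat:    {tau_cat:.2e} s between hits")
```

The result comes out at a few picoseconds, comfortably inside the “at least once every 100 picoseconds” bound quoted above, which is why a cat-sized object never survives long enough in a superposition to need decoherence at all.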
A few problems plague this model. One is that the timing factor that triggers the hit is entirely arbitrary; proponents simply choose one that produces reasonable results. More important, though, is the source of the trigger. “Basically, [there is] a sort of universal background noise that cannot itself be described by quantum mechanics,” Leggett explains. The noise is not simply random processes in the environment; it has a distinct mathematical flavor. Roger Penrose of the University of Oxford argues in his book Shadows of the Mind that the trigger may be gravity, which would neatly sidestep certain technical objections.
Other, more radical proposals abound. The best known was put forth by the late David Bohm, who postulated that “hidden variables” underpin quantum mechanics. These variables—describing properties that in effect make the wave function act as a real, guiding force—would eliminate the notion of superpositions and restore a deterministic reality. Like the many-worlds idea, Bohm’s theory cannot be verified: the hidden variables by definition remain, well, hidden.
Given such choices, many working physicists are subscribing to decoherence, which makes the fewest leaps of faith even if it arguably fails to resolve the measurement problem fully. “Decoherence does answer the physical aspects of the questions,” Zurek says, but does not get to the metaphysical ones, such as how a conscious mind perceives an outcome. “It’s not clear if you have the right to expect the answer to all questions, at least until we develop a better understanding of how brain and mind are related,” he muses.
Bigger superpositions may enable researchers to start ruling out some theories—GRW and decoherence, for instance, predict that superpositions break down at different scales. “What we would like to do is to go to more complex systems and entangle more and more particles” than the mere 10 trapped so far, Haroche of the ENS says. Future NIST experiments are particularly suited to serve as “decoherence monitors,” Monroe contends. “We can simulate noise to deliberately cause the superposition to decay.” Leggett has proposed using sensors made from superconducting rings (called SQUIDs): it should be possible to set up large currents flowing in opposite directions around the ring simultaneously.
Still, there’s a long way to go. “Even in the most spectacular experiments, at most you’ve shown a superposition for maybe 5,000 particles. That’s a long way from the 10^23 characteristic of the macroscopic world,” says Leggett, who nonetheless remains supportive. “My own attitude is that one should just try to do experiments to see if quantum mechanics is still working.”
Shrinking transistors, now with features less than a micron in size, may also lead to insights about the quantum-classical changeover. In a few years they may reach dimensions of tens of nanometers, a realm sometimes called the mesoscopic scale. Da Hsuan Feng of Drexel University speculates that quantum mechanics perhaps really doesn’t lead to classical mechanics; rather both descriptions spring from still undiscovered concepts in the physical realm between them.