Perhaps more than any other profession, science places a premium on being correct. Of course, most scientists—like most living humans—make plenty of mistakes along the way. Yet not all errors are created equal. Historians have unearthed a number of instances in which an incorrect idea proved far more potent than thousands of others that were trivially mistaken or narrowly correct. These are the productive mistakes: errors that touch on deep, fundamental features of the world around us and prompt further research that leads to major breakthroughs. Mistakes they certainly are. But science would be far worse off without them.

Niels Bohr, for example, created a model of the atom that was wrong in nearly every way, yet it inspired the quantum-mechanical revolution. In the face of enormous skepticism, Alfred Wegener argued that centrifugal forces make the continents move (or “drift”) along the surface of the earth. He had the right phenomenon, albeit the wrong mechanism. And Enrico Fermi thought that he had created nuclei heavier than uranium, rather than (as we now know) having stumbled on nuclear fission.

Two instances of productive mistakes, one from physics in the 1970s and one from biology in the 1940s, illustrate this point dramatically. The authors of the mistakes were not hapless bumblers who happened, in retrospect, to get lucky. Rather they steadfastly asked questions that few of their colleagues broached and combined ideas that not many at the time had considered. In the process, they laid critical groundwork for today’s burgeoning fields of biotechnology and quantum information science. They were wrong, and the world should be thankful for their errors.

The Phantom Photon Clone
Our first mistake helped to illuminate a dispute that had begun during the early days of quantum mechanics, when Albert Einstein and Bohr engaged in a series of spirited debates over the nature and ultimate implications of quantum theory. Einstein famously railed against several of the theory’s strange features. Using the equations of quantum mechanics, for example, physicists could predict only probabilities for various occurrences, not definite outcomes. “I, at any rate, am convinced that He [God] is not playing at dice,” came Einstein’s rejoinder. There the matter stood for 30 years. Neither Einstein nor Bohr managed to convince the other side.

Decades later a young physicist from Northern Ireland, John Bell, returned to Einstein and Bohr’s exchanges. Bell revisited a thought experiment that Einstein had published back in 1935. Einstein had imagined a source that spat out pairs of quantum particles, such as electrons or photons, moving in opposite directions. Physicists could measure certain properties of each particle after the two had traveled far apart. Bell wondered about correlations between the outcomes of those measurements.

In 1964 he published a remarkably brief and elegant article demonstrating that, according to quantum mechanics, the outcome of one of those measurements—say, the spin of the right-moving particle along a given direction—must depend on the choice of which property to measure of the left-moving particle. Indeed, Bell deduced, any theory that reproduced the same empirical predictions as quantum mechanics must incorporate a signal or “mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote.” Moreover, he concluded, “the signal involved must propagate instantaneously.” Such long-distance correlations became known as “quantum entanglement.”
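
The strength of those correlations can be made concrete with the streamlined version of Bell’s inequality derived a few years later by John Clauser and his collaborators (the CHSH inequality). In any theory where each particle carries locally fixed answers, a certain combination S of measured correlations satisfies |S| ≤ 2; quantum mechanics predicts |S| = 2√2 for entangled pairs. The snippet below is a minimal illustration in Python, using the standard quantum prediction E(a, b) = −cos(a − b) for spin measurements on a singlet pair (the angles are the textbook optimum, not values taken from Bell’s paper):

    import math

    def E(a, b):
        # Quantum-mechanical correlation between spin measurements
        # along angles a and b on an entangled singlet pair.
        return -math.cos(a - b)

    # Measurement angles that maximize the quantum prediction.
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))  # 2*sqrt(2) = 2.83..., beating the local-realist bound of 2

Clauser’s experiment, described below, found the quantum prediction borne out.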

Though renowned among physicists today, Bell’s paper garnered no great fanfare when it appeared, even though instantaneous signal transfer would violate Einstein’s well-supported theory of relativity, which holds that no signal or influence can travel faster than light. Among the physicists who did take notice was Nick Herbert. The subject began to occupy more and more of Herbert’s attention, crowding out thoughts of his day job as an industrial physicist in the San Francisco Bay Area. At the time, Herbert was a core member of a quirky, informal discussion group called the Fundamental Fysiks Group. The participants met in Berkeley and were mostly young physicists who had earned their Ph.D.s at elite programs (Herbert did his doctoral work at Stanford University), only to fall victim to an unprecedented job crunch. In 1971, for example, more than 1,000 young physicists registered with the Placement Service of the American Institute of Physics, competing for just 53 jobs on offer.

Underemployed and with time on their hands, Herbert and his pals met weekly during the mid-1970s to brainstorm about deep puzzles of modern physics, topics that had received little attention in their formal physics training. They became mesmerized by Bell’s theorem and quantum entanglement. Another group member, John Clauser, conducted the world’s first experimental test of Bell’s theorem and found the strange predictions about quantum entanglement to be spot-on. (In 2010 Clauser shared the prestigious Wolf Prize for his contributions.)

Meanwhile, all around them, the Bay Area was witnessing an explosion of interest in bizarre phenomena such as extrasensory perception and precognitive visions of the future. The San Francisco Chronicle and other mainstream newspapers ran stories about experiments in telepathy, while occult enthusiasts celebrated the arrival of a New Age. Herbert and his discussion-mates began to wonder whether Bell’s theorem, which seemed to imply mysterious, instantaneous connections between distant objects, might account for the latest crop of marvels.

Focusing on what Bell had described as instantaneous signals between quantum particles, Herbert wondered whether they could be tapped to send messages faster than light. He set about drawing up plans for what he called a “superluminal telegraph”: a contraption that could harness a fundamental property of quantum theory to violate relativity and hence the laws of physics. After a few false starts, Herbert arrived at his “FLASH” scheme in January 1981. The acronym stood for “first laser-amplified superluminal hookup.” It used an elaborate laser-based system to transmit a faster-than-light signal.

Herbert’s scheme looked watertight. Several reviewers at the journal where he submitted his idea were convinced by his argument. “We have not been able to identify any fundamental flaws with the proposed experiment that reveal the origin of the paradox,” reported two referees. Another referee, Asher Peres, took an even bolder step. He proclaimed in his brief report that Herbert’s paper must be wrong—and hence it needed to be published. Because Peres himself could find no flaw, he argued that the error must be meaty, the kind that would prompt further advances.

Peres’s unusual (even courageous) position was quickly borne out. Three groups of physicists subjected Herbert’s paper to close scrutiny. GianCarlo Ghirardi and Tullio Weber in Italy, Wojciech Zurek and Bill Wootters in the U.S., and Dennis Dieks in the Netherlands all recognized that Herbert had made a subtle error in his calculation of what the physicist who received the signal should see. Herbert had assumed that the laser amplifier in his contraption would be able to emit lots of light in the same state as the original light. In fact, the scientists realized, the laser could not make such copies of a single photon, but only random hash, like a photocopy machine that mixed together two different images to produce a hopeless blur.

In the process of unpacking Herbert’s proposal, those three groups uncovered a fascinating, fundamental feature of quantum mechanics that no one had ever recognized. The FLASH system fails because of the “no-cloning theorem,” which prohibits making a perfect copy of an unknown quantum state. The theorem prevents would-be inventors from using quantum theory to build faster-than-light telegraphs, thus enabling quantum entanglement to coexist peacefully with Einstein’s relativity. Event by event, the twin particles really do arrange themselves according to long-distance, instantaneous correlations, but those connections can never be used to send a message faster than light.
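
In modern textbook form, the heart of the no-cloning theorem is a short linearity argument (a sketch in standard notation, with |e⟩ a generic “blank” state; this is the gist, not the notation, of the 1982 papers). Suppose some physical operation U perfectly copied both basis states of a photon. Linearity then fixes what U must do to any superposition, and the output is not a copy:

    U\,|0\rangle|e\rangle = |0\rangle|0\rangle,
    \qquad
    U\,|1\rangle|e\rangle = |1\rangle|1\rangle

    U\bigl(\alpha|0\rangle + \beta|1\rangle\bigr)|e\rangle
        = \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle
        \;\neq\;
        \bigl(\alpha|0\rangle + \beta|1\rangle\bigr) \otimes \bigl(\alpha|0\rangle + \beta|1\rangle\bigr)

The two sides agree only when α or β vanishes. The cross terms a true copy requires are precisely what a linear amplifier cannot supply, which is why Herbert’s laser could produce only random hash rather than faithful clones.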

Very quickly a few other physicists realized that the no-cloning theorem offered more than just a response to Herbert’s curious paper or the basis for an uneasy truce between entanglement and relativity. In 1984 Charles Bennett and Gilles Brassard built directly on the no-cloning theorem to design the very first protocol for “quantum encryption”: a brand-new way to protect digital signals from potential eavesdroppers. As Bennett and Brassard realized, the fact that quantum mechanics forbids anyone from making copies of an unknown quantum state meant that partners could encode a secret key in individual photons, prepared in randomly chosen, incompatible quantum states, and pass them back and forth. If anyone tried to intercept the photons en route and copy them, they would inevitably disturb the delicate states, garbling the signal and announcing their presence at the same time.
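
A toy simulation conveys the idea (a minimal Python sketch of the intercept-resend scenario, not Bennett and Brassard’s published protocol; the public basis comparison is modeled, but error correction and privacy amplification are omitted, and all names are illustrative). Photons are prepared and measured in one of two bases; an eavesdropper who measures in a randomly guessed basis and resends corrupts about one quarter of the bits the partners keep:

    import random

    rng = random.Random(1)

    def send_photon(bit, prep_basis, meas_basis):
        # Measuring in the preparation basis returns the encoded bit;
        # measuring in the other basis yields a random outcome.
        return bit if meas_basis == prep_basis else rng.randint(0, 1)

    def bb84(n_photons=20000, eavesdrop=False):
        errors = sifted = 0
        for _ in range(n_photons):
            a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
            bit, basis = a_bit, a_basis
            if eavesdrop:
                # Intercept-resend: Eve measures in a random basis and
                # sends on a fresh photon prepared in that basis.
                e_basis = rng.randint(0, 1)
                bit = send_photon(bit, basis, e_basis)
                basis = e_basis
            b_basis = rng.randint(0, 1)
            b_bit = send_photon(bit, basis, b_basis)
            if b_basis == a_basis:  # keep only matching-basis rounds
                sifted += 1
                errors += b_bit != a_bit
        return errors / sifted

    print(f"error rate, quiet channel:    {bb84():.3f}")                # ~0.000
    print(f"error rate, intercept-resend: {bb84(eavesdrop=True):.3f}")  # ~0.250

By comparing a random sample of their sifted bits over an ordinary phone line, the partners can spot the eavesdropper’s characteristic 25 percent error rate before trusting the remaining bits as a key.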

In recent years quantum encryption has moved to the forefront of a worldwide effort in quantum information science. Physicists such as Anton Zeilinger in Vienna and Nicolas Gisin in Geneva have conducted real-world demonstrations of quantum-encrypted bank transfers and electronic voting. Not a bad legacy for Herbert’s intriguing, if flawed, FLASH scheme.

The Genetic Paradox
Our second example of a mistaken scientist features the work of Max Delbrück, a professor at Vanderbilt University and, later, the California Institute of Technology. Delbrück, a former student of Bohr’s, took from Bohr’s famous 1932 lecture “Light and Life” the idea that understanding biological processes would turn up new paradoxes and that solving these paradoxes might lead to the discovery of new laws of physics. Delbrück recruited other scientists to the effort, helping create the field of molecular biology in the years following World War II.

One of the key questions being asked in the 1940s was “What is a gene?” In the mid-19th century the monk Gregor Mendel had proposed the existence of hereditary factors (later called genes), which possessed two distinctive properties. The first was the ability to duplicate themselves. The second was the ability to produce variations, or mutations, that were duplicated as faithfully as the original gene.

Yet in the 1940s no one knew what genes were made of or how they reproduced. As quantum physics pioneer Erwin Schrödinger noted in his 1944 book What Is Life?, no ordinary physical system self-replicates. The seeming ability of genes to do so appeared to defy the second law of thermodynamics.

Delbrück was looking for the atomic gene, the indivisible physical system responsible for the mysteries of heredity. As a good physicist, Delbrück figured that the most fruitful approach would be to study life’s smallest and simplest units: viruses. Specifically, he chose to study bacteriophages (“phages” for short), viruses that infect bacteria. These were among the easiest viruses to isolate and the quickest to grow. Although, like all viruses, phages reproduced only inside a host cell, Delbrück attempted to sidestep what he saw as an unnecessary complexity. With his colleague Emory Ellis, he developed a growth method that allowed them to focus on the reproduction of the phages while ignoring the cellular complexities of the infected bacteria.

Delbrück was convinced that genes were made of protein. Understand how the protein parts of viruses reproduced, he thought, and you would understand genes. And the best way to study viral reproduction, he surmised, was to watch them reproduce.

But how could one actually capture viruses in the act of replicating? The reproduction time of different bacteriophages varied, and Delbrück and his collaborator Salvador Luria reasoned that if they infected the same bacteria with two strains of phage, one reproducing more rapidly than the other, they should be able to catch replication intermediates of the slower strain when the cells burst open.

The dual-infection experiment did not work as planned—Luria and Delbrück found that infection by one viral strain prevented infection by the other. At about the same time, Thomas Anderson of the University of Pennsylvania examined a sample of one of Delbrück and Luria’s bacteriophage strains under an electron microscope. He discovered that the virus was far more complex than previously imagined—certainly it consisted of much more than a single atomic gene. It was a tadpole-shaped particle composed of both protein and nucleic acid, and it bound to the outside of bacteria to trigger an infection. The one-to-one correlation between viruses and genes that Delbrück had envisioned was beginning to unravel.

Still, Delbrück would not be dissuaded. In an effort to gain a better understanding of how some bacteria resisted phage infection, he and Luria devised what they called the fluctuation test. The test ended up revealing very little about viral replication, but its ingenious methodology showed that bacteria evolve according to Darwinian principles, with random mutations that occasionally confer survival advantages. It was a landmark in bacterial genetics, opening up whole new fields of study. Delbrück and Luria (along with Alfred Hershey) would go on to win the 1969 Nobel Prize in Physiology or Medicine in part for this work.
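
The statistical logic of the fluctuation test rewards a closer look. If resistance were induced by exposure to the phage, every culture would yield about the same small number of resistant colonies, with Poisson scatter (variance roughly equal to the mean). If instead resistance arises from random mutations during growth, a mutation early in a culture’s history yields a “jackpot” of resistant descendants, so counts fluctuate wildly from culture to culture. The Monte Carlo below is a schematic Python sketch of that idea only, with illustrative parameters, not Luria and Delbrück’s actual protocol or analysis:

    import numpy as np

    rng = np.random.default_rng(0)

    def spontaneous(n_cultures=500, generations=21, mu=2e-7):
        # Mutation hypothesis: resistance arises at random during growth,
        # and mutants breed true, so early mutations yield jackpots.
        resistant = np.zeros(n_cultures, dtype=np.int64)
        cells = 1
        for _ in range(generations):
            new_mutants = rng.poisson(mu * cells, size=n_cultures)
            resistant = 2 * resistant + new_mutants
            cells *= 2
        return resistant

    def induced(n_cultures=500, mean_count=8.0):
        # Acquired-immunity hypothesis: each plated cell independently
        # adapts with tiny probability, giving Poisson-distributed counts.
        return rng.poisson(mean_count, size=n_cultures)

    for label, counts in (("random mutation", spontaneous()),
                          ("acquired immunity", induced())):
        print(f"{label:17s}  mean = {counts.mean():5.1f}  "
              f"variance/mean = {counts.var() / counts.mean():6.1f}")

Luria and Delbrück’s real cultures showed the huge excess variance of the mutation hypothesis, evidence that the mutations had arisen before the bacteria ever encountered the phage.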

The fluctuation test, however, did not advance the understanding of virus reproduction, to the evident frustration of Delbrück. In 1946 he even complained, in a public lecture, that the “explosive” possibilities for studying bacteria that he had created now threatened to displace his focus on viruses. Moreover, it was becoming clear that the phage used the cellular resources of the host Escherichia coli bacterium to reproduce itself. Contrary to Delbrück’s initial presumption, the host could not be ignored after all.

Yet his instinct to focus on a simple system turned out to be very fruitful—even if bacteriophages proved far more complex than he anticipated. The phage blossomed into a model organism for a generation of biologists, even inspiring James Watson’s quest for the structure of DNA. Delbrück chose his experimental subject well and devised groundbreaking methods to study it.

Delbrück abandoned phage research altogether in the 1950s to focus on the biophysics of sensory perception, using a fungus called Phycomyces. Although he was able to recruit some young physicists to work on this new model system, it was to prove far less fruitful than the phage. Yet he continued to be a lively critic of the phage experiments of others, and his tendency to misjudge key findings became legendary. Caltech molecular biologist Jean Weigle used to tell a story of encountering a young researcher who was dejected after Delbrück’s reaction to his proposed experiment. Delbrück liked the idea, a sure sign that it was hopeless. For those on the right track, the highest praise one could expect from Delbrück was “I don’t believe a word of it!”

Fair Credit
In these examples from physics and biology, smart scientists advanced mistaken ideas. No ordinary mistakes, they spurred major developments in different areas of fundamental science. In short order, those scientific insights helped to spawn multibillion-dollar research programs and to seed industries that even today are feverishly remaking the world in which we live.

In one important way, however, Herbert’s and Delbrück’s mistakes spawned rather different legacies. Delbrück (rightly) enjoyed a tremendously successful scientific career. He valued unconventional approaches and subjected even the best science to critical scrutiny; his status was high enough to afford heterodoxy. Herbert, on the other hand, struggled to make ends meet, even spending time on public assistance—hardly the most productive way to encourage a thinker whose work helped to clarify deep insights in quantum theory and launch a technological revolution.

This tremendous divergence in professional trajectories suggests the need for some new accounting scheme by which we apportion credit in the sciences. Those who evaluate the contributions of scientists will never achieve the clarity enjoyed by sports statisticians—endlessly tracking strikeouts or assists—in part because the significance of scientific mistakes will change over time as investigators wrestle with their implications. Nevertheless, it is worth pondering how best to acknowledge—and encourage—the kinds of creative leaps that fall short yet push the game forward.

After all, anyone can make mistakes. Indeed, the sheer volume of today’s scientific publications suggests that most of us are probably wrong most of the time. Yet some errors can serve a generative role in research. While striving to be correct, let us pause to admire the great art of being productively wrong.

This article was published in print as "The Right Way to Get It Wrong."
