Why do people see faces in nature, interpret window stains as human figures, hear voices in random sounds generated by electronic devices or find conspiracies in the daily news? A proximate cause is the priming effect, in which our brain and senses are prepared to interpret stimuli according to an expected model. UFOlogists see a face on Mars. Religionists see the Virgin Mary on the side of a building. Paranormalists hear dead people speaking to them through a radio receiver. Conspiracy theorists think 9/11 was an inside job by the Bush administration. Is there a deeper ultimate cause for why people believe such weird things? There is. I call it “patternicity,” or the tendency to find meaningful patterns in meaningless noise.
Traditionally, scientists have treated patternicity as an error in cognition. A type I error, or a false positive, is believing something is real when it is not (finding a nonexistent pattern). A type II error, or a false negative, is not believing something is real when it is (not recognizing a real pattern—call it “apatternicity”). In my 2000 book How We Believe (Times Books), I argue that our brains are belief engines: evolved pattern-recognition machines that connect the dots and create meaning out of the patterns that we think we see in nature. Sometimes A really is connected to B; sometimes it is not. When it is, we have learned something valuable about the environment from which we can make predictions that aid in survival and reproduction. We are the descendants of those most successful at finding patterns. This process is called association learning, and it is fundamental to all animal behavior, from the humble worm C. elegans to H. sapiens.
Unfortunately, we did not evolve a Baloney Detection Network in the brain to distinguish between true and false patterns. We have no error-detection governor to modulate the pattern-recognition engine. (Thus the need for science with its self-correcting mechanisms of replication and peer review.) But such erroneous cognition is not likely to remove us from the gene pool and would therefore not have been selected against by evolution.
In a September paper in the Proceedings of the Royal Society B, “The Evolution of Superstitious and Superstition-like Behaviour,” Harvard University biologist Kevin R. Foster and University of Helsinki biologist Hanna Kokko test my theory through evolutionary modeling and demonstrate that whenever the cost of believing a false pattern is real (a type I error) is less than the cost of not believing a real pattern (a type II error), natural selection will favor patternicity. They begin with the formula pb > c: a belief may be held whenever the cost (c) of holding it is less than the probability (p) of the benefit (b) occurring, that is, whenever the expected benefit pb exceeds the cost c. For example, believing that the rustle in the grass is a dangerous predator when it is only the wind does not cost much, but believing that the rustle is just the wind when it is in fact a dangerous predator may cost an animal its life.
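The pb > c rule can be sketched in a few lines of code. This is a minimal illustration of the inequality as described above; the function name and all numeric values are my own assumptions, not figures from Foster and Kokko's paper.

```python
# Sketch of the pb > c decision rule: hold a belief when the expected
# benefit of acting on it exceeds the cost of acting.

def should_believe(p: float, b: float, c: float) -> bool:
    """Return True when the expected benefit p*b exceeds the cost c."""
    return p * b > c

# Rustle-in-the-grass example: even a small chance that the rustle is a
# predator justifies fleeing, because the benefit of escaping a real
# predator dwarfs the cost of a needless flight.
p = 0.05    # assumed probability the rustle really is a predator
b = 100.0   # assumed benefit (in fitness terms) of fleeing a real predator
c = 1.0     # assumed cost of fleeing when it was only the wind

print(should_believe(p, b, c))  # True: p*b = 5.0 exceeds c = 1.0
```

With these illustrative numbers, belief is favored even though the pattern is usually false: nineteen times out of twenty the rustle is just the wind, yet the inequality still holds.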
The problem is that we are very poor at estimating such probabilities, and the cost of believing that the rustle in the grass is a dangerous predator when it is just the wind is relatively low compared with the cost of the opposite error. Thus, natural selection would have favored believing that most patterns are real.
Through a series of complex formulas that include additional stimuli (wind in the trees) and prior events (past experience with predators and wind), the authors conclude that “the inability of individuals—human or otherwise—to assign causal probabilities to all sets of events that occur around them will often force them to lump causal associations with non-causal ones. From here, the evolutionary rationale for superstition is clear: natural selection will favour strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction.”
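The logic of that conclusion can be illustrated with a toy simulation. This is a hedged sketch, not the authors' actual model: the strategy names, costs, and predator frequency are all my assumptions. It compares an agent that believes every rustle signals a predator (accumulating many cheap false positives) with an agent that ignores rustles (paying a catastrophic price for each false negative).

```python
import random

def survival_payoff(believe_all: bool, events: list[bool],
                    flee_cost: float, death_cost: float) -> float:
    """Total payoff over a sequence of rustles.

    events: True where a predator is actually present.
    A believer flees at every rustle; a skeptic never flees and pays
    death_cost whenever the predator was real.
    """
    payoff = 0.0
    for predator in events:
        if believe_all:
            payoff -= flee_cost     # type I errors: cheap and frequent
        elif predator:
            payoff -= death_cost    # type II errors: rare but catastrophic
    return payoff

rng = random.Random(42)
events = [rng.random() < 0.01 for _ in range(1000)]  # ~1% of rustles are real

believer = survival_payoff(True, events, flee_cost=1.0, death_cost=1000.0)
skeptic = survival_payoff(False, events, flee_cost=1.0, death_cost=1000.0)
print(believer, skeptic)  # the over-believer nearly always outscores the skeptic
```

Because the agents cannot tell causal rustles from non-causal ones, the strategy that makes many incorrect associations in order to catch the few essential ones wins, which is exactly the evolutionary rationale the authors describe.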