Most people think of memory as a faithful, if incomplete, recording of the past—a kind of multimedia storehouse of experiences. But psychologists, neuroscientists and lawyers know better. Eyewitness testimony, for instance, is now known to be notoriously unreliable. This is because memory is not just about retrieving stored information. Our minds normally construct memories using a blend of remembered experiences and knowledge about the world. Our memories can be frazzled, though, by new experiences that end up tangling the past and the present.
The sometimes dire consequences of misremembering have led psychologists to try to discover the underlying causes of faulty memories—and a new study has just found a key site in the brain whose functioning gives insight into both the underpinnings of memory and why we misremember things. The research builds on the DRM task—a way of eliciting false memories that was discovered decades ago. The task is named for the initials of the three researchers behind it: James Deese first described the psychological illusion in 1959, but it wasn't until Henry Roediger and Kathleen McDermott linked it to false memory in 1995 that it became widely used in psychological experiments. During the task, participants are presented with a list of words, such as “snow,” “ice,” “winter” and “warm,” which are all related to another “lure” word (in this case “cold”) that is never presented. After some delay, participants must recall as many words from the list as they can, and people frequently report clearly remembering seeing the lure word.
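To make the procedure concrete, here is a minimal, hypothetical sketch of how a DRM-style trial might be scored. The word list and function names are illustrative inventions for this example, not the actual stimuli or code from the studies described:

```python
# Illustrative DRM-style trial scoring (hypothetical word list,
# not the stimuli used in the studies described in this article).
STUDY_LIST = ["snow", "ice", "winter", "frost", "chill"]
LURE = "cold"  # semantically related to the list, but never presented

def score_recall(recalled):
    """Split a participant's recalled words into true hits,
    false memories (recalling the unpresented lure), and
    unrelated intrusions."""
    hits = [w for w in recalled if w in STUDY_LIST]
    false_memories = [w for w in recalled if w == LURE]
    intrusions = [w for w in recalled
                  if w not in STUDY_LIST and w != LURE]
    return {"hits": hits,
            "false_memories": false_memories,
            "intrusions": intrusions}

# A typical response pattern: mostly list words, plus the lure.
result = score_recall(["snow", "winter", "cold", "ice"])
```

In this sketch, recalling “cold” counts as a false memory even though the participant may report vividly remembering having seen it.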
Some lists produce this effect more reliably than others, but it is remarkably consistent across people. “These results tell us our memories are not based on exactly what happened. There's something more approximate going on, often referred to as a gist memory,” says cognitive neuroscientist Martin Chadwick of University College London and Google DeepMind, the AI company. “Rather than encoding every word, you're building up an overall concept that you store in memory.” Chadwick and colleagues published a study last month in Proceedings of the National Academy of Sciences that, for the first time, reveals where and how conceptual similarity is represented in the brain and why this produces the DRM effect, giving new insight into the neural basis of conceptual knowledge. The team says that a better understanding of how the human mind processes conceptual information may ultimately help researchers develop smarter software that can adapt its existing knowledge to new situations.
Researchers agree the effect is driven by how closely related the meanings (semantics) of the words are, but many also suspect false memories reflect how our minds organize knowledge. Humans have a vast store of concepts, and we're exceptionally good at using those concepts to make generalizations that allow us to come up with solutions to new situations and problems. “False memory studies show us that our memory is always a blend of what we know about the world generally, plus what we retain of a recent experience—that's adaptive because usually what we're using our memory for is to cope with new situations,” says neuropsychologist Tim Rogers of the University of Wisconsin–Madison. “Memory is always a reconstruction from these two sources, to allow us to make a reasonable inference about what was likely to have happened that is almost always useful, and only occasionally will we be led astray, as in these experiments.”
Brain-imaging studies have implicated numerous regions in semantic memory. Different properties seem to be stored in different areas. The shapes, colors and distinctive movements of things are stored in the visual cortex; sounds, whether words or the crash of a branch falling, are stored in the auditory cortex. Our knowledge of how to interact with objects—knowing how to push a chair under a desk—is stored in the motor cortex. Distributing knowledge about the world throughout the brain in this way, though, wouldn't easily enable us to infer the relationship between, say, an ostrich and a hummingbird. They don't look, sound or move the same, yet we know they're the “same” at some level.
This highest level of abstraction, so critical to human cognition, may, according to some researchers, depend on a “hub” with connections to various areas of a network distributed throughout the brain. “The hub is important because, since it can see everything at once, it can find properties that vary together,” Rogers says. “In so doing, we think it learns representations that express the degree to which different things are similar in kind, rather than just looking or sounding similar.” Rogers and others have argued this hub is located in the brain's anterior temporal lobe (ATL). As evidence, they point to patients with semantic dementia, a neurodegenerative disease primarily affecting the ATL, which results in problems remembering and understanding words, despite other functions remaining intact.
Explanations of the DRM effect often hypothesize that similar but not identical meanings are represented by similar patterns of activity in the brain, and that this overlapping activity leads to false memories. But nobody had actually pinpointed specific brain activity that confirms the idea. Chadwick and colleagues set out to do just that. They scanned the brains of 18 volunteers who viewed 40 lists of five words (four list words and a lure). They used the same word lists as a 2001 study by Roediger, McDermott and colleagues, which also gave them estimates of each list's probability of generating false memories. Chadwick's team then searched the scans for regions where the similarity between the response to the lure word and the average response to the list words predicted the chances of that list generating false memories. They found only one such area: the most anterior (frontmost) part of the ATL, called the temporal pole. This strongly suggests that everyone experiences brain activity representative of false memories—the scans in this new study, in fact, successfully predicted behavioral results from 15 years ago.
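The logic of that analysis can be sketched in a few lines of code. The following is a simplified illustration with made-up “voxel” patterns, not the study's actual data or methods: for each list, compute how similar the neural response to the lure is to the average response over the list words, and then compare lists. In the study's logic, lists with greater lure–list pattern similarity should produce false memories more often.

```python
# Simplified sketch of a lure-vs-list pattern-similarity analysis
# (synthetic numbers for illustration, not the study's data).

def cosine_similarity(a, b):
    """Cosine similarity between two activity patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def mean_pattern(patterns):
    """Element-wise average of a set of activity patterns."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n
            for i in range(len(patterns[0]))]

def lure_list_similarity(list_patterns, lure_pattern):
    """Similarity between the average list pattern and the lure pattern."""
    return cosine_similarity(mean_pattern(list_patterns), lure_pattern)

# Toy patterns for two lists: list A's word responses overlap heavily
# with its lure's response; list B's do not.
list_a = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [1.0, 0.8, 0.0], [0.8, 0.9, 0.1]]
lure_a = [0.9, 0.9, 0.1]
list_b = [[0.1, 0.0, 1.0], [0.2, 0.1, 0.9], [0.0, 0.2, 1.0], [0.1, 0.0, 0.8]]
lure_b = [1.0, 0.9, 0.0]

sim_a = lure_list_similarity(list_a, lure_a)  # high overlap
sim_b = lure_list_similarity(list_b, lure_b)  # low overlap
```

The study's analysis was, of course, far more involved—searching across brain regions and relating similarity scores to behavioral false-memory rates—but the core comparison resembles this one.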
Researchers found that each individual participant’s memory errors were predicted by their specific patterns of overlapping activity, suggesting that some semantic knowledge is unique to each of us. “Putting this together, it shows we have a similarity-based code for semantic knowledge in this region, which for the most part is shared,” Chadwick says. “But on top of that, each of us has slight variations in some representations, which lead to differences in the false memory errors we make.”
Semantic knowledge has to be mostly shared for us to communicate, and this study adds to a growing body of research suggesting we have a mental structure in our temporal lobe that other animals may not have, giving us our unique conceptual abilities. But it also shows that subtle differences in learning and experience produce differences in each of us, differences strong enough to measure with brain scans. “It's a great example of the convergence of nature and nurture,” Rogers says. “We all share a common genetic blueprint for connecting our brains as infants but we also vary in the nature of our experience, and that's going to produce coarse similarities across everybody, with fine-grained differences, depending on experience.”
The team also showed that the more similar each list was to its lure, the more list words participants correctly recalled. This finding hints at a reason for why semantic memory is organized the way it is. As well as allowing us to easily see relationships among words and concepts, this organizational strategy enhances memory performance. False memories may just be the price we pay. “We shouldn't see it as a negative,” Chadwick says. “Creating the gist is helpful for retrieving true memories—and in most situations that's fine, but sometimes it will also generate a false memory.”
The work bears relevance to DeepMind's long-term goal of developing intelligent machines. “This is the first step in a line of research we're hoping is going to tell us more about how we learn, store and represent semantic knowledge—and most importantly, how we use this knowledge to solve novel problems,” Chadwick says. “That's exactly the kind of problem current algorithms really struggle with.” It also has implications for issues at the intersection of psychology and law, like false memories in testimony. “Understanding the mechanisms that cause people to confidently attest to false memories could be important for resolving those sorts of legal issues,” Rogers says. “If we really knew how concepts are represented and the degree to which their overlap is likely to cause false memories, maybe there would be ways of interrogating people that would avoid these sorts of cognitive traps.”