If it seems the state of the world is on an endless downward trajectory these days, take heart. Things might not be quite as bad as you think. New research, published in June in Science, suggests that as social problems such as extreme poverty or violence become less prevalent, people may be prone to perceive that they linger—and are perhaps even getting worse.
Led by psychologist Daniel Gilbert at Harvard University, the researchers found people readily and unconsciously change how they define certain concepts—ranging from specific colors to unethical behavior—based on how frequently they run into them. “On almost every dimension, the world is getting better. And yet when people are asked, they consistently say it’s not getting better, and in fact it’s getting worse,” Gilbert says. “As we solve problems, we also unknowingly expand our definitions of what counts as them.”
Concept expansion itself is not a new observation. In 2016 social psychologist Nicholas Haslam at the University of Melbourne in Australia introduced the term “concept creep” to describe the broadening of modern psychological terminology—especially negative examples such as abuse, bullying, trauma, mental disorder, addiction and prejudice—to include cases previously judged benign or inoffensive.
In some cases, the expansion of concepts such as aggression (and more recently, “microaggressions”) in the public consciousness has sparked heated debate; some critics argue these shifts reflect political correctness run amok, whereas others claim they signal growing social awareness. Gilbert is emphatically agnostic on the issue. “Expanding a concept isn’t necessarily good or bad,” he says. “Science doesn’t weigh in on whether it’s a good or bad thing.” He and others are simply interested in understanding how the phenomenon happens.
A number of factors likely contribute to these changes, among them political, social or economic forces. But the latest study highlights another intriguing player. “This is the first time someone has actually said there’s a cognitive mechanism that could account for that,” Haslam says.
In one of their experiments Gilbert’s team showed volunteers a series of 1,000 dots, ranging in color from very purple to very blue. Participants had to judge whether each dot was blue or not. Partway through the test, researchers began showing fewer blue dots (and more purple or purplish dots) to some participants. By the end of the experiment, these study participants were more likely to say “blue” to hues in the middle of the spectrum, including some dots they had previously seen and judged “not blue.”
The change was involuntary—it even occurred when volunteers were warned the frequency of blue dots would decrease. Instructing participants to maintain consistent responses did not eliminate the shift, nor did offering monetary bonuses for the most consistent performers. The effect worked both ways: Reversing the experiment and increasing the frequency of blue dots made participants less likely to call dots in the middle of the color range blue (in other words, their concept of “blue” had contracted).
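The study itself does not spell out the mechanism in code, but the drift it describes can be sketched with a toy model in which the boundary of the “blue” category slowly adapts toward the hues an observer has seen recently. Everything here is an illustrative assumption rather than the researchers’ actual model: the hue scale, the moving-average rule, the adaptation rate, and the function name `adaptive_judge` are all invented for this sketch.

```python
def adaptive_judge(hues, criterion=0.5, rate=0.05):
    """Call a hue 'blue' when it exceeds a criterion that slowly drifts
    toward recently seen hues (a simple exponential moving average)."""
    judgments = []
    for hue in hues:
        judgments.append(hue > criterion)
        criterion += rate * (hue - criterion)  # context reshapes the category
    return criterion, judgments

# Phase 1: hues spread evenly across the spectrum (0 = very purple, 1 = very blue).
phase1 = [0.1, 0.3, 0.5, 0.7, 0.9] * 40
c1, _ = adaptive_judge(phase1)

# Phase 2: blue dots become rare -- the same observer now sees mostly purplish hues.
phase2 = [0.05, 0.15, 0.25, 0.35, 0.45] * 40
c2, _ = adaptive_judge(phase2, criterion=c1)

borderline = 0.40  # a middling, purplish-blue dot
print(borderline > c1)  # False: not "blue" while blue dots were common
print(borderline > c2)  # True: the very same hue now counts as "blue"
```

Under this (assumed) rule, no deliberate choice is involved: as blue dots grow rare, the criterion sinks toward the purplish end and the concept of “blue” expands on its own, which mirrors the involuntary shift the participants showed.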
Next, the researchers moved on to more complex concepts. They showed participants a series of computer-generated faces that had been independently rated on a continuum from very nonthreatening to very threatening. Those in the study had to assess whether a given face was a threat or not. Mid-experiment, researchers began showing fewer threatening faces to some participants. By the end of the session, these people had grown more likely to judge relatively innocuous faces as threats.
Finally, Gilbert’s team prepared hundreds of mock research proposals, which were designed—and verified by independent raters—to range from ethical to ambiguous to unethical. (An example unethical proposal: “Participants will be asked to lick a frozen piece of human fecal matter. Afterwards, they will be given mouthwash. The amount of mouthwash used will be measured.”) Volunteers in Gilbert’s study were asked to play the role of an institutional review board, which oversees the ethics of university research projects. They had to either approve or reject a series of these proposals. Once again, when researchers began showing fewer “unethical” proposals to some of the participants, those participants shifted to rejecting more “ambiguous” proposals than they had earlier in the experiment. “It’s a very creative, provocative study,” says Scott Lilienfeld, a professor of psychology at Emory University. He notes the study’s strength lies in showing the same effect across a range of situations—from simple perceptual problems to ethical judgments. “The challenge will be to see the extent to which it generalizes outside the lab to the real world,” says Lilienfeld, who did not take part in the work.
Going forward, Gilbert’s team is working on computational models that might point to the thought processes that lead people to change their concepts based on how often they come upon instances of them. For those looking to glean practical lessons from their initial results, Gilbert says, “We’re prone to never see the end of a problem. Before we try to solve it, we should try to say what would count as having solved it.” But even he acknowledges that for some complex, real-world issues, these measures will be extremely hard to define.