The Road to Pseudoscientific Thinking

How to prevent the most salient feature from being the least informative


This article was published in Scientific American's former blog network and reflects the views of the author, not necessarily those of Scientific American.


You are in a crowd of new people, and you need to remember all of their names. One of them, Amanda, has purple hair.

What do you do? You naturally remember her as “Amanda with the purple hair.” This initially salient feature—something that jumps out at you—has seduced you into an easy memory hack.

Except, there is a catch. Unless you have learned other things about how Amanda looks and acts, your mnemonic is only going to be useful if Amanda never changes her hair. Despite being salient, her hair is an ineffective feature to represent the ‘concept’ of the person you met earlier.




When faced with an opportunity to develop a new category, we are drawn to the most obvious features. This often makes sense to save mental resources. Like most time-saving strategies, however, it can backfire. Sometimes the most salient feature is the least informative.

Memory starts with encoding the world around us, and if uninformative salient information hijacks this process, the memory associated with a concept is (at best) inefficient.

I corresponded about this with Caitlyn McColeman, who is currently working on her Ph.D. in cognitive psychology at Simon Fraser University in Canada. According to McColeman: “The human ability to reason with abstract ideas is what makes higher order thinking possible, but those abstract ideas are developed using an imperfect system.”

“Having heard that someone was once vaccinated and then diagnosed with autism might make someone believe that vaccines cause autism (despite evidence to the contrary). Having established that vaccinations are associated with features such as ‘causing autism’, a person is more likely to vilify vaccines entirely. Such concepts, once established, are hard to unlearn. As such, one of the contributing factors to pseudoscientific thinking is our overly-generous concept system.”

McColeman studies how people access features in categories using eye trackers. “From this, we can see how people are looking at information, and how their eye gaze changes during the learning process. Sometimes, a feature that is helpful will be less obvious than a feature that is useless when deciding which category something is in, and we measure how long it takes people to figure that out and how their eye movements change while they do so.”

This means that just because something catches our attention, or is easy to remember, it does not mean it is useful for understanding a new thing we want to learn. It also means that we can learn how to categorize information over time, for example learning how to steer away from scary attention-grabbing pseudoscientific claims.
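The split between salience and usefulness can be made concrete with a toy simulation (my own illustration, not McColeman's actual paradigm): exemplars carry a flashy feature that is pure noise, like purple hair, and a subtle feature that really tracks category membership. Classifying by each feature alone shows which one is informative.

```python
import random

random.seed(0)

# Toy setup: each exemplar belongs to category "A" or "B" and has two
# binary features. The "salient" feature is random noise; the "subtle"
# feature matches the category 90% of the time.
def make_exemplar():
    category = random.choice(["A", "B"])
    salient = random.choice([0, 1])  # flashy but uninformative
    diagnostic = 1 if category == "A" else 0
    subtle = diagnostic if random.random() < 0.9 else 1 - diagnostic
    return category, salient, subtle

exemplars = [make_exemplar() for _ in range(10_000)]

def accuracy(feature_index):
    # Simple rule: predict "A" whenever the chosen feature is 1.
    hits = sum(1 for cat, *feats in exemplars
               if (feats[feature_index] == 1) == (cat == "A"))
    return hits / len(exemplars)

print(f"salient feature accuracy: {accuracy(0):.2f}")  # near chance (~0.50)
print(f"subtle feature accuracy:  {accuracy(1):.2f}")  # near 0.90
```

A learner who attends only to the salient feature classifies at chance; shifting attention to the subtle feature is what the eye-tracking studies can observe happening over the course of learning.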

How can we be less distracted by unimportant information? Perhaps by visualizing important information well. If we strip data displays down to the bare bones, making them eye-catching yet easy to understand at a glance, we give them a chance to defeat the nonsense. To win over the purple hair.

“In a new series of studies,” McColeman says, “I am exploring how attention to features relates to interpreting data displays. Because graphs are really just abstract representations of numbers in the world, I suspect that providing data in easy-to-discern groups and patterns will make communicating information easier and help more people make more data-driven decisions.”

So, where to from here? Are there any cool, futuristic applications of such insights? According to McColeman, “I expect that category learning work from human learning will help computer vision moving forward, as we understand the regularities in the environment that people are picking up on. There’s still a lot of room for improvement in getting computer systems to notice the same things that people notice.” We need to help people, and computers, to avoid being distracted by unimportant, attention-grabbing information.

The take-home message from this line of research seems to be: When fighting the post-truth war against pseudoscience and misinformation, make sure that important information is eye-catching and quickly understandable.
