People often blame social media algorithms that prioritize extreme content for increasing political polarization, but this effect has been difficult to prove. Only the platform owners have access to their algorithms, so researchers can’t identify possible tweaks to the products’ behavior without the platforms’ (increasingly rare) cooperation.
A study in Science not only provides compelling evidence that these algorithms cause polarization but also shows the trend can be mitigated without getting a platform’s approval or removing posts.
The researchers created a browser extension that reorders users' X feeds, demoting or promoting posts that display attitudes linked to polarization, such as partisan animosity and support for undemocratic practices. The tool uses a large language model (LLM) to analyze and rerank the posts in real time.
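The core reranking idea can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: the real tool scores each post with an LLM, which the stub classifier below stands in for so the example runs offline, and all names here are invented.

```python
def polarization_score(text: str) -> float:
    """Stand-in for an LLM classifier rating partisan animosity (0 to 1).

    The real system would send the post text to a language model; here a
    crude keyword check keeps the sketch self-contained.
    """
    hostile_markers = ("enemy", "destroy", "traitor", "hate")
    hits = sum(marker in text.lower() for marker in hostile_markers)
    return min(1.0, hits / 2)


def rerank(posts: list[str], demote: bool = True) -> list[str]:
    """Reorder a feed, pushing polarizing posts down (or, if demote is
    False, up). Python's sort is stable, so posts with equal scores keep
    their original timeline order."""
    return sorted(posts, key=polarization_score, reverse=not demote)


feed = [
    "They are the enemy and want to destroy us",
    "Great turnout at the local farmers market",
    "New transit line opens next month",
]
print(rerank(feed))  # the polarizing post drops to the bottom
```

In the browser extension setting, a function like `rerank` would run over the posts currently loaded in the page before they are rendered, so no content is removed and no platform cooperation is needed.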
“Only the platforms have had the power to shape and understand these algorithms,” says study co-author and University of Washington information scientist Martin Saveski. “This tool gives that power to independent researchers.”
The team conducted an experiment over 10 days in the run-up to the 2024 U.S. election. More than 1,200 volunteer participants saw feeds in which polarizing content was either significantly down-ranked, reducing the chances of users seeing it before they stopped scrolling, or slightly up-ranked.
Regardless of political orientation, those for whom polarizing posts were de-emphasized felt warmer toward the group that opposed their viewpoints (based on short surveys) than did those with unaltered feeds, whereas those who saw boosted polarizing posts felt colder.
The difference was two to three degrees on a 100-degree “feeling thermometer.” That might not seem big, but “it’s comparable to three years of historical change on average in the U.S.,” says co-author Chenyan Jia, a communication scientist at Northeastern University. The manipulations also affected how much sadness and anger participants reported feeling while scrolling.
According to University of Toronto psychologist Victoria Oldemburgo de Mello, who studies how technology shapes behavior and society, the study authors impressively combined tight control with a real-world setting. “And they do it in a clever way that bypasses [platform] approval. No one has done this before.” The effects’ persistence is unclear—they might dissipate or compound over time, she adds. The researchers say that’s an important direction for future work and have made their code freely available so other scientists can dig in as well.
The current version of the tool works only for browser-based social media sites. Making something that could be used with apps is “technically more difficult, with the way [they] work, but it’s something we’re exploring,” Saveski says.
The researchers also plan to study other interventions for social media feeds, taking advantage of the flexibility offered by LLM analysis, Saveski adds. “Our framework is very general, and one can think about well-being, mental health, and so on.”

