We Need Product Safety Regulations for Social Media

As social media more frequently exposes people to brutality and untruths, we need to treat it like a consumer product, and that means product safety regulations

Like many people, I’ve used Twitter, or X, less and less over the last year. There is no single reason for this: the system has simply become less useful and fun. But when the terrible news about the attacks in Israel broke recently, I turned to X for information. Instead of updates from journalists (which is what I used to see during breaking news events), I was confronted with graphic images of the attacks that were brutal and terrifying. I wasn’t the only one; some of these posts had millions of views and were shared by thousands of people.

This wasn’t an ugly episode of bad content moderation. It was the strategic use of social media to amplify a terror attack made possible by unsafe product design. This misuse of X could happen because, over the past year, Elon Musk has systematically dismantled many of the systems that kept Twitter users safe and laid off nearly all the employees who worked on trust and safety at the platform. The events in Israel and Gaza have served as a reminder that social media is, before anything else, a consumer product. And like any other mass consumer product, using it carries big risks.

When you get in a car, you expect it will have functioning brakes. When you pick up medicine at the pharmacy, you expect it won’t be tainted. But it wasn’t always like this. The safety of cars, pharmaceuticals and dozens of other products was terrible when they first came to market. It took much research, many lawsuits, and regulation to figure out how to get the benefits of these products without harming people.


Like cars and medicines, social media needs product safety standards to keep users safe. We still don’t have all the answers on how to build those standards, which is why social media companies must share more information about their algorithms and platforms with the public. The bipartisan Platform Accountability and Transparency Act would give users the information they need now to make the most informed decisions about what social media products they use and also let researchers get started figuring out what those product safety standards could be.

Social media risks go beyond amplified terrorism. The dangers that attention-maximizing algorithms pose to teens, and particularly to girls, whose brains are still developing have become impossible to ignore. Other product design elements, often called “dark patterns,” designed to keep people using for longer also appear to tip young users into social media overuse, which has been associated with eating disorders and suicidal ideation. This is why 41 states and the District of Columbia are suing Meta, the company behind Facebook and Instagram. The complaint against the company accuses it of engaging in a “scheme to exploit young users for profit” and building product features to keep kids logged on to its platforms longer, while knowing that was damaging to their mental health.

Whenever they are criticized, Internet platforms have deflected blame onto their users. They say it’s their users’ fault for engaging with harmful content in the first place, even if those users are children or the content is financial fraud. They also claim to be defending free speech. It’s true that governments all over the world order platforms to remove content, and some repressive regimes abuse this process. But the current issues we are facing aren’t really about content moderation. X’s policies already prohibit violent terrorist imagery. The content was widely seen anyway only because Musk took away the people and systems that stop terrorists from leveraging the platform. Meta isn’t being sued because of the content its users post but because of the product design decisions it made while allegedly knowing they were dangerous to its users. Platforms already have systems to remove violent or harmful content. But if their feed algorithms recommend content faster than their safety systems can remove it, that’s simply unsafe design.

More research is desperately needed, but some things are becoming clear. Dark patterns like autoplaying videos and endless feeds are particularly dangerous to children, whose brains are still developing and who often lack the impulse control to put their phones down. Engagement-based recommendation algorithms disproportionately recommend extreme content.

In other parts of the world, authorities are already taking steps to hold social media platforms accountable for their content. In October, the European Commission requested information from X about the spread of terrorist and violent content as well as hate speech on the platform. Under the Digital Services Act, which came into force in Europe this year, platforms are required to take action to stop the spread of this illegal content and can be fined up to 6 percent of their global revenues if they don’t do so. If this law is enforced, maintaining the safety of their algorithms and networks will be the most financially sound decision for platforms to make, since ethics alone do not seem to have generated much motivation.

In the U.S., the legal picture is murkier. The case against Facebook and Instagram will likely take years to work through our courts. Yet, there is something that Congress can do now: pass the bipartisan Platform Accountability and Transparency Act. This bill would finally require platforms to disclose more about how their products function so that users can make more informed decisions. Moreover, researchers could get started on the work needed to make social media safer for everyone.

Two things are clear: First, online safety problems are leading to real, offline suffering. Second, social media companies can’t, or won’t, solve these safety problems on their own. And those problems aren’t going away. As X is showing us, even safety issues like the amplification of terror that we thought were solved can pop right back up. As our society moves online to an ever-greater degree, the idea that anyone, even teens, can just “stay off social media” becomes less and less realistic. It’s time we require social media to take safety seriously, for everyone’s sake.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
