Why I’m Suing OpenAI, the Creator of ChatGPT

My lawsuit in Hawaii lays out the safety issues in OpenAI’s products and how they could irreparably harm both Hawaii and the rest of the U.S.


“I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones,” wrote New York Times technology columnist Kevin Roose in March, “and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.”

He’s right. That’s why I recently filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in the state of Hawaii, where I live, until it can demonstrate the legitimate safety measures that the company itself has called for in its “large language model” products.

We are at a pivotal moment. Leaders in AI development—including OpenAI’s own CEO Sam Altman—have acknowledged the existential risks posed by increasingly capable AI systems. In June 2015, Altman stated: “I think AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there’ll be great companies created with serious machine learning.” Yes, he was probably joking—but it’s not a joke.


Eight years later, in May 2023, hundreds of AI researchers and technology leaders, including Altman himself, signed an open statement comparing AI risks to other existential threats such as climate change and pandemics. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement, released by the Center for AI Safety, a California nonprofit, reads in its entirety.

I’m at the end of my rope. For the past two years, I’ve tried to work with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii. These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation, which means taking action before the actual harm materializes, because it may be too late if we wait. Unfortunately, despite collaboration with key senators and committee chairs, my state legislative efforts died early after being introduced. And in the meantime, the Trump administration has rolled back almost every aspect of federal AI regulation and has essentially put on ice the international treaty effort that began with the Bletchley Declaration in 2023. At no level of government are there any safeguards for the use of AI systems in Hawaii.

Despite these statements, OpenAI has abandoned key safety commitments: it walked back its “superalignment” initiative, which had promised to dedicate 20 percent of its computational resources to safety research, and, late last year, it reversed its prohibition on military applications. Critical safety researchers have left, including co-founder Ilya Sutskever and Jan Leike, who publicly stated in May 2024, “Over the past years, safety culture and processes have taken a backseat to shiny products.” The company’s governance structure was fundamentally altered during a November 2023 leadership crisis, and the reconstituted board removed important safety-focused oversight mechanisms. Most recently, in April 2025, OpenAI eliminated guardrails against misinformation and disinformation, opening the door to releasing “high risk” and “critical risk” AI models, “possibly helping to swing elections or create highly effective propaganda campaigns,” according to Fortune magazine.

In its first response, OpenAI has argued that the case should be dismissed because regulating AI is fundamentally a “political question” that should be addressed by Congress and the president. I, for one, am not comfortable leaving such important decisions to this president or this Congress—especially when they have done nothing to regulate AI to date.

Hawaii faces distinct risks from unregulated AI deployment. Recent analyses indicate that a substantial portion of Hawaii’s professional services jobs could face significant disruption within five to seven years as a consequence of AI. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging.

Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.

My federal lawsuit applies well-established legal principles to this novel technology and makes four key claims:

Product liability claims: OpenAI’s AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company’s deliberate removal of safety measures it previously deemed essential.

Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.

Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.

Public nuisance: OpenAI’s deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.

Federal courts have recognized the viability of such claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit Court of Appeals, whose jurisdiction includes Hawaii, establish that technology companies can be held liable for design defects that create foreseeable risks of harm.

I’m not asking for a permanent ban on OpenAI or its products here in Hawaii but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed: reinstating its previous commitment to allocate 20 percent of its resources to alignment and safety research; implementing the safety framework outlined in its own publication “Planning for AGI and Beyond,” which attempts to create guardrails for dealing with AI as intelligent as, or more intelligent than, its human creators; restoring meaningful oversight through governance reforms; creating specific safeguards against the manipulation of democratic processes; and developing protocols to protect Hawaii’s unique cultural and natural resources.

These items simply require the company to adhere to safety standards it has publicly endorsed but has failed to consistently implement.

While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.

Many experts believe the development of increasingly capable AI systems will be one of the most significant technological transformations in human history, perhaps in a league with fire, according to Google CEO Sundar Pichai. “AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,” Pichai said in 2018.

He’s right, of course. The decisions we make today will profoundly shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures.

What is happening now with OpenAI’s breakneck AI development and deployment to the public is, to echo technologist Tristan Harris’s succinct April 2025 summary, “insane.” My lawsuit aims to restore just a little bit of sanity.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Tamlyn Hunt is a scholar at the University of California, Santa Barbara, where he focuses on philosophy and neuroscience. He is the author of numerous neuroscience and philosophy papers examining the nature and role of electromagnetic effects in consciousness.
