AI-Influenced Weapons Need Better Regulation

The weapons are error-prone and could hit the wrong targets


Flames and smoke rise following an artillery strike on the 30th day of the invasion of Ukraine by Russian forces in the northeastern city of Kharkiv on March 25, 2022.

With Russia’s invasion of Ukraine as the backdrop, the United Nations recently held a meeting to discuss the use of autonomous weapons systems, commonly referred to as killer robots. These are essentially weapons that are programmed to find a class of targets, then select and attack a specific person or object within that class, with little human control over the decisions that are made.

Russia took center stage in this discussion, in part because of its potential capabilities in this space, but also because its diplomats thwarted the effort to discuss these weapons, saying sanctions made it impossible to properly participate. The discussion had already been far too slow; Russia’s spoiling slowed it down even further.

I have been tracking the development of autonomous weapons and attending the UN discussions on the issue for over seven years, and Russia’s aggression is becoming an unfortunate test case for how artificial intelligence (AI)–fueled warfare can and likely will proceed.


The technology behind some of these weapons systems is immature and error-prone, and there is little clarity on how the systems function and make decisions. Some of these weapons will invariably hit the wrong targets, and competitive pressures might result in deployment of more systems that are not ready for the battlefield.

To avoid the loss of innocent lives and the destruction of critical infrastructure in Ukraine and beyond, we need nothing less than the strongest diplomatic effort to prohibit, in some cases, and regulate, in others, the use of these weapons and the technologies behind them, including AI and machine learning. This is critical because when military operations are proceeding poorly, countries might be tempted to use new technologies to gain an advantage. An example of this is Russia’s KUB-BLA loitering munition, which has the ability to identify targets using AI.

Data fed into AI-based systems can teach remote weapons what a target looks like, and what to do upon reaching that target. While similar to facial recognition tools, AI technologies for military use have different implications, particularly when they are meant to destroy and kill, and as such, experts have raised concerns about their introduction into dynamic war contexts. And while Russia may have been successful in thwarting real-time discussion of these weapons, it isn’t alone. The U.S., India and Israel are all fighting regulation of these dangerous systems.

AI might be more mature and well-known in its use in cyberwarfare, including to supercharge malware attacks or to better impersonate trusted users in order to gain access to critical infrastructure, such as the electric grid. But major powers are using it to develop physically destructive weapons. Russia has already made important advances in autonomous tanks, machines that can run without human operators who could theoretically override mistakes, while the United States has demonstrated a number of capabilities, including munitions that can destroy a surface vessel using a swarm of drones. AI is employed in the development of swarming technologies and loitering munitions, also called kamikaze drones. Rather than the futuristic robots seen in science-fiction movies, these systems use previously existing military platforms that leverage AI technologies. Simply put, a few lines of code and new sensors can make the difference between a military system functioning autonomously or under human control. Crucially, introducing AI into decision-making by militaries could lead to overreliance on the technology, shaping military decision-making and potentially escalating conflicts.

AI-based warfare might seem like a video game, but last September, according to Secretary of the Air Force Frank Kendall, the U.S. Air Force, for the first time, used AI to help identify a target or targets in “a live operational kill chain.” Presumably, this means AI was used to identify and kill human targets.

Little information was provided about the mission, including whether any casualties that occurred were the intended targets. What inputs were used to identify such individuals and could there have been possible errors in identification? AI technologies have been shown to be biased, particularly against women and people in minority communities. False identifications disproportionately impact already marginalized and racialized groups.

If recent social media discussions among the AI community are any indication, the developers, largely from the private sector, who are creating the new technologies that some militaries are already deploying are largely unaware of their impact. Tech journalist Jeremy Kahn argues in Fortune that a dangerous disconnect exists between developers and leading militaries, including U.S. and Russian, which are using AI in decision-making and data analysis. The developers seem to be unaware of the general-purpose nature of some of the tools they are building and how militaries could use them in warfare, including to target civilians.

Undoubtedly, lessons from the current invasion will also shape the technology projects the militaries pursue. At the moment, the United States is at the head of the pack, but a joint statement by Russia and China in early February notes that they aim to “jointly build international relations of a new type,” and specifically points to their aim to shape governance of new technologies, including what I believe will be military uses of AI.

Independently, the U.S. and its allies are developing norms on responsible military uses of AI, but they are generally not talking with potential adversaries. In general, states with more technologically advanced militaries have been unwilling to accept any constraints on the development of AI technology. This is where international diplomacy is critical: there must be constraints on these types of weapons, and everyone has to agree to shared standards and transparency in use of the technologies.

The war in Ukraine should be a wake-up call regarding the use of technology in warfare, and the need to regulate AI technologies to ensure civilian protection. Unchecked and potentially hasty development of military applications of artificial intelligence will continue to undermine international humanitarian law and norms regarding civilian protection. Though the international order is in disarray, the solutions to current and future crises are diplomatic, not military, and the next gathering of the U.N. or another group needs to rapidly address this new era of warfare.
