Don’t Let Robots Pull the Trigger

Weapons that kill enemies on their own threaten civilians and soldiers alike

The killer machines are coming. Robotic weapons that target and destroy without human supervision are poised to start a revolution in warfare comparable to the invention of gunpowder or the atomic bomb. The prospect poses a dire threat to civilians—and could lead to some of the bleakest scenarios in which artificial intelligence runs amok. A prohibition on killer robots, akin to bans on chemical and biological weapons, is badly needed. But some major military powers oppose it.

The robots are no technophobic fantasy. In July 2017, for example, Russia's Kalashnikov Group announced that it had begun development of a camera-equipped 7.62-millimeter machine gun that uses a neural network to make “shoot/no-shoot” decisions. An entire generation of self-controlled armaments, including drones, ships and tanks, is edging toward varying levels of autonomous operation. The U.S. appears to hold a lead in R&D on autonomous systems—with $18 billion slated for investment from 2016 to 2020. But other countries with substantial arms industries are also making their own investments.

Military planners contend that “lethal autonomous weapons systems”—a more anodyne term—could, in theory, bring a detached precision to war fighting. Such automatons could diminish the need for troops and reduce casualties by leaving the machines to battle it out. Yet control by algorithm can potentially morph into “out of control.” Existing AI cannot deduce the intentions of others or make critical decisions by generalizing from past experience in the chaos of war. The inability to read behavioral subtleties—to distinguish civilian from combatant or friend from foe—should call into question whether AIs should replace GIs in any foreseeable mission. A killer robot of any kind would be a trained assassin, not unlike Arnold Schwarzenegger in The Terminator. After the battle is done, moreover, who would be held responsible when a machine does the killing? The robot? Its owner? Its maker?

With all these drawbacks, a fully autonomous robot fashioned using near-term technology could create a novel threat wielded by smaller nations or terrorists with scant expertise or financial resources. Swarms of tiny, weaponized drones, perhaps even made using 3-D printers, could wreak havoc in densely populated areas. Prototypes are already being tested: the U.S. Department of Defense demonstrated a nonweaponized swarm of more than 100 micro drones in 2016. Stuart Russell of the University of California, Berkeley, a prominent figure in AI research, has suggested that “antipersonnel micro robots” deployed by just a single individual could kill many thousands and constitute a potential weapon of mass destruction.

Since 2013 the United Nations Convention on Certain Conventional Weapons (CCW), which regulates incendiary devices, blinding lasers and other armaments thought to be overly harmful, has debated what to do about lethal autonomous weapons systems. Because of opposition from the U.S., Russia and a few others, the discussions have not advanced to the stage of drafting formal language for a ban. The U.S., for one, has argued that its policy already stipulates that military personnel retain control over autonomous weapons and that premature regulation could put a damper on vital AI research.

A ban need not be overly restrictive. The Campaign to Stop Killer Robots, a coalition of 89 nongovernmental organizations from 50 countries that has pressed for such a prohibition, emphasizes that it would be limited to offensive weaponry and would not extend to antimissile and other defensive systems that automatically fire in response to an incoming warhead.

The current impasse has prompted the campaign to consider rallying at least some nations to agree to a ban outside the forum provided by the CCW, an option used before to kick-start multinational agreements that prohibit land mines and cluster munitions. A preemptive ban on autonomous killing machines, with clear requirements for compliance, would stigmatize the technology and help keep killer robots out of military arsenals.

Since it was first presented at the International Joint Conference on Artificial Intelligence in Stockholm in July, 244 organizations and 3,187 individuals have signed a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” The rationale for making such a pledge was that laws had yet to be passed to bar killer robots. Without such a legal framework, the day may soon come when an algorithm makes the fateful decision to take a human life.
