Back in the early 1970s, a handful of scientists, engineers, defense contractors and U.S. Air Force officers got together to form a professional group. They were essentially trying to solve the same problem: how to build machines that can operate on their own without human control, and how to convince both the public and a reluctant Pentagon brass that robots on the battlefield are a good idea. For decades they met once or twice a year, in relative obscurity, to talk over technical issues, exchange gossip and renew old friendships. This once cozy group, the Association for Unmanned Vehicle Systems International, now encompasses more than 1,500 member companies and organizations from 55 countries. The growth happened so fast, in fact, that the group found itself in something of an identity crisis. At one of its meetings in San Diego, it even hired a “master storyteller” to help pull together the narrative of the amazing changes in robotic technology. As one attendee summed up, “Where have we come from? Where are we? And where should we—and where do we want to—go?”

What prompted the group’s soul-searching is one of the most profound changes in modern warfare since the advent of gunpowder or the airplane: an astonishingly rapid rise in the use of robots on the battlefield. Not a single robot accompanied the U.S. advance from Kuwait toward Baghdad in 2003. Since then, 7,000 “unmanned” aircraft and another 12,000 ground vehicles have entered the U.S. military inventory, entrusted with missions that range from seeking out snipers to bombing the hideouts of al-Qaeda higher-ups in Pakistan. The world’s most powerful fighting forces, which once eschewed robots as unbecoming to their warrior culture, have now embraced a war of the machines as a means of combating an irregular enemy that triggers remote explosions with cell phones and then blends back into the crowd. These robotic systems are not only having a big effect on how this new type of warfare is fought, but they also have initiated a set of contentious arguments about the implications of using ever more autonomous and intelligent machines in battle. Moving soldiers out of harm’s way may save lives, but the growing use of robots also raises deep political, legal and ethical questions about the fundamental nature of warfare and whether these technologies could inadvertently make wars easier to start.

The earliest threads of this story arguably hark back to the 1921 play R.U.R., in which Czech writer Karel Čapek coined the word “robot” to describe mechanical servants that eventually rise up against their human masters. The word was packed with meaning, because it derived from the Czech word for “servitude” and the older Slavic word for “slave,” historically linked to the “robotniks,” peasants who had revolted against rich landowners in the 1800s. This theme of robots taking on the work we don’t want to do but then ultimately assuming control is a staple of science fiction that continues today in The Terminator and The Matrix.

Today roboticists invoke the descriptors “unmanned” or “remote-operated” to avoid Hollywood-fueled visions of machines that are plotting our demise. In the simplest terms, robots are machines built to operate in a “sense-think-act” paradigm. That is, they have sensors that gather information about the world. Those data are then relayed to computer processors, and perhaps artificial-intelligence software, that use them to make appropriate decisions. Finally, based on that information, mechanical systems known as effectors carry out some physical action on the world around them. Robots do not have to be anthropomorphic, as in the other Hollywood trope of a man in a metal suit. The size and shape of the systems that are beginning to carry out these actions vary widely and rarely evoke the image of C-3PO or the Terminator.
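To make the sense-think-act loop concrete, here is a minimal, purely illustrative sketch in Python. The sensor reading, decision rule and effector command are hypothetical stand-ins, not any fielded system’s software:

```python
# Illustrative sense-think-act loop (hypothetical sensor, decision rule and effector).
import random
import time

def sense():
    """Sensor: return a simulated (distance_m, bearing_deg) reading for the nearest obstacle."""
    return random.uniform(0.5, 20.0), random.uniform(0.0, 360.0)

def think(distance_m, bearing_deg):
    """Processor: choose an action based on the sensed data."""
    if distance_m < 2.0:
        return "stop"
    if distance_m < 5.0:
        return "turn_away"
    return "drive_forward"

def act(command):
    """Effector: carry out the chosen physical action (here, simply printed)."""
    print(f"effector command: {command}")

if __name__ == "__main__":
    for _ in range(5):          # a few passes through the sense-think-act cycle
        distance, bearing = sense()
        act(think(distance, bearing))
        time.sleep(0.1)
```

The point of the sketch is only the division of labor: sensing produces data, a processing step turns data into a decision, and an effector turns the decision into action on the world.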

The Global Positioning System, video-game-like remote controls and a host of other technologies have made robots both useful and usable on the battlefield during the past decade. The increased ability to observe, pinpoint and then attack targets in hostile settings without having to expose the human operator to danger became a priority after the 9/11 attacks, and each new use of the systems on the ground created a success story that had broader repercussions. As an example, in the first few months of the Afghan campaign in 2001, a prototype of the PackBot, now used extensively to defuse bombs, was sent into the field for testing. The soldiers liked it so much that they would not return it to its manufacturer, iRobot, which has since gone on to sell thousands. Similarly, another robotics company executive recounts that before 9/11, he could not get his calls returned by the Pentagon. Afterward, he was told: “Make ’em as fast as you can.”

This accelerating acceptance of military robotics became apparent as the Iraq War played out. When U.S. forces went into Iraq in 2003, the ground invasion force had no unmanned systems. By the end of 2004 the number had risen to 150 or so. A year later it had reached 2,400. Today the overall U.S. military inventory is more than 12,000. The same trend occurred with air weaponry: the U.S. military went from having a handful of unmanned aerial vehicles supporting the invasion force to more than 7,000 now. And this progression is just the start. One U.S. Air Force three-star general forecasts that the next major U.S. conflict will involve not the thousands of robots currently in the field but “tens of thousands.”

The raw numbers reveal an important shift in attitude by a military that just a few years ago remained dubious of the technology’s capabilities and protective of the age-old warrior’s prerogative of leading the charge into combat. Today the U.S. Air Force, Army and Navy entice teenage recruits through television advertising that extols these systems; as one promotion puts it, the U.S. Navy is “working every day to unman the front lines.”

When teens do join the military, exposure to automated systems is integral to their experience, from induction to discharge. They use the latest virtual-training software to learn how to operate a particular weapons system. After training, they may well operate a lawnmower-size PackBot or a TALON ground robot that can defuse bombs or peek over the top of a ridge in the hunt for insurgents in Iraq or Afghanistan.

If they end up at sea, they may well serve on an Aegis-class destroyer or Littoral Combat Ship, which operate as mother ships for a range of systems, from Fire Scout unmanned helicopters to Protector robotic sentry motorboats. If their career takes them into submarines, they could end up controlling unmanned underwater vehicles such as the REMUS (Remote Environmental Monitoring Units, a torpedo-shaped robot sub originally developed by the Woods Hole Oceanographic Institution) to detect mines or to conduct surveillance of unfriendly coastlines. If they become aviators, they may “fly” Predator or Global Hawk drones over Central Asia, while never physically leaving the continental U.S.

The War Bots of Tomorrow
Such technologies are billed in a recruiting ad as part of today’s military, while “seeming like science fiction.” In reality, they are merely the first generation, a suggestion of more to come. That is, today’s PackBot robot hunting roadside bombs and the Predator drones flying over Afghanistan represent the equivalent of the Model T Ford and the Wright brothers’ Flyer. Prototypes for the next generation reveal three key ways that robots will change how we conduct warfare.

The idea of robots as mere “unmanned systems”—identical to any other machine, except without the presence of a human operator inside—is beginning to fade. The evolution recapitulates the trajectory of automotive history: thinking of cars as mere “horseless carriages” fell away as designers started to consider wholly novel forms and sizes. The similar casting off of preconceptions about robots is leading the machines to take on a wide range of shapes. As would be expected, some models take their inspiration from biology. Boston Dynamics’s BigDog, for one, is a metallic, equipment-toting quadruped. Others are hybrids, such as a Naval Postgraduate School surveillance bot that has both wings and legs. But other systems in early development have literally no fixed form at all. ChemBot, a creation of the University of Chicago and iRobot, is a bloblike machine that shifts shape so that it can squeeze through a hole in a wall.

With no humans inside, the size of robots can range wildly. Miniaturized robots already measure in millimeters and weigh in grams. Take a surveillance bot made by AeroVironment for urban combat. It mimics a hummingbird in size and in its ability to hover over a target. The next frontier is nanoscale robotics (structures measured in billionths of a meter), which some scientists believe will become commonplace within a few decades. In war these machines might be used for roles that range from “smart dust” that detects the enemy to cellular-level machines inside the human body that repair wounds or, in turn, cause them. At the other end of the scale, the ability to deploy a system that does not have to take into account human bodily needs is leading to gigantic unmanned systems, such as Lockheed Martin’s High-Altitude Airship, an unmanned blimp that carries a radar the length of a football field, designed to fly at altitudes above 19,800 meters for more than a month at a time.

Beyond size and shape, a second key change is the widening of roles these machines can perform in warfare. Much like the early “aeroplanes” in World War I, robots started out only for observation and reconnaissance and have now expanded into new tasks. In 2007 technology development company QinetiQ North America, maker of the TALON, introduced the MAARS robot, which is armed with a machine gun and grenade launcher and can take on sentry and sniper duty. Meanwhile, med bots such as the U.S. Army Medical Research and Materiel Command’s Robotic Extraction Vehicle are designed to drag wounded soldiers to safety and then administer care.

The third key change is the robots’ ever growing intelligence and autonomy. The inexorable growth in computing power means that today’s recently enlisted soldiers may end their careers witnessing robots powered by computers literally a billion times more capable than those currently available. The World War II–era military did not differentiate between the B-17 and B-24 bombers by how smart they were, but latter-day weapons systems require just such distinctions. The Predator series of unmanned planes, for example, has evolved from being purely remote-controlled to now being able to take off and land on its own and track 12 targets at once; the target-recognition software can even trace footprints back to their point of origin. Even so, the U.S. military is already planning to replace these planes, deployed since 1995, with a newer generation.

The expansion of robotic intelligence and autonomy raises profound questions of what roles are appropriate to outsource to machines. These decisions must be weighed not only on how effective the machines might be in battle but also on what this shift in responsibility would mean for both their human commanders and the broader political, ethical and legal responsibility for their conduct. The most likely outcome in the near future is for robots to take on the semblance of “war fighter associates.” In this scenario, mixed teams of humans and robots would work together, each doing what they do best. The human element may well turn out to be akin to the quarterback in a football game, calling plays for robotic teammates while giving them enough autonomy to react to changing circumstances.

The Real Story
These remarkable developments may still not fully capture the story of where robotics is headed and what it means for our world and the future of warfare. The full implications cannot be gleaned from describing physical capabilities, just as the significance of gunpowder is not captured by noting that it produced a chemical explosion that allowed a longer trajectory for projectiles.

Robots are one of those rare inventions that literally change the rules of the game. Such a “revolutionary” technology does not give one side a permanent advantage, as some analysts mistakenly believe, because it is quickly adopted or adapted to by other combatants. Rather it causes shake-ups, not only on the battlefield but in the social structures surrounding it. The longbow, for example, was not notable simply because it allowed the English to beat the French at the Battle of Agincourt during the Hundred Years’ War; rather it let organized groups of peasants triumph over knights, ending the age of feudalism.

An apt historical parallel to the current period may well turn out to be World War I. Back then, strange, exciting new technologies that had been viewed as merely science fiction just years earlier were introduced and then used in increasing numbers on the battlefield. Indeed, it was H. G. Wells’s 1903 short story “The Land Ironclads” that inspired Winston Churchill, then First Lord of the Admiralty, to champion the development of the tank. Another story, by A. A. Milne, creator of the beloved Winnie-the-Pooh series, was among the first to raise the idea of using airplanes in war, while Arthur Conan Doyle (in his 1914 short story “Danger!”) and Jules Verne (in his 1869 novel 20,000 Leagues under the Sea) pioneered the notion of submarines’ full use in war. First users had an edge, but it was fleeting. Britain’s invention and early exploitation of the tank in World War I, for example, were surpassed a mere 20 years later, when the Germans proved with their blitzkrieg tactics that they had figured out how to use the new weapon more effectively.

The arrival of tanks, airplanes and submarines was important, however, because these weapons raised a wholly new set of political, moral and legal issues that had dramatic strategic consequences. For instance, differing interpretations between the U.S. and Germany over how submarines were legally allowed to fight (should they be allowed to sink merchant ships without warning?) drew America into the First World War, ultimately leading to its rise to superpower status. Similarly, airplanes proved useful not only for spotting and attacking troops at greater distances but also gave rise to aerial bombing, which often sent bombs raining onto civilian populations and gave an entirely new meaning to the term “home front.”

The Plot Thickens
We are seeing much the same circumstances today with military robotics. Take the idea of what it once meant to “go to war.” For democratic nations, it long signified a serious commitment that involved winning public support for an endeavor that jeopardized not just the lives of its citizens’ sons and daughters but the state’s very survival. Unmanned systems (and their ability to carry out remote acts of force) erode the deterrent exerted by public sentiment, a decline already begun by the end of the U.S. military draft in 1973.

This distancing of the human combatant from the theater of conflict may well make wars easier to start and may even change how we view them. For example, the U.S. has carried out more than 130 air strikes into Pakistan using Predator and Reaper unmanned craft. This number is more than triple the total of manned bomber strikes launched in the opening round of the Kosovo War a mere decade ago. But unlike in that war, the robotic air strikes into Pakistan have prompted no debate at all in Congress and relatively little reporting in the media. In essence, we are engaging in what we would have previously called a “war,” but without public deliberation. The conflict is not even considered a war, because it comes without any cost in U.S. human lives. By one measure, these strikes have been highly effective: they have killed as many as 40 leaders of al-Qaeda, the Taliban and allied militant groups without having to send American troops or pilots into harm’s way. But the repercussions of these strikes raise questions that are still being answered.

What, for one, is this technology’s impact on the “war of ideas” we are fighting against terrorist recruiting and propaganda? That is, how and why do our painstaking efforts to act with precision emerge on the other side of the globe through a cloud of anger and misperception? Whereas we use adjectives such as “precise” and “costless” to describe the technology in our mass media, a leading newspaper in Pakistan declared the U.S. to be a “principal hate figure” and “all-purpose scapegoat” because of the strikes. Unfortunately, “drone” has become a colloquial word in Urdu, appearing in rock lyrics that accuse America of not fighting with honor. This issue becomes more complex when weighing who should be held accountable when things go wrong. Estimates of civilian casualties range from 200 to 1,000. But many of these incidents occurred close to some of the most dangerous terrorist leaders around. Where does one draw the line?

The meaning of “going to war” is also changing for the individual warrior in 2010. Setting off to battle has always meant that a soldier might never come home. Achilles and Odysseus sailed off to fight Troy. My grandfather shipped out to fight the Japanese after Pearl Harbor. Remote warfare has upended that enduring truth of the past 5,000 years of war. A growing number of soldiers wake up, drive to work, sit in front of computers and use robotic systems to battle insurgents 11,300 kilometers away. At the end of a day “at war,” they get back in their cars, drive home and, as one U.S. Air Force officer put it: “Within 20 minutes you are sitting at the dinner table talking to your kids.” The most dangerous part of their day is not the battlefield but the commute home.

This disconnection from the battlefield also leads to a demographic change in who does what in war, and it provokes new questions about a soldier’s identity (young enlisted troops doing jobs once limited to senior officers), status (the technician versus the warrior) and the nature of combat stress and fatigue. Remote operators may seem as if they are just playing video games, but they bear the psychological burden of fighting day after day after day, with lives on the ground depending on their flawless performance. Their commanders describe the challenges of leading units fighting remotely as far different from, and sometimes even more difficult than, leading regular units physically in battle.

With each step in the growing lethality and intelligence of robotics, the role of the “man in the loop” of decision making in war has begun to diminish. For example, the pace of war is such that only systems such as the Counter Rocket, Artillery and Mortar system, or C-RAM (which looks a bit like the Star Wars robot R2-D2, with a 20-millimeter automatic gun attached), can react quickly enough to shoot down incoming rockets or missiles. The human is certainly part of the decision making but mainly in the initial programming of the robot. During the actual operation of the machine, the operator really only exercises veto power, and a decision to override the robot must be made in only half a second, with few willing to challenge what they view as the better judgment of the machine.
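As a thought experiment only, the half-second veto window described above can be modeled with a short simulation; the names, timing and probabilities below are hypothetical illustrations, not a description of C-RAM’s actual software or doctrine:

```python
# Hypothetical simulation of a machine decision subject to a brief human veto window.
import random

VETO_WINDOW_S = 0.5   # the half-second override window described in the text

def resolve_engagement(operator_reaction_s: float, operator_objects: bool) -> str:
    """Return the outcome: a human veto counts only if it arrives inside the window."""
    if operator_objects and operator_reaction_s <= VETO_WINDOW_S:
        return "held fire (operator veto)"
    return "engaged (machine decision stood)"

if __name__ == "__main__":
    random.seed(0)
    for i in range(5):
        reaction = random.uniform(0.2, 1.5)   # simulated human reaction time in seconds
        objects = random.random() < 0.3       # simulated willingness to challenge the machine
        print(f"event {i}: reaction={reaction:.2f}s, objects={objects} -> "
              f"{resolve_engagement(reaction, objects)}")
```

The sketch makes the structural point: when reaction times routinely exceed the window and operators are reluctant to second-guess the machine, the nominal “man in the loop” rarely alters the outcome.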

Many observers argue that such a trend will reduce the likelihood of mistakes in war, as well as ensure that the laws of war are uniformly followed, as if they were software code in a computer processor. Yet this attitude ignores the complex environment of war. An unmanned system may be able to pick out a man carrying an AK-47 rifle from over a kilometer away and tell whether he fired it recently (by the weapon’s thermal signature), but knowing whether that man is an insurgent, a member of an allied militia or a simple shopkeeper will be as hard for the machine as it is today for any human soldier.

Nor is the age-old “fog of war” being lifted by technology, as former defense secretary Donald H. Rumsfeld and other advocates of the digital battlefield once believed. For instance, the sophisticated C-RAM technology reportedly once mistook a U.S. Army helicopter for an enemy target because of a programming error. Fortunately, no one was hurt. In South Africa in 2007, however, what an investigative report described as a “software glitch” in a similar antiaircraft system produced a far less benign outcome. Armed with a 35-millimeter cannon, the weapon was supposed to fire into the sky during a training exercise. Instead it leveled and fired in a circle, killing nine soldiers before it ran out of ammunition.

Such incidents, of course, raise immense legal concerns. How should one apportion accountability? What system of law can even be relied on for guidance? These instances demonstrate that technology often moves faster than our social institutions. How do we reconcile our 20th-century laws of war to the new reality?

A New Beginning
Our definitions and understandings of war, how it is fought and even who should fight are in great flux, driven by a remarkable new technology that delivers immense capability. Humankind has been in this kind of situation before. We often struggle to integrate and understand new technologies, then eventually come to see what was once considered strange and even unacceptable as completely normal. Perhaps the best example comes from the 1400s, when one French nobleman argued that guns were tools of murder a true soldier would not deign to use. Only cowards, he wrote, “would not dare to look in the face of the men they bring down from a distance with their wretched bullets.”

We have “progressed” since then, but the story today is much the same with robotics. Mastery of the technology may turn out to be much easier to achieve than resolution of the policy dilemmas arising from the incredible capabilities of machines that can change the world around them. Indeed, it is for this reason that some scientists see a different historical parallel for where we stand now with robotics: not the gun or the airplane but the atomic bomb. We are creating an exciting technology that pushes the frontiers of science yet raises such penetrating concerns beyond the scientific realm that we may well come to regret these elaborate engineering creations, as did some designers of early nuclear warheads. Of course, just like those inventors back in the 1940s, today’s robotics developers continue their work because it is militarily useful, highly profitable and at the cutting edge of science. As Albert Einstein supposedly said, “If we knew what it was we were doing, it would not be called research, would it?”

The real story is that what was once only fodder for science-fiction conventions has to be discussed seriously and not only at the Pentagon. This narrative is of importance not solely to what takes place at robotic trade group meetings, in the research labs or on the battlefield but to how the overall tale of humanity is playing itself out. Humankind had a 5,000-year monopoly on the fighting of war. That monopoly has ended.