Pros and Cons of Autonomous Weapons Systems

Amitai Etzioni, PhD
Oren Etzioni, PhD

Autonomous weapons systems and military robots are progressing from science fiction movies to designers’ drawing boards, to engineering laboratories, and to the battlefield. These machines have prompted a debate among military planners, roboticists, and ethicists about the development and deployment of weapons that can perform increasingly advanced functions, including targeting and application of force, with little or no human oversight.

Some military experts hold not only that autonomous weapons systems confer significant strategic and tactical advantages on the battlefield but also that they are preferable on moral grounds to the use of human combatants. In contrast, critics hold that these weapons should be curbed, if not banned altogether, for a variety of moral and legal reasons. This article first reviews arguments by those who favor autonomous weapons systems and then by those who oppose them. Next, it discusses challenges to limiting and defining autonomous weapons. Finally, it closes with a policy recommendation.

Arguments in Support of Autonomous Weapons Systems

Support for autonomous weapons systems falls into two general categories. Some members of the defense community advocate autonomous weapons because of military advantages. Other supporters emphasize moral justifications for using them.

Military advantages. Those who call for further development and deployment of autonomous weapons systems generally point to several military advantages. First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. Next, advocates credit autonomous weapons systems with expanding the battlefield, allowing combat to reach into areas that were previously inaccessible.
Finally, autonomous weapons systems can reduce casualties by removing human warfighters from dangerous missions.1

The Department of Defense’s Unmanned Systems Roadmap: 2007-2032 provides additional reasons for pursuing autonomous weapons systems. These include that robots are better suited than humans for “‘dull, dirty, or dangerous’ missions.”2 An example of a dull mission is long-duration sorties. An example of a dirty mission is one that exposes humans to potentially harmful radiological material. An example of a dangerous mission is explosive ordnance disposal. Maj. Jeffrey S. Thurnher, U.S. Army, adds, “[lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed.”3

In addition, the long-term savings that could be achieved through fielding an army of military robots have been highlighted. In a 2013 article published in The Fiscal Times, David Francis cites Department of Defense figures showing that “each soldier in Afghanistan costs the Pentagon roughly $850,000 per year.”4 Some estimate the cost per year to be even higher. Conversely, according to Francis, “the TALON robot—a small rover that can be outfitted with weapons—costs $230,000.”5

May-June 2017, MILITARY REVIEW

According to Defense News, Gen. Robert Cone, former commander of the U.S. Army Training and Doctrine Command, suggested at the 2014 Army Aviation Symposium that by relying more on “support robots,” the Army eventually could reduce the size of a brigade from four thousand to three thousand soldiers without a concomitant reduction in effectiveness.6

Air Force Maj. Jason S. DeSon, writing in the Air Force Law Review, notes the potential advantages of autonomous aerial weapons systems.7 According to DeSon, the physical strain of high-G maneuvers and the intense mental concentration and situational awareness required of fighter pilots make them very prone to fatigue and exhaustion; robot pilots, on the other hand, would not be subject to these physiological and mental constraints. Moreover, fully autonomous planes could be programmed to take genuinely random and unpredictable action that could confuse an opponent. More striking still, Air Force Capt. Michael Byrnes predicts that a single unmanned aerial vehicle with machine-controlled maneuvering and accuracy could, “with a few hundred rounds of ammunition and sufficient fuel reserves,” take out an entire fleet of aircraft, presumably one with human pilots.8

[Image caption: As autonomous weapons systems move from concept to reality, military planners, roboticists, and ethicists debate the advantages, disadvantages, and morality of their use in current and future operating environments. (Image by Peggy Frierson)]

In 2012, a report by the Defense Science Board, in support of the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, identified “six key areas in which advances in autonomy would have significant benefit to [an] unmanned system: perception, planning, learning, human-robot interaction, natural language understanding, and multiagent coordination.”9 Perception, or perceptual processing, refers to sensors and sensing. Sensors include hardware, and sensing includes software.10

Next, according to the Defense Science Board, planning refers to “computing a sequence or partial order of actions that [achieve] a desired state.”11 The process relies on effective processes and “algorithms needed to make decisions about action (provide autonomy) in situations in which humans are not in the environment (e.g., space, the ocean).”12 Then, learning refers to how machines can collect and process large amounts of data into knowledge. The report asserts that research has shown machines process data into knowledge more effectively than people do.13 It gives the example of machine learning for autonomous navigation in land vehicles and robots.14

Human-robot interaction refers to “how people work or play with robots.”15 Robots are quite different from other computers or tools because they are “physically situated agents,” and human users interact with them in distinct ways.16 Research on interaction needs to span a number of domains well beyond engineering, including psychology, cognitive science, and communications, among others.

“Natural language processing concerns systems that can communicate with people using ordinary human languages.”17 Moreover, “natural language is the most normal and intuitive way for humans to instruct autonomous systems; it allows them to provide diverse, high-level goals and strategies rather than detailed teleoperation.”18 Hence, further development of the ability of autonomous weapons systems to respond to commands in a natural language is necessary.

Finally, the Defense Science Board uses the term multiagent coordination for circumstances in which a task is distributed among “multiple robots, software agents, or humans.”19 Tasks could be centrally planned or coordinated through interactions of the agents. This sort of coordination goes beyond mere cooperation because “it assumes that the agents have a cognitive understanding of each other’s capabilities, can monitor progress towards the goal, and engage in more human-like teamwork.”20

Moral justifications. Several military experts and roboticists have argued not only that autonomous weapons systems should be regarded as morally acceptable but also that they would in fact be ethically preferable to human fighters. For example, roboticist Ronald C. Arkin believes autonomous robots in the future will be able to act more “humanely” on the battlefield for a number of reasons, including that they do not need to be programmed with a self-preservation instinct, potentially eliminating the need for a “shoot-first, ask-questions-later” attitude.21 The judgments of autonomous weapons systems will not be clouded by emotions such as fear or hysteria, and the systems will be able to process much more incoming sensory information than humans without discarding or distorting it to fit preconceived notions. Finally, per Arkin, in teams composed of human and robot soldiers, the robots could be more relied upon to report ethical infractions they observed than would a team of humans who might close ranks.22

Lt. Col. Douglas A. Pryer, U.S. Army, asserts there might be ethical advantages to removing humans from high-stress combat zones in favor of robots. He points to neuroscience research that suggests the neural circuits responsible for conscious self-control can shut down when overloaded with stress, leading to sexual assaults and other crimes that soldiers would otherwise be less likely to commit. However, Pryer sets aside the question of whether or not waging war via robots is ethical in the abstract.
Instead, he suggests that because it sparks so much moral outrage among the populations from whom the United States most needs support, robot warfare has serious strategic disadvantages, and it fuels the cycle of perpetual warfare.23

Arguments Opposed to Autonomous Weapons Systems

While some support autonomous weapons systems with moral arguments, others base their opposition on moral grounds. Still others assert that moral arguments against autonomous weapons systems are misguided.

Opposition on moral grounds. In July 2015, an open letter calling for a ban on autonomous weapons was released at an international joint conference on artificial intelligence. The letter warns, “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”24 The letter also notes that AI has the potential to benefit humanity, but that if a military AI arms race ensues, AI’s reputation could be tarnished, and a public backlash might curtail future benefits of AI. The letter has an impressive list of signatories, including Elon Musk (inventor and founder of Tesla), Steve Wozniak (cofounder of Apple), physicist Stephen Hawking (University of Cambridge), and Noam Chomsky (Massachusetts Institute of Technology), among others. Over three thousand AI and robotics researchers have also signed the letter. The open letter simply calls for “a ban on offensive autonomous weapons beyond meaningful human control.”25

We note in passing that it is often unclear whether a weapon is offensive or defensive. Thus, many assume that an effective missile defense shield is strictly defensive, but it can be extremely destabilizing if it allows one nation to launch a nuclear strike against another without fear of retaliation.

In April 2013, the United Nations (UN) special rapporteur on extrajudicial, summary, or arbitrary executions presented a report to the UN Human Rights Council. The report recommended that member states should declare and implement moratoria on the testing, production, transfer, and deployment of lethal autonomous robotics (LARs) until an internationally agreed upon framework for LARs has been established.26

That same year, a group of engineers, AI and robotics experts, and other scientists and researchers from thirty-seven countries issued the “Scientists’ Call to Ban Autonomous Lethal Robots.” The statement notes the lack of scientific evidence that robots could, in the future, have “the functionality required for accurate target identification, situational awareness, or decisions regarding the proportional use of force.”27 Hence, they may cause a high level of collateral damage. The statement ends by insisting that “decisions about the application of violent force must not be delegated to machines.”28

Indeed, the delegation of life-or-death decision making to nonhuman agents is a recurring concern of those who oppose autonomous weapons systems.
The most obvious manifestation of this concern relates to systems that are capable of choosing their own targets. Thus, highly regarded computer scientist Noel Sharkey has called for a ban on “lethal autonomous targeting” because it violates the Principle of Distinction, considered one of the most important rules of armed conflict: autonomous weapons systems will find it very hard to determine who is a civilian and who is a combatant, which is difficult even for humans.29 Allowing AI to make decisions about targeting will most likely result in civilian casualties and unacceptable collateral damage.

Another major concern is the problem of accountability when autonomous weapons systems are deployed. Ethicist Robert Sparrow highlights this ethical issue by noting that a fundamental condition of international humanitarian law, or jus in bello, requires that some person must be held responsible for civilian deaths. Any weapon or other means of war that makes it impossible to identify responsibility for the casualties it causes does not meet the requirements of jus in bello and, therefore, should not be employed in war.30

This issue arises because AI-equipped machines make decisions on their own, so it is difficult to determine whether a flawed decision is due to flaws in the program or in the autonomous deliberations of the AI-equipped (so-called smart) machines. The nature of this problem was highlighted when a driverless car violated the speed limits by moving too slowly on a highway, and it was unclear to whom the ticket should be issued.31 In situations where a human being makes the decision to use force against a target, there is a clear chain of accountability, stretching from whoever actually “pulled the trigger” to the commander who gave the order. In the case of autonomous weapons systems, no such clarity exists. It is unclear who or what is to be blamed or held liable.

Amitai Etzioni is a professor of international relations at The George Washington University.
He served as a senior advisor at the Carter White House and taught at Columbia University, Harvard Business School, and the University of California at Berkeley. A study by Richard Posner ranked him among the top one hundred American intellectuals. His most r