Killer Robots: The Future of War?

21 Mar 2017

Johannes Lang argues that we should reject claims that lethal autonomous weapons will make war more discriminate, more controllable, and less risky. In fact, given the dangers these weapons pose, governments should work to impose an international ban or moratorium on their development and use.

This article was originally published by the Danish Institute for International Studies (DIIS) on 16 March 2017.

The prospect of lethal autonomous weapons, or “killer robots,” looms on the horizon. The full consequences of delegating lethal decisions to machines are unknown, but the dangers are evident. Governments should support international efforts to impose a ban or a moratorium on the development and use of such weapons.

Recommendations

  • Despite arguments to the contrary, we should be wary of the claim that lethal autonomous weapons will make war more discriminate, more controllable, and less risky.
  • The Danish government should support international efforts to introduce a ban, a moratorium, or similarly strict regulations on the development and use of lethal autonomous weapons, confronting the legal and practical challenges this involves.
  • The Danish government should encourage the tech industry to review and put in place strict ethical regulations on research and development relating to lethal autonomous weapons.

Robots have already taken over many of the dull, dirty, and dangerous tasks of war. These intelligent machines, able to perceive their environment and act purposefully within it, have become welcome aids in clearing land and sea mines, in battlefield rescue, and casualty extraction. Armed robots guard border regions of Israel, as well as the demilitarized zone between North and South Korea. Britain’s “fire and forget” Brimstone missiles guide themselves to their targets once launched. Armed drones hover over the earth.

For the moment, humans remain “in the loop” (with sole authority to decide when to use the weapons) and “on the loop” (with authority to call in or call off the robots). However, investments in autonomous weapons rank high on defense agendas, with the US Department of Defense allotting 12.5 billion dollars in 2017 for work in electronic warfare, big data, robotics, autonomy, and disruptive technologies. The tough question facing policymakers today is what steps the international community should take to regulate the development of fully autonomous weapons, where humans are entirely out of the loop.

Autonomous technologies of war

The main motivation behind the development of autonomous weapons has to do with the distribution of military force. The hope is that such weapons will allow commanders to deploy their personnel and firepower in new and stunningly effective ways. The US Department of Defense, for example, is currently developing swarm technologies for land, sea, and air. In one scenario, a small number of pilots would oversee a large fleet of lethal autonomous aircraft. In another scenario, autonomous robots would accompany American troops into battle. These robots would be able to identify the source of hostile fire and retaliate immediately, without human authorization.

A second argument for autonomous weapons has to do with the speed and complexity of modern warfare. War is a race against time: whoever can think faster, decide faster, and initiate military operations faster than the enemy usually wins. Technologically advanced militaries increasingly rely on computers to rapidly compile and analyze massive amounts of information, which then serve as the basis for strategic decisions.

Militaries are careful to insist that the decision to use force remains safely in the hands of human beings. US Deputy Defense Secretary Robert Work has been adamant that the US military “will not delegate lethal authority to a machine to make a decision,” with one crucial exception: when things “go faster than human reaction time, like cyber or electronic warfare.” The reality, however, is that most aspects of contemporary warfare are speeding up and reliant on electronics. Increasingly, humans have only a supervisory role and the power to veto the machines’ recommendation to use force. As the complexity, speed, and sophistication of these systems increase, human supervisors could well become more likely to trust the systems and less likely to veto their recommendations. At that point, the authority to make life-and-death decisions will, in effect, have shifted to the weapons themselves.

The changing character of war

Automated weapons are not merely new tools of war; they also change the very conditions of war itself. Innovations in robotics and artificial intelligence open up new possibilities, which will to some extent dictate the goals and strategies of future military operations. The dispersion of military power, made possible by autonomous technology, is already transforming military thinking. War is becoming less like a traditional conflict between clearly defined centers of power and more like a global network of diffuse battlefields and highly mobile, dispersed firepower, further eroding the conventional distinction between “home front” and “battlefront.” The new swarm technology will contribute to this development, with small, fully autonomous drones dropping out of a “mothership” and returning hours later. Such technology promises to enhance military intelligence capacities, but once it exists, there is nothing to stop the military from arming the drones. Imagine a swarm of drones, equipped with biometric data and orders to find and kill specific individuals, groups of individuals, or everyone in a designated area. Swarm technology, promoted by the industry as relatively inexpensive, could also fall into the hands of non-state actors.

Should we fear autonomous weapons?

The prospect of fully autonomous weapons has become a source of international concern. The most vocal critic has been the “Campaign to Stop Killer Robots,” spearheaded by national and international NGOs, including Human Rights Watch. A prominent member of the Campaign, computer scientist Noel Sharkey, argues that “killer robots,” by their very nature, violate the ethics and laws of war. Robots, he claims, cannot discriminate between combatants and civilians, because we cannot program a computer with a specification of what a civilian is. Sharkey also insists that there is no way for a robot to make the proportionality judgments required by International Humanitarian Law. In his view, it requires a specifically human form of judgment to decide whether a certain number of civilian casualties and damage to property is proportional to the military advantage gained. Such debates have a philosophical dimension: robots cannot die, and so cannot understand the existential gravity of the decision to kill. Sharkey also notes that we cannot hold robots accountable for their actions. Who, then, do we hold to account? The human commander? If the robot malfunctions or makes a terrible decision, who is to blame? The programmer? The manufacturer? The policymakers?

On the other side of this debate are those, like roboticist Ronald Arkin, who argue that robots will make war less destructive, less risky, and more discriminate. Human perception and judgment are inherently limited and biased, and war is far too complex for any human mind to grasp. Some of the worst atrocities in war, Arkin claims, are due to human weakness. Emotions like fear, anger, and hatred, or mere exhaustion, can easily cloud a soldier’s judgment on the battlefield. From this point of view, the objection that robots will never be able to think and act like humans is anthropocentric and misses the point. The question for advocates of lethal autonomous weapons is not whether the technology can mimic human psychology, but whether we can design, program, and deploy robots to perform ethically as well as, or better than, humans do under similar circumstances.

The Phalanx Close-In Weapon System (CIWS) provides ships with a defense against missiles that have penetrated other fleet defenses. Using its computer-controlled radar system, it can automatically search for, detect, track, engage, and confirm kills. The first ship to be fully fitted out was the aircraft carrier USS Coral Sea in 1980. © US Navy

The debates about autonomous weapons have made questions about artificial intelligence and lethal machine autonomy the center of concern. How intelligent and autonomous could or should these weapons become? Will humans be in, on, or out of “the loop”? Framing the debate in these terms can conceal more than it reveals. For one thing, proponents of autonomous weapons are not openly suggesting that we build human-like killer robots, nor that we take humans “out of the loop.” Discussions about artificially intelligent killer robots in the future also divert attention from the extent to which our dependence on technology has in practice already pushed humans out of the loop and blurred the distinctions between automatic, semi-autonomous, and autonomous weapons.

Finally, the focus on lethal machine autonomy obscures how autonomous technology concentrates immense firepower in the hands of a few human beings. The crucial issue is not lethal machine autonomy as such, but the capacity of humans to exert meaningful autonomy in the lethal human-machine interactions that will define future wars. Lethal autonomous weapons will greatly expand the potential scope of violence at the very moment when the complexity and speed of war have moved beyond the human ability to follow. This growing gap between the immense human capacity for violence and a limited capacity for judgment is perhaps the most dangerous implication of such technology.

A need for UN regulations

The Campaign to Stop Killer Robots is making an important moral argument, warning us against the possible dangers of a new and emerging military technology. “Killer robots” have the potential to increase the loss of civilian lives in war and to render the laws of war irrelevant. Such weapons will also fundamentally alter the human-machine relation, shifting ever more decision-making authority to the machines, while at the same time concentrating firepower in a small number of human hands.

The preemptive ban the Campaign is calling for is unlikely to prevent the development of this technology entirely, and we know from experience that once a technology exists, people will often use and abuse it. Yet a ban, moratorium, or other strict regulations could slow these developments, generate a sense of urgency, and stigmatize such technology. International prohibitions carry a normative force that affects the behavior of nations, even if they refuse to join a ban treaty. The UN will launch negotiations in 2017 on a treaty prohibiting nuclear weapons, paving the way for similar discussions on lethal autonomous weapons. We urge the Danish government to give priority to the upcoming discussions at the UN and to consider seriously a ban or other strict regulations on the development and use of lethal autonomous weapons.

About the Authors

Johannes Lang and Robin May Schott are Senior Researchers at the Danish Institute for International Studies (DIIS).

