Interview with Robert Sparrow on Autonomous Weapon Systems and Respect in Warfare

26 Aug 2016

In this transcript, Robert Sparrow focuses on the ethical and security issues surrounding “killer robots” – i.e., weapon systems that independently determine who should live or die. He ultimately concludes that such weapons should be banned because they are “mala in se” (evils in themselves). In other words, the ethical foundations for such systems just don’t exist.

This interview transcript was originally published by the Carnegie Council for Ethics in International Affairs on 25 July 2016.

ADAM READ-BROWN: Hello, and welcome to another episode in our Ethics & International Affairs Interview Series, sponsored by the Carnegie Council.

My name is Adam Read-Brown and I'm the assistant editor of Ethics & International Affairs, the Council's quarterly peer-reviewed journal, which is now in its 30th year and is published by Cambridge University Press.

With me today is Professor Robert Sparrow, author of the article "Robots and Respect: Assessing the Case Against Autonomous Weapon Systems," which appears in the Spring 2016 issue of the journal, published earlier this year.

Professor Sparrow, welcome. It's good to have you with us.

ROBERT SPARROW: I'm pleased to be talking with you.

ADAM READ-BROWN: Speaking with me from Australia, Robert Sparrow is a professor in the philosophy program, a chief investigator in the external pageAustralian Research Council Centre of Excellence for Electromaterials Science, and an adjunct professor in the external pageCenter for Human Bioethics at Monash University, where he works on ethical issues raised by new technologies. He is the author of some 75 refereed papers and book chapters on topics as diverse as the ethics of military robotics, aged care robotics, just war theory, human enhancement, pre-implantation genetic diagnosis, and nanotechnology. He is the co-chair of the external pageIEEE Technical Committee on Robot Ethics and was one of the founding members of the external pageInternational Committee for Robot Arms Control.

With that introduction, let's begin our discussion.

To start, Professor Sparrow, the subheading of your article in Ethics & International Affairs is "Assessing the Case Against Autonomous Weapon Systems." This term, "autonomous weapon systems" commonly abbreviated as AWS, might not be familiar to all of our listeners. So could you briefly describe what we are referring to when we use this term? What are the characteristics and capabilities of these weapons?

ROBERT SPARROW: Colloquially, we're talking "killer robots." Here it's important to distinguish between the remote-controlled weapon systems, like the Predator and Reaper drones that the United States deploys around the world, and systems where an onboard computer is choosing the targets for the system.

Now, there is controversy in the literature about what we mean by "choosing the targets" here. But the basic idea is that, in some sense, it's the weapon system itself that is determining who should live or die. While of course it's possible that some systems might carry out large parts of their operations autonomously, myself and other people writing in this area are particularly interested in autonomous targeting and autonomous targeting using lethal force. So essentially, machines that decide who to kill.

ADAM READ-BROWN: To be clear, though, the technology and the weapons that you are describing, machines deciding who to kill—are these weapons currently in existence? What is the actual state of robot weaponry today?

ROBERT SPARROW: There is a general feature of argument about new technology where people are tempted to simultaneously describe a technology as radically new and raising all these issues—that's the sort of thing you say when you want people to pay attention or you want funding—but then, in other circumstances, you might say, "Look, there's nothing new to see here. We've been doing this for a long time. Everybody can relax."

Inevitably, this debate features both of those movements. So people who want to be deflationary about autonomous weapon systems might point to a modern anti-tank mine and say, "Here's a weapon that is not remote-controlled, is not active all the time, and decides when to explode." Or you could point to an anti-submarine weapon in the tradition of naval mines, called the CAPTOR system. This was a United States weapon system which was essentially a tethered mine with a sensor package that could launch a torpedo when it detected a submarine in the area. Now, if you want to think of those weapons as choosing who to kill, then autonomous weapon systems have in fact been around for a long time.

If you think that the required standard of choice involves something more than being merely automatic or following very narrow procedures for determining when to launch the weapon, then you might see autonomous weapon systems as on the horizon. And there are things like cruise missiles, which already have a limited capacity to determine which of a list of targets they will strike.

And then, of course, there's an enormous amount of interest in connecting the sorts of data-mining and pattern-recognition algorithms that are widely used now across a range of military applications to the sensor systems and the targeting systems of weapons to give them this kind of capacity to choose from a range of targets.

ADAM READ-BROWN: This topic of banning or allowing AWS can be a contentious one, as your article makes clear.

How did you come to find yourself focusing your research in this area of study, and particularly on the ethical dimensions of the technology?

ROBERT SPARROW: I was originally interested in robots and computers because they are interesting examples to use to investigate some traditional philosophical arguments, particularly around the moral status of our fellow creatures. If you think of something like a robot dog and you think of someone who is kicking a robot dog, you can think about the concept of virtue and the concept of a virtuous agent without having to deal with intuitions about the pain and suffering of the robot. So you can sort of investigate the contribution that the human end of our relations with other creatures makes.

I was originally writing about the moral status of hypothetical future artificial intelligences and the role played by the form of their embodiment in establishing their moral status. Then, I was interested in the ethics of what's called "ersatz companionship." So robots are being designed for the aged-care context as companions for lonely older citizens. I was interested in how much of a contribution that would make to human well-being.

But in the course of doing that research, I realized just how much robotics research was actually being funded by the military. Essentially, the vast majority of cutting-edge robotics research is funded by military programs.

I remember that a famous, essentially puff, piece about an early guided weapon system was titled "This bomb can think before it acts." [Air Armament Center Public Affairs Report (2000) 'This bomb can think before it acts,' Leading Edge Magazine, 42, 2, Feb., p. 12.] I remember being struck by that and thinking, "Well, in what sense is this true? Could it be true?"

So I wrote a paper about autonomous weapon systems arguing about who might be held responsible when they commit a war crime, and whether you'd hold the weapon itself, or the programmer, or a commanding officer responsible when a hypothetical autonomous weapon system deliberately attacks a refugee camp, for instance. That paper turned out to be quite influential and the beginning of a debate about the ethics of autonomous weapon systems.

Since then, I've been more interested in weapons that are closer to application. So I've been writing about drones and I've been writing about autonomous submersibles, where I've been less focused on the possibility that these might be artificial intelligences and more on something that is nearer term, where they're either remote-controlled or they're autonomous in some sense without us wanting to describe them as potentially full moral agents.

ADAM READ-BROWN: I'd like to turn now to those ethical issues that you alluded to and to your research on them, specifically some of the issues you bring up in the article you wrote for Ethics & International Affairs. If you would, I'd like to have you start by laying out some of the basics of your argument surrounding robots and respect.

You argue, in part, in favor of banning AWS on the grounds that they are mala in se. To start off, what is mala in se?

ROBERT SPARROW: This is a category of weapons that is hypothesized or proposed or identified within the just war tradition as being "evils in themselves." They are weapons that should never be used in war.

Now, it quickly becomes quite hard to explain what unites this category. But throughout the history of warfare, there has been a strong intuition that some weapons are simply wrong, that we shouldn't use these even if they have military advantages, perhaps we shouldn't even use them if victory is at stake, because to use these kinds of weapons is to violate a profound moral imperative. The traditional examples are things like poisonous gases, fragmenting bullets, rape as a weapon of war. There's a significant portion of the international community that thinks nuclear weapons are mala in se and that it would never be ethical to use them.

Understood in its strongest formulation, the category is a function of the weapons themselves, or it captures something about the weapons themselves, rather than the way in which they're used. So a weapon that is evil in itself isn't capable of being used ethically. It's not the case that we can rescue it ethically just by making sure we aim it at the right people. In theory, this category consists of weapons that it would never be ethical to use.

ADAM READ-BROWN: So from your perspective, and as you argue it, this category includes AWS. Why do you believe that this technology falls into that category, and how does that connect to the concept in your title of robots and respect?

ROBERT SPARROW: The debate about autonomous weapon systems has for the last few years been dominated by the thought that these systems couldn't do distinction or they couldn't make a proportionality calculation. These are other requirements of the just war tradition within the area of what's called jus in bello, justice, or the ethics of the means of warfare.

Traditionally, combatants are required to distinguish between combatants and noncombatants, between legitimate and illegitimate targets, and not deliberately or directly target noncombatants. They are also supposed to pay attention to the consequences of the use of armed force and make sure that the evils that flow from a particular attack don't outweigh the benefits being sought. The benefits being sought here refer most obviously to a particular military advantage.

Critics of autonomous weapon systems have said these systems will never be able to do that, basically won't be able to tell the difference between civilians and insurgents, or they'll never be capable of the sort of robust moral and prudential calculations required to make the proportionality judgment. I think those arguments are likely to falter in particular applications, such as anti-submarine warfare, where it's more plausible that a machine could tell the difference between a military and a civilian submarine, and where the proportionality calculation might be much simpler.

So my project was to set out to say, "If we still have the intuition that there's something problematic with these systems, what might sustain that; what could we say about the system, the sorts of weapons, their very nature, that explains a very widespread intuition that this is a horrific future where weapons with computers are deciding who should live or die?" It seems to me that it's something about the relationship that the systems establish between the people who are killing and the people being killed.

Now, in my previous work I was interested in the case where you might want to say the machine itself is doing the killing, where you do hypothesize a sort of machine agency that interrupts the attribution of responsibility for death to the person who originally authorized the use of the autonomous weapon system.

In this paper I'm interested in the case where you are still willing to say that the commander killed this group of people with a weapon. But I think that when that weapon has a sufficient level of autonomy, our intuition becomes that that person is not in the right moral relation to the people being killed. Here I was drawing on some arguments in the work of Thomas Nagel, where he famously tries to provide a nonconsequentialist account of just war theory as founded in a principle of respect for persons, a sort of Kantian philosophy. So that was my touchstone notion.

What do we owe persons in wartime? We owe them a particular relationship of respect. I believe that autonomous weapon systems may threaten that relationship. At one end, there may be no relationship at all; but closer to the real-world weapon systems, you might say, "Look, there is a relationship between the killer and the killed, but it's a relationship of disrespect rather than respect."

ADAM READ-BROWN: In your answer just now, you alluded to the fact that this is certainly a debate with two sides and there are people, policymakers and scholars, who are arguing in favor of using AWS. What are some of these arguments—you were alluding to some of the consequentialist arguments and others that are out there—in favor of using this technology?

ROBERT SPARROW: There are at least two different sorts of arguments that people are making in favor of autonomous weapon systems.

There's a military/strategic argument that draws on the idea that it's a good thing to preserve the lives of our war fighters. On that basis, militaries have embraced remote-controlled or tele-operated weaponry as a way of having the capacity to use lethal force without putting the lives of friendly war fighters at risk. But it's very foreseeable that the communication systems required by those systems will be interrupted in certain forms of warfare. So in war with a high-technology adversary, you would expect military communication systems to be jammed or perhaps directly attacked. In other forms of warfare, like war under the sea, it's very hard to maintain real-time communications of the sort that you might need to receive telemetry from the weapon system in order to enable a human being to directly oversee targeting decisions.

So a significant set of the arguments in favor of autonomous weapon systems are coming out of the military in terms of their military utility. There's a sense that, of course, if you are fighting a just war, then military utility has an ethical valence, as indeed does the idea that we shouldn't be placing the lives of our war fighters at risk unnecessarily when a technology might help to protect them.

Now, beyond that, there are a group of authors who are making a more direct and open appeal to ethics to advocate in favor of autonomous weapon systems. In particular, they hypothesize that these systems might one day be more capable of doing distinction, so, more capable of telling the difference between a civilian and a military target. One circumstance in which that seems quite plausible to me is modern warfare involving warships, where you're worrying about whether a particular radar signal is coming from a fighter plane or perhaps a civilian aircraft or a missile, and where the speed of combat, the tempo of combat, has become so high that unless one makes a split-second decision you lose the engagement. So you might imagine that machines will be better at making that calculation than human beings.

More controversially, you might think that because a robot isn't in fear for its life, because hopefully it's not racist, because it's not tired or seeking revenge, it would avoid a whole lot of the motivations and circumstances that mean human beings are not actually that good in practice at distinguishing between legitimate and illegitimate targets. So some authors, in particular Ron Arkin at Georgia Tech, have suggested our robots might in the future be better than human beings at doing distinction. I think it's worth highlighting that that's a very futuristic scenario, at least when it comes to the sort of counterinsurgency campaigns that the United States in particular is involved in around the world today. It might be more plausible in relation to high-speed air combat, as I suggested.

But I do think we should be cautious about essentially buying into a particular category of weapons on their future promise. Having said that, it does seem to me conceivable that in some areas of combat the argument that these machines will never be able to do distinction is too swift.

ADAM READ-BROWN: Okay, so I hear you saying that some of the technological advancements touted by proponents of AWS may simply never come to pass—and this is one reason that we should be skeptical of these arguments. Leaving that aside, for a moment, some of those arguments in favor of AWS do certainly sound compelling. The idea that the military could suffer fewer casualties in war, or that wars would have fewer collateral civilian deaths. That certainly sounds nice. How does your respect-based argument address these advocates of AWS?

ROBERT SPARROW: I think it's important to separate out two different lines of arguments in your remark there. There's "we can save the lives of our own war fighters" and it certainly seems plausible to me that autonomous weapon systems could do that. Of course, so could just heavy artillery shelling from a long distance. And of course, at some point we actually expect our war fighters to take on certain risks for the sake of the principle of distinction and the moral weight of the lives of noncombatants.

The more interesting argument is, what if we could save more noncombatant lives using these autonomous weapon systems; wouldn't that establish a strong case for their use?

Now, it's a general feature of argument about just war theory that consequentialism and deontological or Kantian or respect-based accounts are in real tension. Indeed, it's a sort of well-known problem with consequentialism. If you allow your consequentialist opponent in an argument the hypothetical premise "what if we could achieve all these great things without any of the bad things?" and you accept consequentialism, then you will accept any conclusion. So the combination of consequentialism plus science fiction quickly leads you to anywhere you want to go. Someone who's happy to sort of drink the science fiction Kool-Aid—they will draw any conclusion at all from that.

So I think we need to hold on to the nonconsequentialist intuitions that are in the just war tradition. We need to uphold the idea that some things may not be done even if they would shorten the course of a war. Firebombing the enemy's kindergarten so that the enemy was so terrified and dispirited that they surrendered might save a whole lot of lives in the consequentialist calculation. But when someone suggests firebombing an enemy's kindergarten, we should say, "No, that's not something that you can do," even if it were true that this would save the lives of our war fighters, and maybe even save some of the lives of enemy civilians who would otherwise have been killed in the course of the military campaign that would follow.

That nonconsequentialist intuition about the nature of what it is that we do when we deliberately set out to kill noncombatants or when we choose a particularly horrific means of warfare, that's what I think we need to be investigating in the debate about autonomous weapon systems. Otherwise the conclusion that these systems will be good just follows too quickly from the consequentialism and the hypothesis that they will be better than human beings.

ADAM READ-BROWN: I'd like to pivot for a moment and pause our discussion about the future of warfare and look back for a moment at a little bit of history, which you alluded to earlier in our discussion. It's not difficult to point to historical examples—you've mentioned a few of them—of various new types of weaponry in the past that were initially viewed as beyond the pale or inhumane but which eventually became normalized—the machine gun, long-range missiles.

To give just one other example, I was doing a little research and came across an article from 1922 in a publication called The North American Review. The article was about the debate over whether submarines should be banned outright in warfare. Clearly, we know how that debate ended.

Is there a certain feeling of wading against a current of inevitability for someone like you and your colleagues who are wrestling with a new technology such as this and advocating for it to be banned?

ROBERT SPARROW: I don't think there's anything inevitable about the way in which the debates go about new means of killing people. I mean some means of warfare did become normalized. In the case of submarine warfare, which was widely believed to be profoundly unethical at one historical moment, as you say, we know how that ended. But equally well there are other cases where prohibitions on use of poisonous gases or fragmenting munitions, where public outrage and concern was actually instrumental in driving a reasonably robust consensus that these weapons are evils in themselves and should be prohibited.

So it does seem to me that there are moments when new technologies are up for grabs. It's actually a feature of my argument that the notion of respect that I'm drawing on is a product of convention in a certain way. It's a product of meaning and of our lifeworld, where of course a relevant question here is, "What does it say about someone that they are willing to do this, or what does it say about our attitudes towards the enemy that we send robots to kill them?"

Now, insofar as those are questions about meaning, they refer to social conventions, and meanings can change. So I think it is certainly possible that in the future we lose the intuition that there's anything particularly disrespectful about sending a robot to kill someone. Equally well, we might solidify that intuition and put it at the heart of our self-understanding and the understanding of what it means to be a good warrior or an ethical soldier: that you don't use that kind of weapon. Just as you don't rape your enemy as a punishment, and you don't rape the people that you capture in order to terrify other soldiers, so too you don't send robots to kill people.

So it seems to me that we are at a decision point here. Now, I do think that there is significant military utility to these weapon systems, in the same way that there was significant military utility to submarines. In the face of that military utility, it's understandable that people in the military desire these weapon systems and, in particular, that they don't want to be in a position where other people have access to them while they don't.

So if we want to resist a future in which autonomous weapon systems become one of the sort of main means of war fighting, I think that we need to try to solidify a certain set of our social understandings now.

ADAM READ-BROWN: You mentioned public outrage when you were talking about some of these other technologies. Do you see that as the key ingredient, that the public somehow has to be understanding these new technologies, specifically AWS, with a sense of outrage? Is that really the key to getting some kind of consensus on this?

ROBERT SPARROW: Outrage I think plays two roles here. Practically, politically, it was, for instance, the sort of horror of the external pageuse of gas in the trench warfare in external pageWorld War I and public outrage at what was done to their brothers and their fathers and their sons in that context that made it possible to put in place conventions that chemical weapons were beyond the pale. And we certainly need that.

If one wants to imagine a prohibition on AWS, it's not going to come from within the armed services, and it's unlikely to come from philosophers in one sense; it will be public distrust and distaste and despair at a future in which robots are hunting and killing people on the battlefield that drive the international community to confront these issues.

I think that outrage also plays a philosophical role, or we should understand outrage as being philosophically significant, as one of the indicators that there is something problematic about this relationship. When someone kills someone else with a robot, that's not business as usual; it's not like the use of a cruise missile. The public discomfort is an indication that there's more to that relationship than is acknowledged by someone who is deeply involved in the technical design of weaponry and thinks that a cruise missile is just a not terribly sophisticated robot.

I do think we need to be paying attention to outrage in both of these contexts.

ADAM READ-BROWN: Is it difficult, or do you think it's a challenge, for critics of this technology in this case to cultivate that sense of outrage, since the technology remains somewhat hypothetical; whereas in the case, as you noted, with chemical weapons, say, that outrage came from having seen the effects of the technology, and here we still have things that are in development? Does that pose a different challenge?

ROBERT SPARROW: We are moving into a world where robots are killing people. I think there was significant public discomfort with the use of an explosive charge on the end of a bomb disposal robot to kill someone in the police context just a couple of weeks ago, and people were really discomfited by the idea that the police were going to be using robots to kill people.

So it's not that this intuition relates only to hypothetical circumstances. I mean, granted, that wasn't an autonomous weapon system, but the intuitions that interest me are intuitions that relate to the robot understood as a means of killing. We are starting to see people using robots, what the public identifies as robots, to kill other people, and then you get this kind of public response that I think is morally significant, or potentially morally significant.

I wouldn't like, in my role as a philosopher, to be advocating the mobilization of outrage. I think politically one might rely upon and be comforted by the fact that killer robots are a hard sell. But I'm not sure that the role of a philosopher is to be drumming up outrage as part of what we refer to and respond to when we think about the nature of these new means of killing.

ADAM READ-BROWN: Stepping back from the arena of warfare, there are also of course debates raging around the issues surrounding commercial uses of autonomous automated technology, whether it's driverless cars or drone delivery systems. Do you worry that these technologies, as they become integrated into everyday life, may eventually normalize the idea of things like AWS without ever explicitly doing so?

ROBERT SPARROW: I do think there's that danger. Maybe, as we come to see robots in our daily lives, we may actually need to insist upon the value of human life and human relationships. Indeed, every time—there was this case where someone was external pagekilled using their Tesla car in its driver-assist mode, using it as an autonomous vehicle, and they were killed, that has generated an enormous amount of concern. That suggests that we are really worried about robots killing us and we want them to be safe, we want them to respect us.

So perhaps, instead, a consequence of becoming more familiar with robots is that we actually strengthen the intuition that they should never be allowed to kill people. So I guess we'll see how that goes.

But I think you're right, there is that danger that as we become more familiar with robots, we'll start to see them as just part of the background of our lives and see nothing problematic when they're used to kill people. But equally well we might end up with a much stronger intuition that taking life is not the kind of decision that should be made by a robot.

ADAM READ-BROWN: At this point in our discussion, I'm sorry to say that our time is up and we do need to stop here.

Once again, I'm Adam Read-Brown and I've been speaking with Professor Robert Sparrow, whose article "Robots and Respect: Assessing the Case Against Autonomous Weapon Systems" appears in the Spring 2016 issue of Ethics & International Affairs. That article, as well as much more, is available online at www.eiajournal.org. We also invite you to follow us on Twitter, @EIAJournal.

Thank you for joining us, and thank you, Professor Sparrow, for this wonderful discussion. It's been a pleasure.

