The developments in the field of robotics and their potential to change warfare have kicked off a debate that will likely stay with us for years to come. After decades, the world of science fiction, as described by such eminent writers as Isaac Asimov (“I, Robot”), seems to be becoming a tangible reality. Opponents are now warning about the consequences of deploying robots on the battlefield. The recent report “Losing Humanity” and a soon-to-launch public campaign against “killer robots” are a definitive sign of the topic’s inherent controversy. However, their argumentation needs to be scrutinised more closely, since major premises and statements are misguided and thus likely to obfuscate a proper debate that is vital to a society heading towards a more machine-controlled future.
Why are robots “killer robots”?
The recent report “Losing Humanity”, a joint effort of “Human Rights Watch”, the IHRC and various prominent experts, posits that robots are not fit for war. The basic argument is as follows: war is incredibly complex and requires immediate as well as nuanced judgement on the ground; robots are not capable of such nuanced judgements, potentially violate international law and therefore need to be banned from the battlefield.
Which principles are autonomous machines violating? Referring to Article 36 of Additional Protocol I to the Geneva Conventions, the authors note several principles that robots (“new weapons”) would not fulfil. Firstly, they cannot sufficiently differentiate between combatants and civilians: a child playing with a plastic gun, for instance, might be mistaken for a soldier. Secondly, they would not be able to guarantee proportionality, since “robots cannot be programmed to duplicate the psychological processes in human judgement that are necessary to assess proportionality”. Finally, robots could become a “military necessity”, which magnifies their potentially detrimental effects. Because robots are not capable of human emotions, they lack the capacity for empathy or mercy. War would be transformed into a rational act of machines fulfilling a predefined duty, with the potential for failure and collateral damage.
The rationale described above is compelling and worrying, and at first glance a ban seems to be the right thing to do. Who would want robots spiralling out of control, or the dictate of efficiency taking over warfare? However, we have to look closer, because the individual arguments are not always sound. There is a risk of front-loading a crucial debate with prejudice and fear.
How robots are condemned in advance
Although the initiators of the debate are experts in their respective fields, e.g. law or Artificial Intelligence (AI), the supposedly rational arguments rest on questionable premises. The criticism related to proportionality assumes that robots cannot make decisions that require a real-time weighing of a multitude of variables. When defining what proportionality means, the authors resort to the very broad statement of a “subjective determination that will be resolved on a case-by-case basis” (US Air Force). Since robots do not have the human quality of judging a situation within its context, they automatically fail the test. This, however, is a circular argument: because robots do not have human emotions and cannot make subjective decisions, they are incapable of adhering to the principle of proportionality. Why? Because they are not human! To assume that robots will have to mirror the capacities of the human species in order to be qualified to make the “right” decision is an arrogant and misleading position, because it renders every counter-argument invalid by definition. As such, this position does not lead us anywhere and makes it very easy for critics to refute robotic deployment since, well, robots are not human. It would be more fruitful to ask which qualities are actually needed to guarantee proportionality – both for humans and for robots – rather than simply asserting that robots cannot meet the standard because only humans can.
Emotions and self-serving arguments play on our fear
There are two important aspects that can be derived from the logic and arguments used by the opponents of “killer robots”. Firstly, the basic argument is still rooted in prejudice, conservatism and fear. Robots (and algorithms) are about to take over several aspects of human life, and that somehow makes us feel uncomfortable. However, discomfort and uncertainty do not end the debate, and banning robots from the battlefield will not resolve our issues. Insisting on the eternal flaws of robotics, machine learning or AI, and arguing that robots will never be sophisticated enough, is merely a deterministic, narrow-minded and self-serving point. Instead, and regardless of the exact developments in robotics, we need to face the more important underlying questions. How do we decide which level of algorithmic sophistication would be sufficient? Do we need to adjust international treaties and the definitions of certain aspects of warfare? And, more philosophically, how do humans ultimately relate to their machine creations? These questions lie at the bottom of this debate, and a simple ban will not help answer them.
Which standard do we use for robots?
Secondly, there is a huge double standard when it comes to judging robots and their suitability for warfare, because they are evaluated differently than humans. Whereas we seem comfortable with human error, we are not comfortable with machines failing to make the right decision. Large parts of the logic in the report “Losing Humanity” are based on that double standard. Opponents simply say that since there is no guarantee robots will make the right decision in every case, they should not make decisions at all. This is a problematic assumption because it neglects the huge margin for human error. To be fair and sound, a debate about the suitability of robots should be based on a proper comparison between humans and machines rather than on a utopian ideal.
To be more concrete: there will be no future in which robots make the “right” decision all the time. But there is no present in which humans make the right decision every time either. Thus, we need to compare the potential for robotic failure with human failure, because otherwise the argument has no empirical basis. At the moment, we do not know what the track record of robots would look like, but we do know that the track record of human soldiers is anything but glorious. To quickly illustrate that point, a search for “atrocities of soldiers in Iraq” yields 45 million results on Google.
It is precisely because of human emotions and the extreme situations in war that humans make problematic or wrong decisions. Yes, there is mercy on one side of the emotional spectrum, but there is rage and vindictiveness on the other. Yes, humans are able to carefully analyse a complex battle situation, but there is also fear, distraction or exhaustion that might lead us to make the wrong choice. Yes, there might be pride and glory in fighting for a good cause, but there is also the reality of Post-traumatic Stress Disorder (PTSD) and other damage to the human body and mind. In short, the hailed characteristics of “human warfare” are at least evenly matched by its dark side.
Starting a proper debate
Condemning robots without having assessed their potential value, and basing the discussion solely on utopian ideals that human soldiers could not uphold themselves, is both misguided and insincere. Building a case to ban autonomous machines in advance certainly does not solve the problem. Not just because it would be a tough battle in practical terms that runs against the rationale of a billion-dollar industry, but also because it obfuscates the more important debate we should be starting instead.
We need to approach the tough issues of this near future. Some of them overlap with the broader set of questions raised by the massive expansion of data and machine learning in general. For instance, which decisions in social and political life should be complemented or substituted by algorithms? How do we weigh the benefits of emotions and subjectivity against the rational dictate of efficiency? Or, what is actually “human” in light of these developments, and how do we define ourselves in comparison to “intelligent” machines doing our work?
There are also more specific questions when it comes to “killer robots”. What metrics can we use to determine whether robots are “fit” for battle? Or, more generally, does the deployment of robots increase the likelihood of war, as some opponents of “killer robots” have posited? Indeed, this is an important point. It could be that a decrease in human deaths in warfare will create an atmosphere in which political leaders are inclined to declare war more often. After all, they would not have to mourn and watch coffins coming home. On the other hand, the actual costs of war might increase, and the public could pay closer attention to where we are spending billions of our budget. Thus, the reduction of human loss might, curiously, lead to better public scrutiny.
This is mere speculation, but it is worth engaging in the debate instead of insisting on the superiority of humans confronted with their own creations. The hubris of humanity has rarely been a good guide for public discourse. We need to take responsibility for shaping an algorithmically enhanced future: not by invoking fear and creating circular arguments, but by looking into the details, questioning our assumptions and (re)designing the principles we share collectively.