“Killer robots”? Why an alarmist debate is problematic!

25 Mar
Scene from Terminator 3

The developments in the field of robotics and their potential to change warfare have kicked off a debate that will likely stay with us for years to come. After decades, the world of science fiction, as described by such eminent writers as Isaac Asimov (“I, Robot”), seems to be becoming a tangible reality. Opponents are now warning about the consequences of deploying robots on battlefields. The recent report “Losing Humanity” and a public campaign against “killer robots” that is about to launch are a definitive sign of the topic’s inherent controversy. However, their argumentation needs to be scrutinised closely, since major premises and statements are misguided and thus likely to obfuscate a proper debate that is vital to a society heading towards a more machine-controlled future.

Why are robots “killer robots”?

The recent report “Losing Humanity”, a joint effort of Human Rights Watch, the IHRC and various prominent experts, posits that robots are not fit for war. The basic argument runs as follows: war is incredibly complex and requires immediate as well as nuanced judgement on the ground; robots are not capable of such nuanced judgements, would potentially violate international law, and therefore need to be banned from the battlefield.

Which principles would autonomous machines violate? Referring to Article 36 of Additional Protocol I to the Geneva Conventions, the authors identify several principles that robots (“new weapons”) would fail to fulfil. Firstly, robots cannot sufficiently differentiate between combatants and civilians; a child playing with a plastic gun, for example, might be mistaken for a soldier. Secondly, they would not be able to guarantee proportionality, since “robots cannot be programmed to duplicate the psychological processes in human judgement that are necessary to assess proportionality”. Finally, robots could become a “military necessity”, which would magnify their potentially detrimental effects. Because robots are not capable of human emotions, they lack the capacity for empathy or mercy. War would be transformed into a rational act of machines fulfilling a predefined duty, with the attendant potential for failure and collateral damage.

The rationale described above is compelling and worrying, and at first glance a ban seems to be the right thing to do. Who would want robots spiralling out of control, or the dictate of efficiency taking over warfare? However, we have to look closer, because the individual arguments are not always sound. There is a risk of front-loading a crucial debate with prejudice and fear.

How robots are condemned in advance

Although the initiators of the debate are experts in their respective fields, e.g. law or Artificial Intelligence (AI), the supposedly rational arguments are based on questionable premises. The criticism related to proportionality assumes that robots cannot make decisions that require a real-time weighing of a multitude of variables. When defining what proportionality means, the authors fall back on the very broad statement of a “subjective determination that will be resolved on a case-by-case basis” (US Air Force). Since robots do not have the human quality of judging a situation within its context, they automatically fail the test. This, however, is a circular argument: because robots do not have human emotions and cannot make subjective decisions, they are deemed incapable of adhering to the principle of proportionality. Why? Because they are not human! But to assume that robots will have to mirror the capacities of the human species in order to be qualified to make the “right” decision is an arrogant and misleading position, because it renders every counter-argument invalid by definition. As such, this position does not lead us anywhere and makes it very easy for critics to refute robotic deployment since, well, robots are not human. Instead of relying on this circular argumentation, it would be more fruitful to ask which qualities exactly are needed to guarantee proportionality – both for humans and for robots – rather than simply asserting that robots are incapable of it because only humans can do it.

Emotions and self-serving arguments play on our fears

There are two important aspects that can be derived from the logic and arguments used by the opponents of “killer robots”. Firstly, the basic argument is still rooted in prejudice, conservatism and fear. Robots (and algorithms) are about to take over several aspects of human life, and that somehow makes us feel uncomfortable. However, discomfort and uncertainty do not end the debate, and banning robots from the battlefield will not resolve our issues. Insisting on the eternal flaws of robotics, machine learning or AI, and arguing that robots will never be sophisticated enough, is merely a deterministic, narrow-minded and self-serving point. Instead, and regardless of the exact trajectory of robotics, we need to face the more important underlying questions. How do we decide which level of algorithmic sophistication would be sufficient? Do we need to adjust international treaties and the definitions of certain aspects of warfare? And, more philosophically, how do humans ultimately relate to their machine creations? These questions lie at the bottom of this debate, and a simple ban will not help answer them.

Which standard do we use for robots?

Secondly, there is a huge double standard when it comes to judging robots and their suitability for warfare, because they are evaluated differently than humans. Whereas we seem comfortable with human error, we are not comfortable with machines failing to make the right decision. Large parts of the logic in the report “Losing Humanity” rest on that double standard. Opponents simply say that since there is no guarantee that robots will make the right decision in every case, they should not make it at all. This is a problematic assumption because it neglects the huge margin for human error. In order to be fair and sound, a debate about the suitability of robots should be based on a proper comparison between humans and machines instead of a utopian ideal.

To be more concrete: there will be no future in which robots make the “right” decision all the time. But there is no present in which humans make the right decision every time either. Thus, we need to compare the potential for robotic failure with human failure, because otherwise the argument has no empirical basis. At the moment, we do not know what the track record of robots would look like, but we do know that the track record of human soldiers is anything but glorious. To quickly illustrate that point, a search for “atrocities of soldiers in Iraq” yields 45 million results on Google.
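To make that comparison concrete, here is a minimal sketch of how such an empirical baseline could be established once data existed: compare observed error rates with confidence intervals instead of judging robots against a standard of perfection. All incident counts below are purely hypothetical placeholders, not real figures.

```python
import math

def wilson_interval(errors: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed error rate."""
    p = errors / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)) / denom
    return centre - margin, centre + margin

# Hypothetical incident counts per 10,000 engagements -- illustrative only.
cases = {"human soldiers": (120, 10_000), "autonomous systems": (80, 10_000)}

for label, (errors, trials) in cases.items():
    low, high = wilson_interval(errors, trials)
    print(f"{label}: rate = {errors / trials:.2%}, 95% CI = ({low:.2%}, {high:.2%})")
```

The point of the sketch is not the numbers, which are invented, but the framing: the relevant question is whether machines fail more or less often than humans, not whether they ever fail.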

It is precisely because of human emotions and the extreme situations in war that humans make problematic or wrong decisions. Yes, there is mercy on one side of the emotional spectrum, but there is rage and vindictiveness on the other. Yes, humans are able to carefully analyse a complex battle situation, but there is also fear, distraction or exhaustion that might lead us to make the wrong choice. Yes, there might be pride and glory in fighting for a good cause, but there is also the reality of post-traumatic stress disorder (PTSD) and other damage to the human body and mind. In short, the hailed characteristics of “human warfare” are at least evenly matched by its dark side.

Starting a proper debate

To condemn robots without having assessed their potential value, solely basing the discussion on utopian ideals that human soldiers could not uphold themselves, is both misguided and insincere. Building a case to ban autonomous machines in advance certainly does not solve the problem – not just because it would be a tough battle in practical terms, running against the rationale of a billion-dollar industry, but also because it obfuscates the more important debate we should start instead.

We need to approach the tough issues of this near future. Sometimes they match the broader set of questions raised by the massive expansion of data and machine learning in general. For instance, which decisions in social and political life should be complemented or substituted by algorithms? How do we weigh the benefits of emotions and subjectivity against the rational dictate of efficiency? Or, what is actually “human” in light of these developments, and how do we define ourselves in comparison to “intelligent” machines doing our work?

There are more specific questions when it comes to “killer robots”. What metrics can we use to determine whether robots are “fit” for battle? Or, more generally, does the deployment of robots increase the likelihood of war, as some opponents of “killer robots” have posited? Indeed, this is an important point. It could be that a decrease in human deaths in warfare will create an atmosphere in which political leaders are inclined to declare war more often. After all, they would not have to mourn and watch coffins coming home. On the other hand, the actual costs of war might increase, and the public could pay closer attention to where billions of the budget are being spent. Thus, the reduction in human loss might curiously lead to better public scrutiny.

This is mere speculation, but it is worth engaging in the debate instead of insisting on the superiority of humans confronted with their own creations. The hubris of humanity has rarely been good guidance for public discourse. We need to take responsibility for shaping an algorithmically enhanced future – not by invoking fear and creating circular arguments, but by looking into the details, questioning our assumptions and (re)designing the principles we share collectively.

Philosophy is dead – long live philosophy! The importance of ethical decision-making in the digital future

13 Feb
Algorithmic decision-making

In a debate between the biologist Richard Dawkins and the physicist Neil deGrasse Tyson, the question arose whether philosophy has anything to contribute to the modern world and science. The answer from both participants was fairly similar and short: no. The justification was equally simple. In the age of Newton, philosophy went hand in hand with the natural sciences. Newton, the great mathematician and physicist, was in fact a “natural philosopher”. But as the sciences developed and separated from one another, this fragmentation also implied a fragmentation of utility. Dawkins and deGrasse Tyson apply the cold and sharp knife of scientific utility when stating that, instead of philosophy, it is physics, biology and mathematics that drive knowledge production nowadays. This slow death of philosophy is certainly true if we only look at mere knowledge production. Formerly, natural philosophers discovered natural phenomena. Nowadays, it is natural scientists who found the Higgs boson, the particle needed to confirm current theories about elementary particles. This great scientific success was accomplished without philosophy.

However, deducing from the fragmentation of knowledge production that philosophy is futile would be dangerous and misleading. Rather, the core domain of philosophy has shifted. In a scientifically and technologically driven world, the value of philosophy will lie in its capacity to provide an ethical framework for an increasingly complex environment. Otherwise, the social applications of these innovations will be blind. In short: philosophy has moved on, from an engine of knowledge production to an overall conscience of our digital society.

Ethics as a foundation for the digital future

Quantum computing, big data analysis and algorithmic decision-making will expand their scope dramatically in the next decade. The geeky and less controversial vision of this future comes with Google’s “Project Glass”, which will hit the mainstream market in 2014. Such a form of personalised augmented reality already has implications. Users will be able to record conversations, search the web in real time, retrieve available information about objects or people, and integrate this extended connector into their daily lives. However, this is only one very visible result of the underlying development. The more fundamental change is related to the idea of (ro)botic development, which will affect society on a deeper level than a single Google product.

(Ro)bots and related challenges

(Ro)bots control a vast amount of daily decision-making. The unusual spelling “(ro)bots” has been put forward by Wendell Wallach, an eminent thinker in the field of machine ethics. It comprises both “robots” and “bots” as significant parts of that field. These two different areas will shape our digital future significantly, and philosophy is necessary to cope with them.

Robots

The robotic development is more visible than the bot development. However, it needs to be detached from past visions of Artificial Intelligence and from science-fiction novels by prolific writers such as Isaac Asimov or Philip K. Dick. This is not to say that such highly autonomous thinking machines are impossible. Rather, the aim is to disentangle a debate that alternates between futurology, technological determinism, dystopian visions and outright ignorance of technology.

Asimo

We are using some types of robots already. Google has developed self-driving cars that are already legal in several US states, the military is using remotely controlled drones, and surgeons rely on machines that are far more precise than they are. This random collection of examples shows that citizens, professionals and the state already rely upon robots with varying degrees of autonomy. The more elaborate these mechanical assistants become and the more tasks they can accomplish, the more pressing the questions about a (ro)botic future will become.

How will these developments affect personal autonomy? Which economic consequences have to be drawn from the fact that robots are both creating new jobs and replacing old ones? Which ethical considerations do we have to address? A recent article in the New Yorker presented an old philosophical thought experiment in an updated version: in an automated world where cars and other vehicles drive us, we still need to think about hypothetical situations. What if a school bus in front of you crashes on a bridge and your car, driving behind it, needs to “decide” whether to collide with the bus and potentially hurt some children, or switch to “suicide” mode and drive off the bridge? Granted, this is a simplified scenario of the kind taught in an undergraduate seminar. However, it underlines two crucial points. First, the old ethical dilemmas are as prevalent in the “future” as they are now. Second, the importance of these questions is amplified because the number of decision-makers will be reduced. Previously, the human driver would have decided on the spot. In this example, the car decides based on the interplay of its algorithm and the available external data. Thus, a majority of the decision-making is predefined and already integrated in the product. If that is the case, the need for discussing ethical and legal questions in advance becomes much more pertinent. Instead, these questions are often marginalised, ignored or replaced.
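To make the point about predefined decision-making concrete, here is a deliberately toy sketch of how such a trade-off could be encoded in software long before any accident occurs. Everything in it – the manoeuvres, the harm estimates and the weighting constant – is a hypothetical illustration, not a description of any real automotive system.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One manoeuvre the vehicle could take in the dilemma."""
    name: str
    harm_to_others: float    # hypothetical expected-harm estimate, 0..1
    harm_to_occupant: float  # hypothetical expected-harm estimate, 0..1

# The ethically loaded part: how occupant harm is weighed against harm
# to others. This constant is fixed at design time, which is exactly the
# sense in which the "decision" is made years before the crash.
OCCUPANT_WEIGHT = 1.0  # 1.0 = the occupant counts the same as anyone else

def choose(options: list[Option]) -> Option:
    """Pick the manoeuvre with the lowest weighted expected harm."""
    return min(options, key=lambda o: o.harm_to_others + OCCUPANT_WEIGHT * o.harm_to_occupant)

dilemma = [
    Option("brake and collide with the bus", harm_to_others=0.6, harm_to_occupant=0.2),
    Option("swerve off the bridge", harm_to_others=0.0, harm_to_occupant=0.9),
]
print(choose(dilemma).name)
```

Changing OCCUPANT_WEIGHT flips the outcome, which is precisely why such parameters need to be debated ethically and legally in advance rather than discovered in the wreckage.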

Bots

The second part of the (ro)botic development is equally important and based on the same logic. Instead of referring to a mechanical machine driven by algorithmic “thinking”, the idea of bots refers to the interconnected algorithmic analysis that facilitates, complements or replaces decision-making.

The analysis of major parts of our lives is already taking place and rapidly expanding. The possibilities are almost unlimited. Companies are using “persuasive technologies” that can nudge people to buy different products. On some stock markets, trading algorithms already account for two thirds of the total volume of exchanges. Governments can nudge citizens to live healthier, less wasteful and more peaceful lives. For instance, the UK government has already set up a Behavioural Insights Team (known as the “Nudge unit”) for that purpose.

Finally, users can try to control their lives more accurately: set themselves goals to live healthier, vary their eating habits, use a digital “butler” to track important behaviour and formulate habitual daily recommendations. If the algorithms determine too much, users can even randomise their past behaviour to create suggestions for the present (Facebook can give you simple analytics about the past ten years). However, in each of these different “persuasive” cases, the line between a positive form of persuasion and a negative form of manipulation can easily blur.

Ethical considerations

These social applications of technology need to be addressed, and they all have an ethical underpinning: does human autonomy increase or decrease? Which values should the bots follow – or, more precisely, which ethical considerations do we need to embed into the algorithms? The simple fact that they are efficient and fast does not make them good. We need to rethink our value system and see how we can apply our ideals to a society that is driven by algorithms. For all these questions, we need philosophy to help us out.

Political Philosophy

In addition to these fundamental ethical considerations, the same exercise needs to be repeated at the macro level of political philosophy. This is important because the old philosophical trench wars may have shifted almost unnoticed in the digital world.

An invigorated debate about political philosophy should start with the ethical considerations and acknowledge that the topics have become more fragile. The party lines of the past, derived from a distinct philosophical heritage, might not hold any more. It used to be simple: property, liberty, privacy. One side fighting for lower taxes, individual freedom and an unobtrusive government; the other favouring moderate intervention for the greater good. However, these supposedly simple truths are becoming more complex and less ideological. In a world of pervasive data processing that surrounds and accompanies citizens on a daily basis, the questions might be the same, but the answers might not be. Do we always favour individual liberty over the collective good? Some might say yes, but there are legitimate doubts once we assume that a powerful algorithm can actually calculate the most desirable behaviour for a majority of citizens. If it can be calculated what the ideal tax distribution should be, or how high the insurance rate for a citizen should be given his and other people’s behaviour, then some might change their opinion. If a healthier life, a cheaper tax rate or a better supply of information depends on data from other people, how sure can we be that individual liberty is always the most sacred good?
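To illustrate the kind of calculation this paragraph gestures at, here is a minimal sketch of an insurance premium that depends not only on a person’s own behaviour but on the pooled data of everyone else. The risk scores, the base rate and the pricing rule are all hypothetical placeholders.

```python
import statistics

# Hypothetical behavioural risk scores (0 = low risk, 1 = high risk)
# collected from the whole insurance pool, not just one person.
pool_scores = [0.20, 0.35, 0.50, 0.15, 0.60, 0.40]

BASE_PREMIUM = 100.0  # hypothetical monthly base rate

def premium(own_score: float, pool: list[float]) -> float:
    """Premium rises or falls with one's risk *relative to the pool average*.

    The point: the rate an individual pays is a function of other
    people's data, not only of their own behaviour.
    """
    return BASE_PREMIUM * (1 + (own_score - statistics.mean(pool)))

print(f"{premium(0.20, pool_scores):.2f}")  # below-average risk pays less
print(f"{premium(0.60, pool_scores):.2f}")  # above-average risk pays more
```

Even in this toy version, one person’s premium changes when someone else’s behaviour shifts the pool average – which is exactly the tension between individual liberty and the collective good described above.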

The recent “Do Not Track” initiative in Europe can elucidate that dilemma. On the surface, it looks like an excellent idea to protect citizens and consumers: one should not be tracked against one’s will. This would serve the value of privacy in the digital world. However, by framing the tracking issue as a mere privacy problem, the big picture gets lost. Cookies are the digital glue that sticks together individual preferences, search results and analytics, and thus helps to refine the feedback loops. In general, tracking via cookies allows for all sorts of good and all sorts of evil. It is thus more important to ask when and for what purposes tracking might be useful.

These debates should not be confined to the realms of academia, the public sphere or clandestine political negotiations. They need to be integrated as freely and openly as possible into a societal discourse. Specifically, opinion leaders and multipliers in different echo chambers need to address these questions in order to avoid a hysterical debate or ideological warfare.

The end of ideology?

In general, it also seems less clear what constitutes a more right-wing or left-wing approach towards these issues. Does a conservative government want to collect more data or less? It might not want to infringe on citizens’ liberty, but it might also want to stay in control or use the data. Similarly, a more liberal, left-wing or democratic government might not want to collect data or “monitor” its citizens, but the value of nudging people for the greater good cannot easily be refuted. Thus, instead of fighting over ideological terms marked by the burden of their long history, a more pragmatic approach could succeed: using evidence and complex analysis to create policy instead of arguing blindly about ideological phrases.

Suddenly, the trench wars of political philosophy become less clear. Without going into an analysis of the current political landscape in most Western democracies, it seems rather striking that the differences between governments on the “left” and governments on the “right” are less and less clear. One explanation might be the beginning of a trend that sees data-driven analysis as more important than arguing about the “invisible hand” – after all, a 200-year-old assumption based on simple observations by a single clever individual.

Long live philosophy

Philosophy is necessary to grapple with these issues, since algorithmic operations already surround us and their sophistication is only increasing. They have implications for the ethical foundation of our society (or societies) and for the trench wars of political philosophy. We need to start addressing the important questions more thoroughly instead of repeating old debates. In doing so, we can differentiate between two types of problems: firstly, old questions that repeat themselves in a similar modern form, such as philosophical dilemmas transposed to the digital world; secondly, old questions that might require new answers, such as the broader debates relating to individual autonomy, the “digital greater good”, collective decision-making and the future of hierarchical structures.

For all these debates, the philosophical toolkit will be just as important as it has been in the past. The fact that knowledge production no longer belongs to the core of philosophy has only freed the discipline of one burden to make space for another: responsible decision-making. In fact, the automated form of knowledge production via (ro)bots has made it even more important to keep the bigger questions in mind. After all, Dawkins and deGrasse Tyson were only half right in their judgement on the importance of philosophy.

The end of ideology? Big data and decision-making in politics

12 Feb
The end of ideology?

Throughout most of the 20th century, a leader or ruling party tried to frame the policy visions of the future and then act upon them. Mostly, ideology and simple heuristics were used to accomplish this goal, which led to clear slogans, e.g. supporting a free-market economy, lowering taxes or investing in education and other public goods. Above all, this was beneficial for two reasons. Firstly, it allowed political leaders to reduce uncertainty by having a long-term agenda. Secondly, it could be used to justify their actions to their constituents.

However, we are currently witnessing a shift in Western politics that has several symptoms; I will focus on three of them. Firstly, the formerly distinguishable profiles of right-wing and left-wing parties are eroding. As a result, several European democracies are observing political tendencies towards the “middle” of society, where politicians try to appeal to as large a voter base as possible. Secondly, the threshold of expertise in certain areas has become so high that only a few people seem able to understand specific problems and design solutions. Thirdly, the scope of political agendas, visionary ideas and ideological slogans has been reduced significantly. Previously, agendas were designed for the long run, but now the fast-paced global environment seems to dictate the political agenda, which requires constant updates. The simplest example is the financial crisis.

In Germany, the current chancellor, Merkel, is responding to this challenge by planning for the short term and (re)adjusting to current problems, even if that means contradicting past comments. In response, the media mostly accuse the incumbent of being “pragmatic, yet without a vision” or “lacking courage”. However, these political judgements use the wrong frame of analysis. It is not the politician who lacks courage; it is the circumstances that have become overly complex. The long-term predictability of political visions has been sacrificed in order to remain responsive in the short term. As a result, the interconnectedness of social, economic and political problems no longer allows politicians to force their agenda upon reality. The empirical glimpse that experts are getting from large data sets and algorithmic analysis shows that simple recipes and ideological slogans will not suffice to solve problems. Although far from perfect, the analysis of big data exposes the simplicity of ideological slogans, e.g. “raise taxes” or “cut benefits”, on all sides of the political spectrum. The slogans are just not cutting it anymore, and people are losing faith in the political system.

In short: the crisis symptoms of the political class, where leaders make false promises or contradict themselves, are most likely due to an overly complex reality that is far better suited to algorithmically based decision-making than to error-prone human judgement. It is therefore no coincidence that the application of algorithms and big data is increasing at the same time as political “courage” is decreasing.

If this analysis holds, then several conclusions can be drawn. Firstly, the era of individual leaders with big visions is likely over, and with it the political trench wars and the ideological framing of certain problems. So far, visions and agendas have been useful as a political compass. In the current environment, however, they can be harmful artefacts of a time when leaders had to reduce uncertainty while acting under unknown conditions. This was necessary because, in a situation with many unknown variables, one could rely on a clear and desirable vision, use trial-and-error methods and then hope for the best. By contrast, marginal improvements and efficient problem-solving will be much simpler in an algorithmic environment. Instead of fighting over the budget, we can actually focus on how to improve living conditions, distribute wealth or calculate the most efficient health insurance for a given situation.

Secondly, algorithmic analysis will have implications for the political process. The hierarchical world of public administrations, with its top-down decision-making, will likely be shaken up. A much more flexible model of decision-making is needed, and we must ask how the policy-making process should be adjusted to be both responsive and accurate. So far, public administrations seem unwilling to put themselves under the microscope.

Thirdly, algorithmic decision-making has implications for our understanding of democracy as a whole. If politicians have to respond to problems by using sophisticated mechanisms of empirical analysis instead of political “guesswork” aligned with their campaign promises, then they will have to change their minds more frequently. Currently, this is considered a problem. However, it might actually be a huge advantage. To borrow the bon mot attributed to the economist John Maynard Keynes, who, when challenged over his contradictory positions, is said to have replied: “If the facts change, I change my mind. What do you do, sir?”

However, as citizens or media practitioners, we try to hold politicians to account by using their previous comments or their electoral promises. This is where the problem arises. On the one hand, these complaints are absolutely justified, because we have voted for our representatives. On the other hand, they are unfair and harmful, since they only encourage politicians, journalists and citizens to continue a well-established masquerade: citizens complain about the character traits of politicians (greed, corruptibility); journalists use these simplified categories; and politicians try to evoke the illusion that they are still in control by responding in “human” categories, being “honest” and showing “leadership quality”. This is a tragedy, because such democratic theatre does not expand the public discourse. In fact, it only obfuscates the new techno-social challenges. Simple nostalgia and mourning for a less complex past are misleading.

Instead of watching that theatre, we need to address the underlying questions: how can we use algorithms to facilitate problem analysis and decision-making inside institutions? How can we balance this new form of decision-making with accountability towards the public? And, above all, is our current understanding of democracy actually well equipped to address these questions properly? By starting that debate, we can avoid perpetuating the masquerade and begin to understand one of the most vital challenges of the 21st century.