Philosophy is dead – long live philosophy! The importance of ethical decision-making in the digital future

13 Feb
Algorithmic decision-making

In a debate between the biologist Richard Dawkins and the physicist Neil deGrasse Tyson, the question arose whether philosophy has anything to contribute to the modern world and to science. The answer from both participants was similar and short: no. The justification was equally simple. In the age of Newton, philosophy went hand in hand with the natural sciences; Newton, the great mathematician and physicist, was in fact a “natural philosopher”. But as the sciences developed and separated from one another, this fragmentation also implied a fragmentation of utility. Dawkins and deGrasse Tyson apply the cold, sharp knife of scientific utility when they state that it is physics, biology and mathematics, not philosophy, that drive knowledge production nowadays. This slow death of philosophy is certainly real if we look only at mere knowledge production. Formerly, natural philosophers discovered natural phenomena. Nowadays, natural scientists have found the Higgs boson, the particle needed to confirm the current theories about elementary particles. This great scientific success was accomplished without philosophy.

However, deducing from the fragmentation of knowledge production that philosophy is futile would be dangerous and misleading. Rather, the core domain of philosophy has shifted. In a scientifically and technologically driven world, the value of philosophy will lie in its capacity to provide an ethical framework for an increasingly complex environment; without one, the social applications of these innovations will be blind. In short: philosophy has moved on from an engine of knowledge production to an overall conscience of our digital society.

Ethics as a foundation for the digital future

Quantum computing, big data analysis and algorithmic decision-making will expand their scope dramatically in the next decade. The geeky and less controversial vision of this future comes with Google’s “Project Glass”, which will hit the mainstream market in 2014. Such a form of personalised augmented reality already has implications. Users will be able to record conversations, search the web in real time, retrieve available information about objects or people, and integrate this extended connectivity into their daily lives. However, this is only one very visible result of the underlying development. The more fundamental change is related to the idea of (ro)botic development, which will affect society on a deeper level than a single Google product.

(Ro)bots and related challenges

(Ro)bots already control a vast amount of daily decision-making. The unusual spelling “(ro)bots” was put forward by Wendell Wallach, an eminent thinker in the field of machine ethics; it comprises both “robots” and “bots” as the two significant parts of that field. These two different areas will influence our digital future significantly, and philosophy is necessary to cope with them.

Robots

The robotic development is more visible than the bot development. However, it needs to be detached from past visions of Artificial Intelligence and from the science-fiction novels of prolific writers such as Isaac Asimov or Philip K. Dick. This is not to say that such highly autonomous thinking machines are impossible. Rather, the aim is to disentangle a debate that alternates between futurology, technological determinism, dystopian visions and outright ignorance of technology.

Asimo

We are already using several types of robots. Google has developed self-driving cars that are already legal in several US states, the military is using remotely controlled drones, and surgeons rely on machines that are far more precise than they are. This random collection of examples shows that citizens, professionals and the state already rely upon robots with varying degrees of autonomy. The more elaborate these mechanical assistants become and the more tasks they can accomplish, the more pressing the questions about a (ro)botic future will become.

How will these developments affect personal autonomy? Which economic consequences have to be drawn from the fact that robots are both creating new jobs and replacing old ones? Which ethical considerations do we have to address? A recent article in the New Yorker presented an old philosophical thought experiment in an updated version: in an automated world where cars and other vehicles drive us, we still need to think about hypothetical situations. What if a school bus in front of you crashes on a bridge and your car, driving behind it, needs to “decide” whether to collide with the bus and potentially hurt some children, or to switch to “suicide” mode and drive off the bridge? Granted, this is a simplified scenario of the kind taught in an undergraduate seminar. However, it underlines two crucial points. First, the old ethical dilemmas are as prevalent in the “future” as they are now. Second, the importance of these questions is amplified because the number of decision-makers will be reduced. Previously, the human driver would have decided on the spot. In this example, the car decides based on the interplay between its algorithm and the available external data. Thus, the majority of the decision-making is predefined and already integrated into the product. If that is the case, the need to discuss ethical and legal questions in advance becomes much more pertinent. Instead, these questions are often marginalised, ignored or deferred.
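To see what “predefined and already integrated into the product” means in practice, consider a deliberately crude sketch in Python. Every name and rule here is hypothetical, a toy illustration rather than anything a manufacturer actually ships; the point is only that the ethical trade-off sits in code written long before any bridge is in sight.

```python
# A deliberately crude, hypothetical sketch of the bridge dilemma.
# The ethical trade-off is fixed in code long before the accident occurs.

def choose_maneuver(bus_occupants: int, car_occupants: int) -> str:
    """Return the pre-programmed response to an unavoidable collision.

    The utilitarian weighting below is an arbitrary illustration, not a
    proposal: whoever writes this function has already made the ethical
    decision on behalf of every future passenger.
    """
    if bus_occupants > car_occupants:
        return "swerve_off_bridge"  # sacrifice the car's occupants
    return "brake_and_collide"      # accept the collision with the bus

# At the scene, the "decision" is just the evaluation of a rule chosen
# years earlier by engineers, lawyers and regulators.
print(choose_maneuver(bus_occupants=30, car_occupants=1))
```

Whether the comparison should count occupants at all, and who gets to decide that, is precisely the kind of question that has to be settled publicly before such code is written.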

Bots

The second component of the (ro)botic pair is equally important and based on the same logic. Instead of a mechanical machine driven by algorithmic “thinking”, the idea of bots refers to the interconnected algorithmic analysis that facilitates, complements or replaces decision-making.

The analysis of major parts of our lives is already taking place and rapidly expanding, and the possibilities are almost unlimited. Companies use “persuasive technologies” that can nudge people to buy different products. On some stock markets, trading algorithms already account for two thirds of the total volume of exchanges. Governments can nudge citizens to live healthier, less wasteful and more peaceful lives; the UK government, for instance, has set up a Behavioural Insights Team (known as the “Nudge Unit“) to fulfil exactly that purpose.

Finally, users can try to control their lives more accurately: set themselves goals to live healthier, vary their eating habits, or use a digital “butler” to track important behaviour and formulate daily habit recommendations. If the algorithms determine too much, users can even randomise their past behaviour to create suggestions for the present (Facebook, for instance, can give you simple analytics about the past ten years). However, in each of these “persuasive” cases, the threshold between a positive form of persuasion and a negative form of manipulation can easily become blurred.
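As a toy illustration of that last point, such a digital “butler” need not be more sophisticated than a sampler over a log of past behaviour. The following Python sketch is entirely hypothetical; it assumes nothing more than a list of logged activities and shows how little “intelligence” a randomised suggestion actually requires.

```python
import random

# Hypothetical activity log that a self-tracking app might have collected.
past_activities = [
    "ran 5 km",
    "cooked at home",
    "read 30 pages",
    "called a friend",
    "went to bed before 23:00",
]

def suggest_habit(log: list[str]) -> str:
    """Randomise past behaviour into a recommendation for the present."""
    return f"Today, why not repeat this: {random.choice(log)}?"

print(suggest_habit(past_activities))
```

The persuasion/manipulation question is untouched by such code: the same sampler can serve the user’s stated goals or someone else’s commercial ones, depending entirely on who curates the log and the suggestions.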

Ethical considerations

These social applications of technology need to be addressed, and they all have an ethical underpinning: does human autonomy increase or decrease? Which values should the bots follow, or, more precisely, which ethical considerations do we need to embed into the algorithms? The simple fact that they are efficient and fast does not make them good. We need to rethink our value system and see how we can apply our ideals to a society that is driven by algorithms. For all these questions, we need philosophy to help us out.

Political Philosophy

In addition to these fundamental ethical considerations, the same exercise needs to be repeated at the macro level of political philosophy. This is important because the old philosophical trench wars might have shifted almost unnoticed in the digital world.

An invigorated debate about political philosophy should start with the ethical considerations and acknowledge that the topics have become more fragile. The party lines of the past, derived from a distinct philosophical heritage, might not hold any more. It used to be simple: property, liberty, privacy. One side fighting for lower taxes, individual freedom and an unobtrusive government; the other favouring moderate intervention for the greater good. However, these supposedly simple truths are becoming more complex and less ideological. In a world of constant data processing that surrounds and accompanies citizens daily, the questions might be the same, but the answers might not be. Do we always favour individual liberty over the collective good? Some might say yes, but there are legitimate doubts once we assume that a powerful algorithm can actually calculate the most desirable behaviour for a majority of citizens. If it can be calculated what the ideal tax distribution should be, or how high the insurance rate for a citizen should be given his and other people’s behaviour, then some might change their opinion. If a healthier life, a lower tax rate or a better information supply depends on data from other people, how sure can we be that individual liberty is always the most sacred good?

The recent “Do Not Track” initiative in Europe illustrates that dilemma. On the surface it looks like an excellent idea to protect citizens and consumers: one should not be tracked against one’s will, and this would serve the value of privacy in the digital world. However, by framing the tracking issue as a mere privacy problem, the big picture gets lost. Cookies are the digital glue that sticks together individual preferences, search results and analytics, and thus helps to refine the feedback loops. In general, tracking via cookies allows for all sorts of good and all sorts of evil. It is thus more important to ask when and for what purposes tracking might be useful.
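Technically, the mechanism behind “Do Not Track” is trivial: a browser sends a “DNT: 1” header with each request, and it is entirely up to the server whether to honour it before setting a tracking cookie. The following minimal sketch uses Flask; the route and cookie names are made up for illustration.

```python
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello, visitor")
    # "DNT: 1" signals that the user does not want to be tracked.
    # Honouring it is voluntary: the ethical question is settled here,
    # in the server's code, not by the header itself.
    if request.headers.get("DNT") != "1":
        resp.set_cookie("visitor_id", "hypothetical-id-123")
    return resp

if __name__ == "__main__":
    app.run()
```

The entire policy debate, when and for what purposes a cookie like this should be set, collapses into that single if statement, which is exactly why it should not be left to the server’s author alone.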

These debates should not be confined to the separate realms of academia, the public sphere or clandestine political negotiations. They need to be integrated as freely and openly as possible into a societal discourse. Specifically, opinion leaders and multipliers in different echo chambers need to address these questions in order to avoid a hysterical debate or ideological warfare.

The end of ideology?

In general, it also seems less clear what constitutes a more right-wing or left-wing approach to these issues. Does a conservative government want to collect more data or less? It might not want to infringe on citizens’ liberty, but it might also want to stay in control or use the data. Similarly, a more liberal, left-wing or democratic government might not want to collect data or “monitor” its citizens, but the value of nudging people for the greater good cannot easily be refuted. Thus, instead of fighting over ideological terms marked by the burden of their long history, a more pragmatic approach could succeed: using evidence and complex analysis to create policy instead of arguing blindly about ideological phrases.

Suddenly, the trench wars of political philosophy become less clear-cut. Without going into an analysis of the current political landscape in most Western democracies, it is rather striking that the differences between governments on the “left” and governments on the “right” are less and less clear. One explanation might be the beginning of a trend that treats data-driven analysis as more important than arguing about the “invisible hand”, which is, after all, a 200-year-old assumption based on simple observations by a single clever individual.

Long live philosophy

Philosophy is necessary to grapple with these issues, since algorithmic operations already surround us and their sophistication is only increasing. They have implications for the ethical foundations of our society (or societies) and for the trench wars of political philosophy. We need to start addressing the important questions more thoroughly instead of repeating old debates. In doing so, we can differentiate between two types of problems: first, old questions that repeat themselves in a similar, modern form, such as the philosophical dilemmas of the digital world; second, old questions that might require new answers, such as the broader debates about individual autonomy, the “digital greater good”, collective decision-making and the future of hierarchical structures.

For all these debates, the philosophical toolkit will be as important as it has been in the past. The fact that knowledge production no longer belongs to the core of philosophy has only freed the discipline of one burden to make space for another: responsible decision-making. In fact, the automated form of knowledge production via (ro)bots has made it even more important to keep the bigger questions in mind. After all, Dawkins and deGrasse Tyson were only half right in their judgment on the importance of philosophy.