Philosophy is dead – long live philosophy! The importance of ethical decision-making in the digital future

13 Feb
Algorithmic decision-making

In a debate between the biologist Richard Dawkins and the physicist Neil deGrasse Tyson, the question arose whether philosophy has anything to contribute to the modern world and to science. The answer from both participants was similar and short: no. The justification was equally simple. In the age of Newton, philosophy went hand in hand with the natural sciences; Newton, the great mathematician and physicist, was in fact a “natural philosopher”. But as the sciences developed and split into separate disciplines, this fragmentation also implied a fragmentation of utility. Dawkins and deGrasse Tyson apply the cold, sharp knife of scientific utility when they state that it is physics, biology and mathematics, not philosophy, that drive knowledge production nowadays. This slow death of philosophy is certainly real if we look only at knowledge production. Where natural philosophers once discovered natural phenomena, natural scientists have now found the Higgs boson, the particle needed to confirm current theories about elementary particles. This great scientific success was accomplished without philosophy.

However, deducing from the fragmentation of knowledge production that philosophy is futile would be dangerous and misleading. Rather, the core domain of philosophy has shifted. In a scientifically and technologically driven world, the value of philosophy will lie in its capacity to provide an ethical framework for an increasingly complex environment. Without such a framework, the social applications of these innovations will be blind. In short: philosophy has moved on from an engine of knowledge production to an overall conscience of our digital society.

Ethics as a foundation for the digital future

Quantum computing, big data analysis and algorithmic decision-making will expand their scope dramatically in the next decade. The geeky and less controversial vision of this future comes with Google’s “Project Glass”, which will hit the mainstream market in 2014. Such a form of personalised augmented reality already has implications. Users will be able to record conversations, search the web in real time, retrieve available information about objects or people, and integrate this extended connectivity into their daily lives. However, this is only one very visible result of the underlying development. The more fundamental change relates to (ro)botic development, which will affect society on a deeper level than a single Google product.

(Ro)bots and related challenges

(Ro)bots already control a vast amount of daily decision-making. The unusual spelling “(ro)bots” has been put forward by Wendell Wallach, an eminent thinker in the field of machine ethics; it comprises both “robots” and “bots” as significant parts of that field. These two different areas will influence our digital future significantly, and philosophy is necessary to cope with both.

Robots

The robotic development is more visible than the bot development. However, it needs to be detached from past visions of Artificial Intelligence and from the science-fiction novels of prolific writers such as Isaac Asimov or Philip K. Dick. This is not to say that such highly autonomous thinking machines are impossible. Rather, the point is to disentangle a debate that alternates between futurology, technological determinism, dystopian visions and outright ignorance of technology.

Asimo

We are already using some types of robots. Google has developed self-driving cars that are already legal in several US states. The military is using remotely controlled drones, and surgeons are relying on machines that are far more precise than they are. This random collection of examples shows that citizens, professionals and the state already rely upon robots with varying degrees of autonomy. The more sophisticated these mechanical assistants become, and the more tasks they can accomplish, the more pressing the questions about a (ro)botic future will become.

How will these developments affect personal autonomy? What economic consequences follow from the fact that robots are both creating new jobs and replacing old ones? Which ethical considerations do we have to address? A recent article in the New Yorker presented an old philosophical thought experiment in an updated version: in an automated world where cars and other vehicles drive us, we still need to think about hypothetical situations. What if a school bus in front of you crashes on a bridge and your car, driving behind it, needs to “decide” whether to collide with the bus and potentially hurt some children, or to switch into “suicide” mode and drive off the bridge? Granted, this is a simplified scenario of the kind taught in an undergraduate seminar. However, it underlines two crucial points. First, the old ethical dilemmas are as prevalent in the “future” as they are now. Second, the importance of these questions is amplified because the number of decision-makers will be reduced. Previously, the human driver would have decided on the spot. In this example, the car decides based on the interplay of its algorithm and the available external data. Thus, the majority of the decision-making is predefined and already integrated into the product. If that is the case, the need to discuss ethical and legal questions in advance becomes much more pertinent. Instead, these questions are often marginalised, ignored or displaced.
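The point that the moral trade-off is resolved at design time, long before any bus ever crashes, can be made concrete with a deliberately simplified toy sketch. Everything here is invented for illustration: the function names, the crude "minimise occupants at risk" rule, and the scenario itself are not drawn from any real vehicle's software.

```python
# Toy sketch of the bridge dilemma: the car's "ethics" is a predefined rule,
# fixed when the product ships, not chosen on the spot by a human driver.
# All names and the harm-minimising rule are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Obstacle:
    occupants: int  # people at risk if the car collides with the obstacle

def choose_action(obstacle: Obstacle, own_occupants: int) -> str:
    """Predefined decision rule: minimise the number of people at risk.

    Whoever writes this comparison has answered the ethical question in
    advance, for every future driver, which is exactly why such rules need
    public debate before they are embedded in a product.
    """
    if obstacle.occupants > own_occupants:
        return "swerve"  # e.g. drive off the bridge, risking the car's own occupants
    return "brake"       # stay on course and brake, risking the obstacle's occupants

# A crashed school bus with twenty children ahead, one passenger in the car:
print(choose_action(Obstacle(occupants=20), own_occupants=1))  # → swerve
```

The sketch is crude on purpose: a single integer comparison stands in for an entire moral philosophy, which is precisely the problem the paragraph above describes.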

Bots

The second part of the (ro)botic development is equally important and based on the same logic. Instead of a mechanical machine driven by algorithmic “thinking”, the idea of bots refers to the interconnected algorithmic analysis that facilitates, complements or replaces decision-making.

The analysis of major parts of our lives is already taking place and rapidly expanding. The possibilities are almost unlimited. Companies are using “persuasive technologies” that can nudge people to buy different products. On some stock markets, trading algorithms already account for two thirds of the total volume of exchanges. Governments can nudge citizens to live healthier, less wasteful and more peaceful lives. For instance, the UK government has already set up a Behavioural Insights Team (known as the “Nudge Unit”) for that purpose.

Finally, users can try to control their lives more accurately: set themselves goals to live healthier, vary their eating habits, or use a digital “butler” to track important behaviour and formulate daily recommendations. If the algorithms become too deterministic, users can even randomise their past behaviour to create suggestions for the present (Facebook can give you simple analytics about the past ten years). However, in each of these “persuasive” cases, the line between a positive form of persuasion and a negative form of manipulation can easily become blurred.

Ethical considerations

These social applications of technology need to be addressed, and they all have an ethical underpinning: does human autonomy increase or decrease? Which values should the bots follow, or, more precisely, which ethical considerations do we need to embed into the algorithms? The simple fact that they are efficient and fast does not make them good. We need to rethink our value system and ask how we can apply our ideals to a society driven by algorithms. For all these questions, we need philosophy to help us out.

Political Philosophy

In addition to these fundamental ethical considerations, the same exercise needs to be repeated at the macro level of political philosophy. This is important because the old philosophical trench wars may have shifted almost unnoticed in the digital world.

An invigorated debate about political philosophy should start from these ethical considerations and acknowledge that the old certainties have become more fragile. The party lines of the past, derived from a distinct philosophical heritage, might not hold any more. It used to be simple: property, liberty, privacy. One side fought for lower taxes, individual freedom and an unobtrusive government; the other favoured moderate intervention for the greater good. However, these supposedly simple truths are becoming more complex and less ideological. In a world of constant data processing that surrounds and accompanies citizens daily, the questions might be the same, but the answers might not be. Do we always favour individual liberty over the collective good? Some might say yes, but there are legitimate doubts once we assume that a powerful algorithm can actually calculate the most desirable behaviour for a majority of citizens. If it can be calculated what the ideal tax distribution should be, or how high the insurance rate for a citizen should be given his and other people’s behaviour, then some might change their opinion. If a healthier life, a lower tax rate or a better information supply depends on data from other people, how sure can we be that individual liberty is always the most sacred good?

The recent “Do Not Track” initiative in Europe illustrates that dilemma. On the surface it looks like an excellent idea for protecting citizens and consumers: one should not be tracked against one’s will. This would serve the value of privacy in the digital world. However, by framing the tracking issue as a mere privacy problem, the big picture gets lost. Cookies are the digital glue that binds together individual preferences, search results and analytics, and thus helps to refine the feedback loops. In general, tracking via cookies allows for all sorts of good and all sorts of evil. It is therefore more important to ask when, and for what purposes, tracking might be useful.

These debates should not be confined to the realms of academia, the public sphere or clandestine political negotiations. They need to be integrated as freely and openly as possible into a societal discourse. Specifically, opinion leaders and multipliers in different echo chambers need to address these questions in order to avoid a hysterical debate or ideological warfare.

The end of ideology?

In general, it also seems less clear what a right-wing or left-wing approach to these issues would be. Does a conservative government want to collect more data or less? It might not want to infringe on citizens’ liberty, but it might also want to stay in control or use the data. Similarly, a more liberal, left-wing or democratic government might not want to collect data or “monitor” its citizens, but the value of nudging people for the greater good cannot easily be refuted. Thus, instead of fighting over ideological terms marked by the burden of their long history, a more pragmatic approach could succeed: using evidence and complex analysis to create policy instead of arguing blindly about ideological phrases.

Suddenly, the trench wars of political philosophy become less clear. Without analysing the current political landscape of most Western democracies in detail, it is striking that the differences between governments on the “left” and governments on the “right” are less and less clear. One explanation might be the beginning of a trend that treats data-driven analysis as more important than arguments about the “invisible hand”, after all a 200-year-old assumption based on simple observations by a single clever individual.

Long live philosophy

Philosophy is necessary to grapple with these issues, since algorithmic operations already surround us and their sophistication is only increasing. They have implications for the ethical foundation of our societies and for the trench wars of political philosophy. We need to start addressing the important questions more thoroughly instead of repeating old debates. In doing so, we can differentiate between two types of problems. Firstly, old questions that repeat themselves in a similar modern form, such as philosophical dilemmas transposed to the digital world. Secondly, old questions that might require new answers, such as the broader debates about individual autonomy, the “digital greater good”, collective decision-making and the future of hierarchical structures.

For all these debates, the philosophical toolkit will be just as important as it has been in the past. The fact that knowledge production no longer belongs to the core of philosophy has merely freed the discipline of one burden to make space for another: responsible decision-making. In fact, the automated form of knowledge production via (ro)bots has made it even more important to keep the bigger questions in mind. After all, Dawkins and deGrasse Tyson were only half right in their judgment on the importance of philosophy.

The end of ideology? Big data and decision-making in politics

12 Feb
The end of ideology?

Throughout most of the 20th century, a leader or ruling party would try to frame the policy visions of the future and then act upon them. Mostly, ideology and simple heuristics were used to accomplish this goal, which led to clear slogans, e.g. supporting a free market economy, lowering taxes or investing in education and other public goods. This was beneficial for two reasons. Firstly, it allowed political leaders to reduce uncertainty by having a long-term agenda. Secondly, it could be used to justify their decisions to their constituents.

However, we are currently witnessing a shift in Western politics that has several symptoms; I will focus on three of them. Firstly, the formerly distinguishable profiles of right-wing and left-wing parties are eroding. As a result, several European democracies are observing political tendencies towards the “middle” of society, where politicians try to appeal to as large a voter base as possible. Secondly, the threshold of expertise in certain areas has become so high that only a few people seem able to understand specific problems and design solutions. Thirdly, the scope of political agendas, visionary ideas and ideological slogans has been reduced significantly. Previously, agendas were designed for the long run; now the fast-paced global environment seems to dictate the political agenda, which requires constant updates. The simplest example is the financial crisis.

In Germany, the current chancellor, Merkel, is responding to this challenge by planning for the short term and (re)adjusting to current problems, even if that means contradicting past statements. In response, the media mostly accuse the incumbent of being “pragmatic, yet without a vision” or “lacking courage”. However, these political judgments use the wrong frame of analysis. It is not the politician who lacks courage; it is the circumstances that have become overly complex. Long-term predictability of political visions has been sacrificed in order to ensure short-term adaptability. As a result, the interconnectedness of social, economic and political problems no longer allows politicians to force their agenda upon reality. The empirical glimpse that experts are getting from large data sets and algorithmic analysis shows that simple recipes and ideological slogans will not suffice to solve problems. Although far from perfect, the analysis of big data exposes the simplicity of ideological slogans, e.g. “raise taxes” or “cut benefits”, on all sides of the political spectrum. The slogans are just not cutting it anymore, and people are losing faith in the political system.

In short: the crisis symptoms of the political class, where leaders make false promises or contradict themselves, are less a matter of human error than of an overly complex reality that is far more amenable to algorithmically based decision-making. It is therefore no coincidence that the application of algorithms and big data is increasing at the same time as political “courage” is decreasing.

If this analysis holds, several conclusions follow. Firstly, the era of individual leaders with big visions is likely over, and with it the political trench wars and the ideological framing of certain problems. So far, visions and agendas have been useful as a political compass. In the current environment, however, they can be harmful artefacts of a time when leaders had to reduce uncertainty while acting under unknown conditions: in a situation with many unknown variables, one could rely on a clear and desirable vision, use trial-and-error methods and hope for the best. By contrast, marginal improvements and efficient problem-solving will be much simpler in an algorithmic environment. Instead of fighting over the budget, we can actually focus on how to improve living conditions, distribute wealth, or calculate the most efficient health insurance for a given situation.

Secondly, algorithmic analysis will have implications for the political process. The hierarchical world of public administrations, with its top-down decision-making, will likely be shaken up. A much more flexible model of decision-making is needed, and the policy-making process will have to be adjusted to remain responsive and accurate. So far, public administrations seem unwilling to put themselves under the microscope.

Thirdly, algorithmic decision-making has implications for our understanding of democracy. If politicians have to respond to problems by using sophisticated mechanisms of empirical analysis instead of political “guesswork” aligned with their campaign promises, then they will have to change their minds more frequently. Currently, this is considered a problem. However, it might actually be a huge advantage. To use the bon mot of the economist John Maynard Keynes, who, when challenged about his contradictory theories, responded: “If the facts change, I change my mind. What do you do, sir?”

However, as citizens or media practitioners, we try to hold politicians to account by using their previous statements or their electoral promises. This is where the problem arises: on the one hand, these complaints are absolutely justified, because we have voted for our representatives. On the other hand, it is unfair and harmful, since it only encourages politicians, journalists and citizens to continue a well-established masquerade: citizens complain about the characteristics of politicians (greed, corruptibility), journalists use these simplified categories, and politicians try to maintain the illusion that they are still in control by responding in “human” categories of being “honest” and showing “leadership quality”. This is a tragedy, because such democratic theatre does not expand the public discourse. In fact, it only obfuscates the new techno-social challenges. Simple nostalgia and mourning for a less complex past are misleading.

Instead of watching that theatre, we need to address the underlying questions: how can we use algorithms to facilitate problem analysis and decision-making inside institutions? How can we balance this new form of decision-making with the accountability towards the public? And above all, is our current democratic understanding actually well-equipped to address these questions properly? In starting that debate, we can avoid perpetuating the masquerade and begin to understand one of the most vital challenges of the 21st century.