
AI, Human Rights, and Algorithmic Transparency

How do AI and automated decision-making co-exist with human rights and self-determination? Michael Rotert from the eco Association provides an overview of the Council of Europe’s deliberations on a topic which presents both potential benefits and ethical challenges for society.


The development of artificial intelligence (AI) opens many possibilities to explore its benefits to businesses, people and societies. Its impacts are everywhere and they present opportunities as well as challenges for the lives and futures of people. AI-based technologies affect the core of our personal lives and interactions with others.

In many areas, experts are examining the capacity of algorithmic processes to serve as powerful tools of manipulation, with significant individual and societal impacts on the formation of opinions, public discourse, the media, and democratic processes. This raises the question of how to prevent negative human rights impacts from algorithmic decision-making processes, and of who should be responsible for doing so.

The European Committee on Crime Problems will examine the substantive criminal law challenges posed by advances in robotics, AI, and smart autonomous machinery, including self-driving cars, drones, and other forms of robots capable of causing physical harm independently of human operators. The anticipated outcome of this work is a standard-setting instrument, potentially in the form of a Council of Europe convention.

To address the constantly evolving challenges to the right to privacy and protection of personal data in the context of technological convergence and artificial intelligence, the Council of Europe in Strasbourg is preparing a comprehensive report on the data protection implications of artificial intelligence. This report will contain recommendations for limiting the impacts of new technologies on privacy and human dignity. It is being carried out by the Consultative Committee of the only international data protection treaty, the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

Also important to watch are the impacts of AI-powered micro-targeting techniques, which combine big data with psychometrics and psychographics, on the fairness of electoral campaigns and on voter behaviour, as well as on broader participatory and democratic processes.

Are algorithms the new decision makers?

What information is made available to users on their social network newsfeed? On what basis is a person’s risk profile determined, and which profiles offer the best chances of obtaining, for instance, health insurance or employment, or lead to someone being regarded as a potential criminal or terrorist?

Automated data processing techniques, such as algorithms, not only enable Internet users to seek and access information; they are also increasingly used in decision-making processes that were previously entirely the remit of human beings. Algorithms can be used to prepare human decisions or to take decisions directly. In fact, the boundaries between human and automated decision-making are often blurred, resulting in the notion of ‘quasi-’ or ‘semi-automated’ decision-making.

The use of algorithms raises considerable challenges not only for the specific policy area in which they are operated, but also for society as a whole. How can we safeguard human rights and human dignity in the face of rapidly changing technologies? The right to life, the right to a fair trial and the presumption of innocence, the right to privacy and freedom of expression, workers’ rights, the right to free elections, even the rule of law itself are all impacted. Responding to challenges associated with ‘algorithms’ used by the public and private sector, in particular by Internet platforms, is currently one of the most hotly debated questions.

Ethical questions for automated decision-making

There is an increasing perception that the Internet and its mechanisms are evil, as human beings feel that they have no control over and do not understand the technical systems that surround them. While the lack of control and understanding may be disconcerting at first glance, it is not always negative but rather a by-product of this phase of modern life in which globalized economic and technological developments produce large numbers of software-driven technical artefacts. Which split-second choices should a software-driven vehicle make if it knows it is going to crash? Is racial, ethnic, or gender bias more likely or less likely in an automated system? Are societal inequalities replicated through automated data processing techniques, are they minimized, or even amplified? 

Historically, private companies decided how to develop software in line with the economic, legal, and ethical frameworks they deemed appropriate. While there are emerging frameworks for the development of systems and processes that lead to algorithmic decision-making or for the implementation thereof, they are still at an early stage and usually do not explicitly address human rights concerns. Moreover, it is unclear whether a normative framework regarding the use of algorithms or an effective regulation of automated data processing techniques is even feasible, as many technologies based on algorithms are still in their infancy and a greater understanding of their societal implications is needed.

The issues arising from the use of algorithms as part of the decision-making process are manifold and complex. At the same time, the debate about algorithms and their possible consequences for individuals, groups, and societies is at an early stage. This should not, however, prevent efforts towards understanding what algorithms actually do, what societal consequences flow from them, and how possible human rights concerns could be addressed.

Algorithms and human rights

A study from the Council of Europe identifies a number of human rights concerns triggered by the increasing role of algorithms in decision-making. Depending on the types of functions performed by algorithms and the level of abstraction and complexity of the automated processing that is used, their impact on the exercise of human rights will vary. Who is responsible when human rights are infringed based on algorithmically-prepared decisions? The person who programmed the algorithm, the operator of the algorithm, or the human being who implemented the decision? Is there a difference between such a decision and a human-made decision? What effects does it have on the way in which human rights are exercised and guaranteed in accordance with well-established human rights standards, including rule of law principles and judiciary processes?

Challenges related to the human rights impact of algorithms and automated data processing techniques are bound to grow, as related systems are becoming increasingly complex and are interacting with each other’s outputs in ways that are becoming progressively impenetrable to the human mind. There are many more themes which will require more detailed research to more systematically assess their challenges and potential from a human rights point of view, including questions related to big data processing, machine learning, artificial intelligence, and the Internet of Things.

But the Council of Europe has not simply identified AI as a subject deserving its closest attention. Most of the Council of Europe committees, intergovernmental and expert bodies, and monitoring structures are exploring the implications of AI for their respective areas of work, such as privacy and data protection, freedom of expression, freedom of thought, conscience and religion, the right to a fair trial, and the prevention of discrimination.

There is also an international organisation, AlgorithmWatch, working in the area of algorithmic decision-making. Its mission statement reads:

“The more technology develops, the more complex it becomes. AlgorithmWatch believes that complexity must not mean incomprehensibility.

AlgorithmWatch is a non-profit research and advocacy organisation to evaluate and shed light on algorithmic decision-making processes that have a social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.”

As it is mostly private sector actors that have in-depth knowledge and skills in the field of digital technologies, the Council of Europe is keen to cooperate with and bring together expertise from different stakeholders. A high-level conference will be organised by the Finnish Chairmanship of the Council of Europe Committee of Ministers and the Council of Europe in February 2019 in Helsinki, Finland. The objective is to engage stakeholders in a critical conversation about the challenges and opportunities that AI carries for individuals, for societies, and for the viability of our legal and institutional frameworks. Organised along the three main pillars that represent the Council of Europe’s core values (human rights, democracy, and the rule of law), the panel discussions will explore the individual rights implications of the use of AI, its societal aspects, and the concerns it raises vis-à-vis regulatory and judicial systems.


Note: This article draws on various public papers and studies by the Council of Europe, where Michael Rotert has been an observer and participant in many committees for over 10 years.


Michael Rotert is a pioneer and veteran of the Internet industry – and was the first person on German soil to receive an email. Amongst other posts, his stellar career spans Technical Head of the Data Center of the Informatics Department at the University of Karlsruhe, Founder and Managing Director of the ISP Xlink (later KPNQwest), and managing directorships of various service providers. After stepping down from his long-standing service as Chair of eco – Association of the Internet Industry in 2017, he became Honorary President of the eco Association in the same year. Rotert is also a founding member of the Internet Society, DE-NIC, and other Internet bodies, and contributes his industry expertise through membership of numerous committees and advisory councils.