Signal post, October 2020.
At the end of September 2020, VTT organized a Futures workshop with members of the ETAIROS Scientific Advisory Board to discuss the future of AI ethics. We invited eminent academics from the UK, Spain, Sweden, Japan, Finland and the Netherlands, among them Dr. Kentaro Watanabe, Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology (AIST); Professor José J. Cañas, Cognitive Ergonomics Group, University of Granada; Professor Gilbert Cockton, School of Computing, University of Sunderland, UK; Dr. Tony Kinder, University of Edinburgh; Professor Emeritus Göran Collste, Department of Culture and Society, Linköping University, Sweden; Associate Professor Peter Thanisch, Tampere University, Finland. In addition to the high-level advisory board, ETAIROS project researchers and experts participated in the lively discussions and group work.
The ETAIROS Futures workshop applied the scenario method as a basis for the discussions. Scenarios are stories or narratives set in the future that explore how the world would change if certain trends were to strengthen or diminish. In our case, the main drivers of the scenarios were conceptualized as tensions between the dynamics of cooperation and competition, and between two versions of AI development: performance-oriented AI and human-centric AI. These drivers and trends were identified through foresight methods, such as horizon scanning.
The question posed to the participants through the scenarios was the following: if AI develops in a certain direction, as manifested in four different scenarios, what would the ethical issues and potential societal impacts be in these future worlds in 2030?
Operationalizing “fairness” for pandemic vaccine rationing
To stimulate the workshop discussions, Associate Professor Peter Thanisch gave a presentation on the development of pandemic vaccine distribution. His presentation discussed potential ways in which the planned vaccines could be distributed to certain developing countries by the World Health Organization (WHO). The presentation served as a practical example of the broader ethical issues that algorithmic decision-making systems raise for society and public decision-making.
More specifically, the WHO vaccine allocation committee determines countries' shares of vaccines on the basis of population size, demographic profile and estimated vulnerability. The practical problem is to find an allocation that is an acceptable trade-off between the utility and the fairness of the vaccination. The main challenge in distributing the vaccine is that there is no single adequate metric for fairness.
The presentation identified multiple potential ways of defining fairness through different rationing processes and algorithms. However, even though fairness can be expressed as an optimal resolution of chosen standards and guiding principles, the principles and standards themselves represent certain values. This inevitably involves choices between different values and therefore dimensions of power.
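The tension between competing fairness definitions can be made concrete with a small sketch. The country figures, vulnerability scores and weighting rules below are invented for illustration and are not WHO methodology: the same supply of doses is divided first in proportion to population alone, then in proportion to population weighted by vulnerability, and the two equally defensible rules yield different allocations.

```python
# Hypothetical sketch: two rationing rules for a fixed vaccine supply.
# All figures are illustrative, not real WHO data.

def allocate(supply, weights):
    """Split `supply` doses in proportion to each country's weight."""
    total = sum(weights.values())
    return {c: supply * w / total for c, w in weights.items()}

countries = {
    # country: (population in millions, vulnerability score 0..1)
    "A": (50, 0.9),
    "B": (120, 0.4),
    "C": (30, 0.7),
}

supply = 10_000_000  # doses available

# Rule 1: fairness as equal per-capita coverage.
by_population = allocate(supply, {c: pop for c, (pop, _) in countries.items()})

# Rule 2: fairness as need-weighted coverage (population x vulnerability).
by_need = allocate(supply, {c: pop * v for c, (pop, v) in countries.items()})

for c in countries:
    print(c, round(by_population[c]), round(by_need[c]))
```

Under rule 1 the highly vulnerable country A receives 2.5 million doses; under rule 2 it receives roughly 3.9 million. Neither rule is wrong; choosing between them is a value judgment, which is exactly the point made above.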
Discussing the future of algorithmic decision-making
The workshop discussion that followed the presentation took note of the changing ways in which we define algorithms. It has become commonplace to use "algorithm" as a pejorative shorthand for decision-making that is opaque and beyond the understanding of many citizens. So-called black box decision-making algorithms in particular can create a sense of alienation, a loss of human control over technological development.
Accordingly, the discussions reflected growing concerns about the way in which black box decision-making algorithms can undermine the legitimacy of public decision-making and broader societal solidarity. A particular concern was the increasing gap between the understanding of AI by its developers and the understanding of AI in the society. If this gap between technology and society grows, the potential for societal alienation increases.
As a result, the development of AI systems was seen as a political question, dealing with questions of power and legitimacy. The concern is that the application of AI systems can further increase global asymmetries in power and wealth, thereby deepening existing inequalities. Accordingly, the issue of AI development brought questions of trust between citizens and governments, as well as trust between lay people and experts, to the forefront as a crucial societal challenge.
Reflecting these concerns, the explainability of AI systems was identified as a key source of trust in algorithmic decision-making. Explainability can be seen as the ability to identify the key factors that contribute to an algorithm's decision. One practical suggestion for developing the accountability of algorithms was to create citizen panels that would review algorithms. However, creating capacities for citizens to understand and analyze algorithmic decision-making and AI systems is far from simple. Indeed, from a societal point of view, analyzing the technical design details of an algorithm is less important than assessing what the algorithm actually does and what kinds of impacts it has on society.
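A minimal sketch of what "identifying the key factors" can mean in practice: for a simple linear scoring model, each feature's contribution to a single decision can be computed and ranked. The feature names and weights below are invented for illustration and do not describe any real system.

```python
# Hypothetical sketch: per-decision explanation for a linear scoring model.
# Feature names and weights are invented for illustration.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Rank features by the magnitude of their contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 4.0, "debt": 3.0, "years_employed": 2.0}
print(score(applicant))    # 0.2 (within floating-point rounding)
print(explain(applicant))  # debt is the dominant factor in this decision
```

Even this toy example shows the societal point made above: the ranked contributions say what drove the decision, which is more useful to a citizen panel than the model's internal design details. For genuinely black box models, analogous factor rankings require dedicated techniques rather than a direct read-off of weights.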
Another problem with trying to increase transparency and openness in algorithms is that companies often view their algorithms as trade secrets and as a competitive advantage that need not be publicly disclosed. This issue of trade secrets relates to the global market competition between the various AI systems being developed around the world. Such competition can produce efficient and accurate systems that distribute various benefits to societies, but it can also lead to a race in which the winner takes all.
As a result, the development of AI should not rest solely in the hands of a few private companies or nations. On the contrary, because of its far-reaching societal implications and impacts, the development of AI needs public accountability and legitimacy. From a legal perspective, accountability of algorithms is important because it can increase the acceptance of algorithmic decision-making systems. However, accountability alone does not answer the question of whether algorithmic decision-making is desirable in society. Rather, the question of the broader responsibility of algorithmic decision-making systems is increasingly relevant in AI ethics.
The idea of responsibility in algorithmic decision-making raised many challenging questions in the workshop, such as: how can capacities be created for citizens to understand algorithmic decision-making systems? How can understanding and awareness of ethics be fostered among AI developers and businesses? How can an increasingly diverse population be taken into account in AI development? In the case of companies, it is important to draw attention to how ethics are implemented in practice in their operations. The discussion emphasized that ethics in the AI business is not primarily about competitive advantage. Indeed, the implementation of ethical principles often does not materialize as an advantage, at least in the short term, because it involves principles that might limit or curtail the development of AI systems.
The Futures workshop participants agreed that values such as solidarity can indeed alleviate the problems and asymmetries created through the implementation of AI systems. In practice, this could mean creating new kinds of social contracts that take the impacts of algorithmic decision-making into account. Furthermore, the development and application of AI systems requires increased awareness of interdependencies and tensions between actors and societies on a global scale. The responsibility to manage and solve ethical issues around AI cannot be placed on the shoulders of individuals. Implementing more ethical AI requires collective effort and cooperation, along with an ongoing, diverse discussion on algorithmic decision-making.
The writers: Santtu Lehtinen, Nina Wessberg and Nadezhda Gotcheva (VTT)