This Signal post series presents brief overviews of topical issues in the fields of AI and ethics. Each post is based on the continuous horizon-scanning work of the Academy of Finland SRC project ETAIROS. The aim is to identify and integrate weak signals in the area, and to analyze and discuss their importance, possible future developments and impacts.
Signal post, April 2020
In this Signal post, we discuss signals of change relating to AI, ethics and the handling of the novel coronavirus (COVID-19) outbreak.
Many initiatives globally aim to tackle the crisis by applying AI technologies while striving to strike the right balance between public health and civil rights. AI initiatives have been launched to support the research community in understanding the specifics of the novel coronavirus and in developing treatments. The AI startup BlueDot detected a cluster of unusual pneumonia cases in Wuhan in late December 2019 and accurately predicted where the virus might spread. Robots have been reducing human interaction by disinfecting hospital rooms and common spaces, moving food and supplies, and delivering telehealth consultations. AI is being used to track and map the spread of infection in real time, diagnose infections, predict mortality risk, and more, and the potential for future innovations cannot be overlooked.
In a recent overview of AI technologies and the control of the COVID-19 coronavirus, carried out by the secretariat of the Council of Europe's Ad hoc Committee on Artificial Intelligence (CAHAI), a wide range of AI activities is described. AI has been used to predict the spread of the infection and ensure public safety, to search for a cure in the pharmaceutical industry and medical research, to drive knowledge sharing, to observe and predict the evolution of the pandemic, and to assist healthcare personnel; evaluating its use in the aftermath of the crisis has also been among the concerns. On 26 April 2020 the Australian Government launched the COVIDSafe app: a voluntary contact-tracing app, based on the one used in Singapore, which uses a Bluetooth wireless signal and "speeds up contacting people exposed to coronavirus (COVID-19)". Information security and privacy protection are highlighted in the description of the app, and a law firm was selected to carry out a privacy impact assessment.
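To make the mechanism behind such apps concrete, the following is a minimal, hypothetical sketch of Bluetooth-based contact tracing. It does not reproduce the actual COVIDSafe/BlueTrace protocol; it only illustrates the general idea behind decentralised designs (such as DP-3T), in which phones broadcast rotating random identifiers and exposure matching happens on the device itself. All class and method names here are illustrative assumptions, not any real app's API.

```python
import secrets
import time


class Phone:
    """Hypothetical sketch of a contact-tracing client.

    Illustrates the decentralised idea: phones exchange rotating
    random IDs over Bluetooth, and exposure checks run on-device.
    """

    def __init__(self):
        self.my_ids = []   # ephemeral IDs this phone has broadcast
        self.heard = []    # (ephemeral_id, timestamp) overheard nearby

    def rotate_id(self):
        """Generate a fresh random ephemeral ID (rotated e.g. every 15 min)."""
        eid = secrets.token_hex(16)
        self.my_ids.append(eid)
        return eid

    def receive(self, eid):
        """Record an ID overheard over Bluetooth from a nearby phone."""
        self.heard.append((eid, time.time()))

    def ids_to_publish(self):
        """On a positive diagnosis, the user consents to publish their IDs."""
        return set(self.my_ids)

    def check_exposure(self, published_ids):
        """Match published IDs of infected users against the local log.

        Matching happens on-device, so the encounter log never
        leaves the phone.
        """
        return [(eid, ts) for eid, ts in self.heard if eid in published_ids]


# Two phones pass each other and exchange ephemeral IDs.
alice, bob = Phone(), Phone()
bob.receive(alice.rotate_id())
alice.receive(bob.rotate_id())

# Alice tests positive and publishes her ephemeral IDs.
published = alice.ids_to_publish()

# Bob's phone detects the contact locally; Alice's log has no match,
# since she never overheard her own IDs.
assert len(bob.check_exposure(published)) == 1
assert len(alice.check_exposure(published)) == 0
```

Because only random, rotating identifiers are exchanged, no phone learns another user's identity; this design choice is precisely where the privacy trade-offs discussed below are negotiated.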
What if crisis-triggered, AI-powered tracking apps become knitted into the normal fabric of society?
In a recent blog post in MIT Technology Review, Karen Hao reminded us that in February the EU outlined its new AI and data governance strategy, which emphasized the importance of protecting data privacy and facilitating trustworthy AI development. The strategy called for European AI to be trained only on European data to ensure its quality and ethical sourcing. However, ethics is context dependent: as the global situation changes rapidly due to the coronavirus pandemic, does this new context and its new priorities – understanding the virus and the patterns of infection, protecting the public and medical personnel, and simply saving human lives – call for rethinking the AI and data governance strategy?
On 8 April 2020 the European Commission issued a recommendation on a "common Union toolbox for the use of technology and data to combat and exit from the COVID-19 crisis, in particular concerning mobile applications and the use of anonymized mobility data". Chapter 10/3, p. 9, indicates that "once the processing is no longer strictly necessary, the processing is effectively terminated and the personal data concerned are irreversibly destroyed, unless, on the advice of ethics boards and data protection authorities, their scientific value in serving the public interest outweighs the impact on the rights concerned, subject to appropriate safeguards." This raises an important question for the future: under what conditions could the value of serving the public interest outweigh the impact on civil rights?
What if it becomes acceptable to loosen or even set aside AI ethics?
Even if AI ethics guidelines are not yet fully in place, they are there to support a way of thinking and to steer decision-making. As questions of public health and safety become paramount, what happens to questions of human dignity, privacy and rights? Who decides whether a trade-off between data privacy and public health is a meaningful one, and on what basis will such decisions be made? This poses questions of governance for the good of society: what does this look like in the long run, and what might the consequences be if "temporary" and "voluntary" emergency tracking apps become the new normal? In a recent article, Yuval Noah Harari warned that temporary measures of control and mass monitoring of the population by technology should not become permanent. In another recent article, the Guardian collected expert opinions that growth in surveillance may be hard to scale back after the pandemic. The concern is that these tracking apps could save human lives but also cause discrimination and compromise civil liberties. The discussion about contact-tracing apps poses fundamental questions about the value of human life – and, moreover, of a meaningful human life – and the value of democracy.
The role of governance in striking the right balance between public health and civil rights
In a new report, issued in April 2020, the United Nations (UN) warned that the coronavirus pandemic is becoming a human rights crisis. The UN highlighted that an adequate response to, and sustainable recovery from, this unprecedented public health crisis are closely tied to respecting human rights, preserving human dignity, paying attention to who is suffering most, and taking action to ease the pain. The pandemic situation is changing quickly. A signal came from Singapore, which had reportedly lost control of its COVID-19 outbreak, although the country had recently been praised by the World Health Organization (WHO) as a model for coronavirus response. The number of COVID-19 cases unexpectedly increased more than two and a half times in a single week, due largely to a "cognitive blindspot": as it turned out, local officials had underestimated the vulnerability of thousands of migrant workers, who live in cramped dormitories with up to 20 people to a room and share common facilities such as bathrooms and kitchens.
The COVID-19 pandemic requires social distancing and isolation for a period of several months, at least. On the one hand, this is long enough to make people change their habits, and thereby some values may change as well. On the other hand, in some communities the recommendation for "social distancing" may be hard to follow: having more living space and a garden can be a matter of privilege, and the recommendation may contradict established subcultural norms – for example, not seeing elderly relatives may be perceived as rude. Where social distancing is impossible and disinfection is needed, AI disinfection robots can be useful. Everything possible is now distanced and digitalized, implying the use of AI in most situations, which has benefits but also raises issues. Discussing the ethical issues of this new global situation therefore becomes most urgent and important, to avoid 'throwing the baby out with the bathwater'.
In ETAIROS we are studying the future trajectories of AI technologies and their potential implications for ethics and governance; we are interested in raising awareness about "ethical blindspots". Governance mechanisms are needed to steer coordination and to support public authorities, NGOs and businesses when planning future actions related to ethical AI.
Writers: Nadezhda Gotcheva and Raija Koivisto (VTT)
Selected sources and further reading: