AI ethics and business models: Aligned or off balance?
Signal post, February-March 2021.
Balance between questions and answers concerning ethics and AI technologies
In a recent article, entitled “Google might ask questions about AI ethics, but it doesn’t want answers”, the Guardian journalist John Naughton addressed the imbalance between questions and answers concerning ethics and AI technologies. The article reported that “the departure of two members of the tech firm’s ethical artificial intelligence team exposes the conflict at the heart of its business model”. Consequently, Google’s ethical AI team was described by Business Insider as being “headless”. These news stories highlight important questions about ethical leadership, responsibilities and power dynamics regarding both individual experts and global tech companies. Are we doing enough, as a society, to develop meaningful and transparent conversation on future-related topics? Increasingly, philosophers, humanists and social scientists are asking difficult questions about ethics and highlighting different ethical dilemmas. However, have we seen enough potential answers or plausible solutions to these ethical questions from the companies developing, utilizing and benefitting from AI?
Last year, the World Economic Forum (WEF) emphasized the role of global tech companies in championing ethical AI. However, there are signals that global tech companies might be more interested in “ethics washing” (Bietti, 2020) than in creating solutions to ethical problems. The danger is that some companies create a “shiny cover” of ethics and sustainability in order to feign responsible behaviour without actually fixing their existing, often unethical business models. In order to avoid this scenario, we will need a better balance between questions and answers regarding technology development and ethics. This requires active dialogue and meaningful conversation between different societal actors.
Dialogue between AI developers and policymakers: building ‘ethics literacy’ and becoming ‘AI tech savvy’
In a new study, the WEF reveals a huge ‘information gap’ between AI creators and policymakers. In essence, the report calls for AI developers to build ‘ethics literacy’ in order to better understand the complexity of human social systems, the potential negative impacts of AI systems on society, and the importance of embedding ethics in AI designs from the beginning. On the other hand, policymakers need to become more ‘AI tech savvy’ and develop better knowledge of the technical side of AI. The authors of the report call for a “comprehensive and collectively shared understanding of AI’s development and deployment cycle”. In the future, we will need better AI governance, achieved by embracing continuous multi-stakeholder dialogue and by utilising interdisciplinary methodologies and skills. This is exactly what the ETAIROS project is tackling right now: creating models through which public authorities and businesses can cooperate and find ways to create ethical and sustainable solutions.
What does it take for businesses to champion ethical AI?
In addition to substantial time and resources, championing ethical AI requires the inclusion of ethical issues in corporate management systems. How can this actually be achieved? Companies have quality management systems, environmental management systems and corporate social responsibility systems, but these do not necessarily include ethics. We are currently experiencing an emerging trend of applying responsibility and ethics to business. This phenomenon is not new: a previous business ethics “boom” took place in the early 1990s. At that time, ethics was found to be a difficult subject to implement in practice. In practice, it might for instance include putting customer needs first, being transparent, providing channels for reporting unethical behaviour, and avoiding conflicts of interest.
The aforementioned ethical practices are also applicable to AI development and its application to business and production processes. Ethical AI should be developed transparently for customers. Furthermore, in the production process it is ethical to provide channels for reporting unethical behaviour and to try to avoid conflicts of interest.
The EU’s notion of trustworthy AI includes three components, which should be met throughout a system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, from both a technical and a social perspective, since even with good intentions AI systems can cause unintentional harm. Moreover, ethical principles should include respect for human autonomy, prevention of harm, fairness and explicability, equity, and risk management.
Would these principles for trustworthy AI define the questions that we are waiting to be answered, also in relation to Google’s AI ethics issues? Would it be too complicated to respect human autonomy, prevent harm, manage risks, and be fair, open and equal? Yes, it is complicated. These are huge and complex questions. This is probably why it is perceived that Google doesn’t want answers. Although it can be hard to find answers, it is of utmost importance to continue raising questions, discussing ethics, and taking action. Ethics is a difficult balancing act.
The writers: Nadezhda Gotcheva, Nina Wessberg, and Santtu Lehtinen (VTT)