AI can enrich human life, but human life is often too rich for AI to understand

Principal Scientist, docent Jaana Leikas from our ETAIROS project lists ten common challenges of AI in a recent FCAI (Finnish Centre for Artificial Intelligence) blog post: redistribution of power, renewal of work tasks, increasing inequality, distorted data, mass surveillance, profiling and automated decision-making, a toxic information environment, disinformation and AI hallucinations, manipulation, filter bubbles, and, last but not least, threats to humanity.
Despite all these tremendous challenges, I would like to emphasize that AI has a great strength: given a good algorithm and enough high-quality data, it can generate information that we could not have imagined without it. In the best case, AI can therefore enrich our thinking and widen the possibilities of our decision-making, as well as make the management and use of information more efficient. AI could thus truly be a tool for deepening our understanding of human needs and solutions.
In its most radical mode, AI could act on a decision-making board or even lead a political party, which has been the mission of some Nordic art experiments. In these experiments, art groups have prompted AI to create a party programme and let it generate decisions on political challenges. In my vision, it would be fascinating to see an AI party run in elections and, in the best case, also sit in parliament and make decisions there. This could enrich the political field and increase inclusion and democracy, since more people could participate in political decision-making processes via mass data processed by AI.
Behind all this, however, lies the fact that AI requires a model of the system it is meant to tackle, together with quality data. Hence, the challenge is to simplify human needs into a model and then produce enough high-quality data from people, while at the same time ensuring their privacy and dignity, as well as transparency, trust, and fairness. From a human life perspective, this can be difficult. We saw this, for instance, in the Finnish Ministry of Finance’s AuroraAI project, which showed that people are not keen to use an AI application that touches on sensitive issues, such as concern for loved ones. By contrast, AI in the tax system works well, probably because it is straightforward and experienced as non-sensitive, touching mainly “simple” economic issues.
All in all, too much development proceeds ”technology-first”, which does not lead to optimal results, given the complexity of human life. Too little attention in AI development is paid to broad societal impacts. We would like to see more inclusiveness, ethical consideration, future-oriented perspectives, and responsibility indicators in AI design and development, already in the early phases.
We want to build resilience to overcome the challenges AI technology brings. We should, for instance, be capable of identifying the filter bubbles or manipulation that AI technology can cause. Considering the ethical issues of AI is therefore a key aspect and, in fact, the way to gain resilience. In ETAIROS we have developed tools for exactly this kind of ethical consideration. We want to emphasize that when AI is safe, transparent, and understandable, it supports our humanity. And when we understand AI’s capacity, our expectations of it will be realistic.
This blog post is based on a speech given at SRI, the 2024 Sustainability Research and Innovation Congress, in Helsinki on 13 June 2024, in the special session Transforming Society – How strategic research impacts thinking and practices.
Nina Wessberg, Principal Scientist, VTT
Photo: Katja Anokhina / Unsplash
