ETAIROS Signal Post, February 2023
Nina Wessberg, Nadezhda Gotcheva and Antero Karvonen
ChatGPT was launched on 30 November 2022 by OpenAI, a San Francisco-based company. The latest estimates suggest that ChatGPT has already reached 100 million users globally. The new product has sparked lively societal discussion about the ethics of using AI tools, especially in education. ChatGPT is capable of providing (seemingly) comprehensive answers to questions posed by the user, such that the answers may be used directly as they are, for example for school homework or university essays.
“Artificial intelligence more skilled than Google can put educational institutions in trouble” was a headline in the Finnish news in January 2023. The fear is that students may increasingly use the AI chatbot to generate output for school tasks without first making the effort to think for themselves. If using such an AI tool to help with homework quickly becomes a shared norm among school children and young people in general, it might be challenging to assess and prevent potential long-term social harms and unintended consequences, especially from the perspective of education.
At universities these fears about the new AI tools are not uniformly shared. For instance, the University of Jyväskylä School of Economics allows the use of artificial intelligence in studies. At the Lapland University of Applied Sciences, ChatGPT is viewed as a new tool that could be used inappropriately, but no such cases have appeared yet. The University of Tampere has followed Jyväskylä's example and allows the use of language models in student work. However, if ChatGPT or a similar tool is used, its use should be mentioned.
The Jyväskylä School of Economics allows the use of AI systems in studies so that students can be guided to use this developing technology correctly and responsibly. This may be a wise direction, since we cannot prevent people from using these tools. The best way forward is to teach people to use them ethically and critically, and to be aware of the risks and potential consequences.
Another viewpoint on the fears and hopes surrounding new chatbot AI systems concerns the way these tools are developed. YLE news reported in late January 2023 that the company behind ChatGPT had its subcontractor's employees read texts describing violence, child sexual abuse, and abuse of animals for less than two dollars an hour. The AI needs to be trained on all kinds of content, but surely there should be ethical rules and practices governing how this is done. Users, for their part, are actively searching for ways to “jailbreak” ChatGPT and bypass its programmed restrictions on violence, illegal activities and racist talk.
The competition in this field is heating up. On 6 February 2023 Google announced that it was testing its own chatbot technology, Bard AI. On 7 February 2023 Microsoft launched ChatGPT technology as part of its search engine Bing, and released a new version of its Edge browser integrating these features. Ensuring the safety and reliability of such tools at the design stage is of utmost importance, as is wise use: awareness and skills are needed to recognize when AI chatbot systems can be made useful and when their output should be ignored.
Currently, there seems to be massive experimentation with these tools and sharing of experiences on social media. It is timely to have a broad societal discussion about the potential consequences of minimal human input: what if it becomes a cultural norm that minimum human effort is enough, and the rest can be outsourced to AI chatbots? Could readily available answers from AI tools lead to a creativity crisis and weaken humanity's capability to solve complex problems? Or could the widespread use of AI chatbots improve the ability of humans to ask good questions, and in turn, how might this affect the future development of generative AI? Often, it is not about the answers but the questions we ask. Furthermore, asking questions in natural language can take HCI (human-computer interaction) in novel and more accessible directions. Nevertheless, it remains to be seen whether the final form of this technology will be the question, or the prompt.