Signal post, August 2020.
In the Signal posts we discuss signals of change – concrete events, shared experiences and current developments – that give us hints of what might be possible in the future.
“The news on the TV blared in the background: ‘The controversial Band system enacted just two years ago in 2027 was designed to find the most capable…’ They started reeling out statistics about the number of Band 1 students, apparently on the rise this year, although hardly anyone from our school had managed it. ‘And now, reporting live from a local grammar school in Buckinghamshire, where a record 87 students have placed in Band 1 this morning. So…’” *
Jessica Johnson, 18, won the Orwell Youth Prize in 2019 for ‘A Band Apart’, a short story about an algorithm that sorted students into bands based on their social class. This dystopian fiction turned out to be a prescient warning. Professor Jean Seaton, director of the Orwell Foundation, commented that the teenager “saw into the heart of what the system represents and her story demonstrates the human ability which exams only exist to uncover”.
What happens when AI is actually used to set students’ grades? A recent case in the education sector sparked attention, controversy and public debate. Faced with the impossibility of organizing final graduation exams in person due to the COVID-19 pandemic, in 2020 the International Baccalaureate Organization (IBO), which operates in over 150 countries, decided to cancel the exams and use AI instead. An algorithm developed by a third party was used to set overall scores for almost 200 000 high-school graduates, based on the students’ past work and other historical data related to their school. However, the results produced by the exam-grading algorithm were largely perceived as surprising, if not shocking, by thousands of students and their parents. Are we aware of the possible harms and the deep emotional consequences of using automated decision-making systems?
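The IBO has not published the details of its model, so any reconstruction is speculative. Still, a purely hypothetical sketch can show how a system that blends a student’s own record with school-level history might behave; the function name, weights and formula below are illustrative assumptions, not the IBO’s actual method:

```python
def predict_grade(coursework: float, school_mean: float,
                  w_student: float = 0.7, w_school: float = 0.3) -> int:
    """Hypothetical sketch: blend a student's own coursework score with the
    historical mean grade of their school, on the IB 1-7 scale.
    The weights and the formula are illustrative assumptions only."""
    blended = w_student * coursework + w_school * school_mean
    # Clamp to the valid IB grade range after rounding.
    return max(1, min(7, round(blended)))

# A top student (7) at a school that historically averages 3 is pulled
# below their own performance by the school-history term:
print(predict_grade(7, 3))                               # 6 with these weights
print(predict_grade(7, 3, w_student=0.5, w_school=0.5))  # 5 if school history weighs more
```

Such a school-history term is exactly what fuels the fairness concern: two students with identical work can receive different grades depending on where they studied.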
Furthermore, the deployment of the algorithm was widely perceived as unethical and unjust. The lack of transparency in the AI assessment was disturbing: it was unclear how the grades were produced and how they could be appealed if they appeared anomalous. In practice, the affected groups self-organized to shape the appeal, and leading institutions and authorities reacted to the case. In Finland, a group of parents seeking justice for IB students wrote a letter to the Ministry of Education and Culture in August 2020. In the UK, the marks were investigated by England’s Office of Qualifications and Examinations Regulation (Ofqual). The Swiss Group of International Schools (SGIS) expressed its frustration to the IBO and requested a formal meeting with a representative of the IB Assessment Office to discuss the matter further. The Norwegian Data Protection Authority (DPA) sent the IBO an advance notification of an order to rectify IB grades; the DPA framed the case in terms of the GDPR’s fairness requirements, referring to the European Data Protection Board guidelines on fairness. On 17 August, the IBO provided an update on the May 2020 Diploma Programme and Career-related Programme results, announcing a results review and grade adjustments where applicable. This amendment made a difference for some students but not for all: the overall grade-awarding model remains fuzzy and the public debate continues.
In general, and especially now with pandemic-triggered safe-distancing requirements, algorithms are increasingly embedded in systems that support decision-making across many domains. Examining these developments, the European Parliamentary Research Service’s Scientific Foresight Unit published a report last year on understanding the opportunities and challenges of algorithmic decision-making. The degree of automation, and thus of human intervention, may vary, but in principle these systems rely on the analysis of large amounts of data to derive information deemed useful for making decisions. These developments pose an interesting question: is AI a technology or an actor? As a technology, AI today serves a certain purpose by fulfilling a specific task. However, an algorithm may also be seen as an actor, given its potential to hold and exercise dominating power in decision-making.
If an AI-enabled solution has the power to determine our future, this can have long-term consequences for individuals and society. Are we going to see more cases of algorithms determining critical paths in our human development? What are the mechanisms to control and manage such situations? What are the burning challenges to be solved? At the very least, the criteria and values behind algorithmic decision-making systems could be made visible and understandable, especially to independent experts, NGOs, evaluation bodies and data protection authorities (DPAs), so that they can audit and certify those systems for the good of the public and the relevant stakeholders. The pandemic put educational institutions, among others, under time pressure to adapt quickly and find solutions. Still, in which cases is it acceptable to use an algorithm, in which cases should its use be carefully considered or even forbidden, and what do the stakeholders need to know? This case also illustrates that technology is developed and deployed faster than the legal, regulatory, ethical and political governance mechanisms needed to coordinate efforts and to ensure the accountability of automated decision-support systems.
Our futures are not fixed; we are co-creating them with our choices today. We can make ethical choices to design and deploy algorithmic decision-making systems that do not set people “a band apart”.
* ‘A Band Apart’ by Jessica Johnson | Orwell Youth Prize Winner 2019 (Senior Prize)