Death by algorithm?

Lethal autonomous weapons are the subject of an intense debate in Geneva within the process of the UN Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention. The process started in 2013, when the diplomatic meeting of CCW States Parties decided to convene an informal Meeting of Experts to discuss questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS).

Several years later, the future of both the laws of war and, to a significant degree, the rules applicable to AI use in armed conflict, law enforcement and other security domains remains opaque. The process appears trapped in an ever-circling loop within the CCW.

Despite the glacial pace of the process, some limited progress has been made. In 2018 and 2019, the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS) identified 11 ‘guiding principles’ for LAWS use. The Group also noted that the challenges were profound. LAWS destabilize many of the key tenets of international humanitarian law (IHL), and it has been stressed that international law, in particular the United Nations Charter and IHL, as well as ethics, should guide all future work. Meetings were scheduled for 2020 and 2021 to ‘explore and agree on possible recommendations on options related to emerging technologies in the area of lethal autonomous weapons systems’. The goal is to arrive at ‘consensus recommendations in relation to the clarification, consideration and development of aspects of the normative and operational framework on emerging technologies in the area of lethal autonomous weapons systems’. However, the global Coronavirus (COVID-19) pandemic has disrupted the meeting schedule in Geneva, and it is unclear how or when the work of the GGE on LAWS will resume.

Whatever the outcome of the process, one theme underlies the entire discussion. LAWS use entails an inherent ‘humanity deficit’. In autonomous weapons systems, decision-making that may end in the taking of human life may be transferred to machines. As compellingly articulated by Christof Heyns at the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons, the risk is that ‘a human being in the sights of a fully autonomous machine is reduced to being an object – being merely a target. This is death by algorithm’.

Here, one argumentation pattern connected to the ‘humanity deficit’ is worth exploring. In the discussions, some commentators have argued that LAWS might in fact make future battlefields more ‘humane’. A machine, it is argued, could be relied upon to comply with legal rules because it is not impeded by human frailties, failings and emotions, such as panic, anger, fatigue, stress, hunger, blood-thirst, and so on. Similarly, commentators argue that autonomous weapons could manage vast amounts of information and execute combat missions with ‘surgical precision’, reducing collateral damage in military operations. However, others have convincingly argued that machines will likely betray these hopes, for two reasons. First, they are incapable of true or genuine compassion, mercy, regret, pity, empathy and the other ethical dictates of humanity. As such, machines can neither understand nor be moved by the essential elements of the principle of human dignity that underpin many legal rules. After all, artificial intelligence is artificial. Second, the informational advantages may prove useless. Information is not the same as knowledge, data is not the same as understanding, and precision is meaningless without sufficient knowledge and understanding.

The debate seems partly misguided. Focusing on whether or not autonomous weapons could comply with international humanitarian law might mislead, or even warp, the legal dimension of the LAWS discussion. Instead of engaging in this debate, we should perhaps attempt to devise a novel approach. In contrast to Schmitt and Thurnher, who argue that human rules are enough and that ‘as a matter of law, more may not be asked of autonomous weapon systems than of human-operated systems’, it seems worth asking whether LAWS need their own rules, potentially more stringent than those applicable to humans. Because of their ‘humanity deficit’, we might even ask whether non-living machines, intrinsically incapable of comprehending the notion of ‘life’, should be legally allowed to preside over ethical choices involving human life or death. Hence, the most urgent question may not be the technical question of how we could regulate LAWS, but rather the ethical question of whether we should, may or shall allow the use of LAWS.

These last questions are not, however, visibly on the table in Geneva. The process is committed to producing the outlines of a future ‘normative and operational framework’ on LAWS, potentially allowing ‘death by algorithm’. Returning to the thought-provoking argument articulated by Christof Heyns at the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons: ‘Machines cannot fathom the importance of life, and the significance of the threshold that is crossed when life is taken.’ If so, should we ever allow them to make those choices?


This blog post was written by Dr. Johanna Friman, an expert in international law at the University of Turku.