Rangita de Silva de Alwis (U Pennsylvania Carey Law) has posted "'Because We Take Our Values to War': Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems" (Chicago Journal of International Law, forthcoming) on SSRN. Here is the abstract:
Paragraph 2 of UN General Assembly Resolution 78/241 requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the use of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.
In implementation of this directive, on 1 February 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to the resolution and inviting their formal input. This paper analyzes, for the first time, the positions of Member States on AI-driven LAWS. Using a qualitative coding matrix, the paper examines Member States' positions in relation to human-centric approaches to AI-driven LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems' compliance with the laws of war should not only be whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but whether they can be free of data, algorithmic, and programmer bias. Although much has been written about algorithmic bias, an "algorithmic divide" can create an AI-driven weapons asymmetry between nation-states depending on who has access to AI.
The article raises the question of whether Yale Law's Oona Hathaway's recent arguments on individual and state responsibility for patterns of "mistakes" in war may also apply to patterns of bias in AI-driven LAWS. In current and future conflicts, machines do and will continue to make life-and-death decisions without human decision-making. Who will then be responsible for the "mistakes" in war?
In 2017 testimony before the US Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated, "... because we take our values to war ... I do not think it is reasonable for us to put robots in charge of whether or not we take a human life." The laws of war are rapidly approaching a critical crossroads in war's relationship with technology.
