Taddeo & Blanchard on Ethical Principles for Artificial Intelligence in National Defense

Mariarosaria Taddeo (Oxford Internet Institute) and Alexander Blanchard (The Alan Turing Institute) have posted “Ethical Principles for Artificial Intelligence in National Defence” (Philosophy & Technology) on SSRN. Here is the abstract:

Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles (justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; and reliable AI systems) and related recommendations to foster ethically sound uses of AI for national defence purposes.

Green on The Flaws of Policies Requiring Human Oversight of Government Algorithms

Ben Green (University of Michigan at Ann Arbor) has posted “The Flaws of Policies Requiring Human Oversight of Government Algorithms” on SSRN. Here is the abstract:

Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to oversee algorithmic decision-making. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more stringent approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, agencies or vendors must conduct preliminary evaluations of whether people can effectively oversee the algorithm.

Watson et al. on Local Explanations Via Necessity and Sufficiency: Unifying Theory and Practice

David Watson (University College London) et al. have posted “Local Explanations Via Necessity and Sufficiency: Unifying Theory and Practice” on SSRN. Here is the abstract:

Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors with respect to a given context, and demonstrate its flexibility and competitive performance against state-of-the-art alternatives on various tasks.
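
For readers unfamiliar with the formal background the abstract gestures at, the standard counterfactual treatment of these notions in the causality literature is Pearl's probabilities of necessity and sufficiency. The sketch below states them for a binary cause X and binary outcome Y; this is offered only as likely background, and the paper's own formalization of explanatory factors may differ.

\[
\mathrm{PN} = P\bigl(Y_{X=0} = 0 \mid X = 1,\, Y = 1\bigr), \qquad
\mathrm{PS} = P\bigl(Y_{X=1} = 1 \mid X = 0,\, Y = 0\bigr).
\]

Informally, PN asks how likely the outcome would have failed to occur had the cause been absent, given that both in fact occurred; PS asks how likely the outcome would have occurred had the cause been present, given that both were in fact absent.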