Burr & Leslie on Ethical Assurance: A Practical Approach to the Responsible Design, Development, and Deployment of Data-Driven Technologies

Christopher Burr (University of Oxford – Oxford Internet Institute; The Alan Turing Institute) and David Leslie (The Alan Turing Institute) have posted “Ethical Assurance: A Practical Approach to the Responsible Design, Development, and Deployment of Data-Driven Technologies” on SSRN. Here is the abstract:

This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic assessment, which are used to operationalise normative principles, such as sustainability, accountability, transparency, fairness, and explainability, in order to identify limitations and gaps in current approaches. Second, it provides an accessible introduction to the methodology of argument-based assurance, and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method to incorporate wider ethical, social, and legal considerations, in turn establishing a novel version of argument-based assurance that we call ‘ethical assurance.’ Ethical assurance is presented as a structured means for unifying the myriad practical mechanisms that have been proposed, as it is built upon a process-based form of project governance that supports inclusive and participatory ethical deliberation while also remaining grounded in social and technical realities. Finally, it sets an agenda for ethical assurance by detailing current challenges, open questions, and next steps, which serve as a springboard for building an active (and interdisciplinary) research programme as well as contributing to ongoing discussions in policy and governance.

Kim on AI and Inequality

Pauline Kim (Washington University in St. Louis – School of Law) has posted “AI and Inequality” (Forthcoming in The Cambridge Handbook on Artificial Intelligence & the Law, Kristin Johnson & Carla Reyes, eds. (2022)) on SSRN. Here is the abstract:

This Chapter examines the social consequences of artificial intelligence (AI) when it is used to make predictions about people in contexts like employment, housing, and criminal law enforcement. Observers have noted the potential for erroneous or arbitrary decisions about individuals; however, the growing use of predictive AI also threatens broader social harms. In particular, these technologies risk increasing inequality by reproducing or exacerbating the marginalization of historically disadvantaged groups, and by reinforcing power hierarchies that contribute to economic inequality. Using the employment context as the primary example, this Chapter explains how AI-powered tools that are used to recruit, hire, and promote workers can reflect race and gender biases, reproducing past patterns of discrimination and exclusion. It then explores how these tools also threaten to worsen class inequality because the choices made in building the models tend to reinforce the existing power hierarchy. This dynamic is visible in two distinct trends. First, firms are severing the employment relationship altogether, relying on AI to maintain control over workers and the value created by their labor without incurring the legal obligations owed to employees. Second, employers are using AI tools to increase scrutiny of and control over employees within the firm. Well-established law prohibiting discrimination provides some leverage for addressing biased algorithms, although uncertainty remains over precisely how these doctrines will be applied. At the same time, U.S. law is far less concerned with power imbalances and is thus more limited in responding to the risk that predictive AI will contribute to economic inequality. Workers currently have little voice in how algorithmic management tools are used, and firms face few constraints on further increasing their control. Addressing concerns about growing inequality will require broad legal reforms that clarify how anti-discrimination norms apply to predictive AI and strengthen employee voice in the workplace.

Sharkey on AI for Retrospective Review

Catherine M. Sharkey (NYU School of Law) has posted “AI for Retrospective Review” (8 Belmont Law Review 374 (2021)) on SSRN. Here is the abstract:

This article explores the significant administrative law issues that agencies will face as they devise and implement AI-enhanced strategies to identify rules that should be subject to retrospective review. Against the backdrop of a detailed examination of HHS’s “AI for Deregulation” pilot and the very first use of AI-driven technologies in a published federal rule, the article proposes enhanced public participation and notice-and-comment processes as necessary features of AI-driven retrospective review. It challenges conventional wisdom that divides uses of AI technologies into those that “support” agency action—and therefore do not implicate the APA’s directives—and those that “determine” agency actions and thus should be subject to the full panoply of APA demands. In so doing, it takes aim at the talismanic significance of “human in the loop” that shields AI uses from disclosure and review by casting them in a merely supportive role.