Chatziathanasiou on ‘Hungry Judges’ Should Not Motivate the Use of ‘Artificial Intelligence’ in Law

Konstantin Chatziathanasiou (Institute for International and Comparative Public Law, University of Münster; MPI for Research on Collective Goods) has posted “Beware the Lure of Narratives: ‘Hungry Judges’ Should Not Motivate the Use of ‘Artificial Intelligence’ in Law” (German Law Journal, forthcoming) on SSRN. Here is the abstract:

The ‘hungry judge’ effect, as presented by a famous study, is a common point of reference to underline human bias in judicial decision-making. This is particularly pronounced in the literature on ‘artificial intelligence’ (AI) in law. Here, the effect is invoked to counter concerns about bias in automated decision-aids and to motivate their use. However, the validity of the ‘hungry judge’ effect is doubtful. In our context, this is problematic for at least two reasons. First, shaky evidence leads to a misconstruction of the problem that may warrant an AI intervention. Second, painting the justice system as worse than it actually is is a dangerous argumentative strategy, as it undermines institutional trust. Against this background, this article revisits the original ‘hungry judge’ study and argues that it cannot be relied on as an argument in the AI discourse or beyond. The case of ‘hungry judges’ demonstrates the lure of narratives, the dangers of ‘problem gerrymandering’, and ultimately the need for a careful reception of social science.

Mökander et al. on A Guide to the Role of Auditing in the Proposed European AI Regulation

Jakob Mökander (Oxford Internet Institute) et al. have posted “Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation” on SSRN. Here is the abstract:

The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.

Raizonville & Lambin on Algorithmic Explainability and Obfuscation under Regulatory Audits

Adrien Raizonville (Institut Polytechnique de Paris) and Xavier Lambin (ESSEC Business School) have posted “Algorithmic Explainability and Obfuscation under Regulatory Audits” on SSRN. Here is the abstract:

The best-performing and most popular algorithms are often the least explainable. In parallel, there is growing concern and evidence that sophisticated algorithms may engage, autonomously, in profit-maximizing but welfare-reducing strategies. Drawing on the literature on self-regulation, we model a regulator who seeks to encourage firms’ compliance with socially desirable strategies through the threat of (costly and imperfect) audits. Firms may invest in explainability to better understand their algorithms and reduce their cost of compliance. We find that, when audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator’s detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed.

Taeihagh on Governance of AI

Araz Taeihagh (NUS) has posted “Governance of Artificial Intelligence” (Policy and Society, 40:2) on SSRN. Here is the abstract:

The rapid developments in Artificial Intelligence (AI) and the intensification in the adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics, and the like pose serious challenges to governments as they must manage the scale and speed of the socio-technical transitions occurring. While considerable literature is emerging on various aspects of AI, the governance of AI is a significantly underdeveloped area. The new applications of AI offer opportunities for increasing economic efficiency and quality of life, but they also generate unexpected and unintended consequences and pose new forms of risks that need to be addressed. To enhance the benefits from AI while minimising the adverse risks, governments worldwide need to better understand the scope and depth of the risks posed and develop regulatory and governance processes and structures to address these challenges. This introductory article unpacks AI and describes why the governance of AI should be gaining far more attention given the myriad challenges it presents. It then summarises the special issue articles and highlights their key contributions. This special issue introduces the multifaceted challenges of the governance of AI, including emerging governance approaches to AI, policy capacity building, legal and regulatory challenges of AI and robotics, and outstanding issues and gaps that need attention. The special issue showcases the state of the art in the governance of AI, aiming to enable researchers and practitioners to appreciate the challenges and complexities of AI governance and highlight future avenues for exploration.