Coglianese & Lai on Assessing Automated Administration

Cary Coglianese (University of Pennsylvania Carey Law School) and Alicia Lai (same) have posted “Assessing Automated Administration” (in Oxford Handbook on AI Governance, Justin Bullock et al. eds., forthcoming) on SSRN. Here is the abstract:

To fulfill their responsibilities, governments rely on administrators and employees who, simply because they are human, are prone to individual and group decision-making errors. These errors have at times produced both major tragedies and minor inefficiencies. One potential strategy for overcoming cognitive limitations and group fallibilities is to invest in artificial intelligence (AI) tools that allow for the automation of governmental tasks, thereby reducing reliance on human decision-making. Yet as much as AI tools show promise for improving public administration, automation itself can fail or can generate controversy. Public administrators face the question of when exactly they should use automation. This paper considers the justifications for governmental reliance on AI along with the legal concerns raised by such reliance. Comparing AI-driven automation with a status quo that relies on human decision-making, the paper provides public administrators with guidance for making decisions about AI use. After explaining why prevailing legal doctrines present no intrinsic obstacle to governmental use of AI, the paper presents considerations for administrators to use in choosing when and how to automate existing processes. It recommends that administrators ask whether their contemplated uses meet the preconditions for the deployment of AI tools and whether these tools are in fact likely to outperform the status quo. In moving forward, administrators should also consider the possibility that a contemplated AI use will generate public or legal controversy, and then plan accordingly. The promise and legality of automated administration ultimately depend on making responsible decisions about when and how to deploy this technology.

Ashley on Capturing the Dialectic between Principles and Cases

Kevin Ashley (University of Pittsburgh – School of Law) has posted “Capturing the Dialectic between Principles and Cases” (Jurimetrics, Vol. 44, p. 229, 2004) on SSRN. Here is the abstract:

Theorists in ethics and law posit a dialectical relationship between principles and cases; abstract principles both inform and are informed by the decisions of specific cases. Until recently, however, it has not been possible to investigate or confirm this relationship empirically. This work involves a systematic study of a set of ethics cases written by a professional association’s board of ethical review. Like judges, the board explains its decisions in opinions. It applies normative standards, namely principles from a code of ethics, and cites past cases. We hypothesized that the board’s explanations of its decisions elaborated upon the meaning and applicability of the abstract code principles and past cases. In effect, the board operationalizes the principles and cases. We hypothesized further that this operationalization could be captured computationally and used to improve automated information retrieval. A computer program was designed to retrieve from the on-line database those ethics code principles and past cases that are relevant to analyzing new problems. In an experiment, we used the computer program to test the hypotheses. The experiment demonstrated that the dialectical relationship between principles and cases exists and that the associated operationalization information improves the program’s ability to assess which codes and cases are relevant to analyzing new problems. The results have significance both to the study of legal reasoning and to the improvement of legal information retrieval.

Malgieri & Pasquale on Ex Ante Accountability for AI

Gianclaudio Malgieri (EDHEC; Vrije Universiteit Brussel Law) and Frank A. Pasquale (Brooklyn Law School) have posted “From Transparency to Justification: Toward Ex Ante Accountability for AI” on SSRN. Here is the abstract:

At present, policymakers tend to presume that AI used by firms is legal, and only investigate and regulate when there is suspicion of wrongdoing. What if the presumption were flipped? That is, what if a firm had to demonstrate that its AI met clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before it was deployed? This paper proposes a system of “unlawfulness by default” for AI systems, an ex-ante model in which some AI developers bear the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes. The EU’s GDPR and proposed AI Act move toward a sustainable environment for AI systems, but they are still too lenient, and the sanction for non-conformity with the Regulation is monetary, not a prohibition. This paper proposes a pre-approval model in which some AI developers, before launching their systems onto the market, must perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that these systems are high-risk, an approval request (to a strict regulatory authority, like a Data Protection Agency) should follow. In other words, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proving that their AI is not illegitimate (and thus not unfair, not discriminatory, and not inaccurate). Such a standard may not seem administrable now, given the widespread and rapid use of AI at firms of all sizes. But such requirements could be applied, at first, to the largest firms’ most troubling practices, and only gradually (if at all) to smaller firms and less menacing practices.