Gavoor & Teperdjian on a Structural Solution to Mitigating Artificial Intelligence Bias in Administrative Agencies 

Aram A. Gavoor (George Washington University Law School) and Raffi Teperdjian (George Washington University Law School) have posted “A Structural Solution to Mitigating Artificial Intelligence Bias in Administrative Agencies” (89 George Washington Law Review Arguendo (forthcoming 2021)) on SSRN. Here is the abstract:

The rise of artificial intelligence (AI) from nascent theoretical science to an advancing juggernaut of industry with national security implications has begun to permeate U.S. federal administrative agencies. For all the potential benefits AI brings, misapplied or underregulated administrative agency utilization of AI risks eroding American values. The Executive Branch must carefully calibrate its administrative uses of AI to mitigate the biases that flow from models ranging from simple algorithms to complex machine learning systems, especially biases that would adversely affect protected classes and vulnerable groups. Save for a voluntary survey by an independent advisory agency, the federal government lacks an organic accounting of AI use cases and development across administrative agencies. Recent executive actions have only begun to address these issues by establishing broad-stroke foundational principles and recommendations that can lead to the development of optimal AI regulation and general utilization. Despite these initial gains, the prospective utilization of AI in administrative adjudications, rulemakings, grant administration, and the like lacks the structural framework to apply meaningful implementing and accountability mechanisms. The Biden administration will have the opportunity and challenge to expand on the foundation of the prior two administrations and normalize the process of administrative integration of AI with the quality control, consistency measures, and policymaking processes that best leverage federal government resources. This is especially important in light of the related national security implications that flow from this issue. Regardless of whether the Biden administration seeks to undergird executive discretion with legislation or operate on a self-restraint basis, the appropriate regulation of AI in administrative agencies should balance technological innovation with legal compliance and fidelity to well-trodden limiting principles. We conclude that two units of the Executive Office of the President, the Office of Information and Regulatory Affairs and the Office of Science and Technology Policy, are optimally situated and experienced to lead the policy-making, adoption, and utilization of AI systems in administrative agencies.

Shackelford, Asare, Dockery, Raymond, and Sergueeva on the Comparative Analysis of AI Governance

Scott Shackelford (Indiana University – Kelley School of Business – Department of Business Law, Harvard Kennedy School Belfer Center for Science & International Affairs, Center for Applied Cybersecurity Research, Stanford Center for Internet and Society, Stanford Law School), Isak Nti Asare (Indiana University – Hamilton Lugar School of Global and International Studies), Rachel Dockery (Indiana University Maurer School of Law), Anjanette Raymond (Indiana University – Kelley School of Business – Department of Business Law, Queen Mary University of London, School of Law, Indiana University Maurer School of Law), and Alexandra Sergueeva (Indiana University Bloomington) have posted “Should We Trust a Black Box to Safeguard Human Rights? A Comparative Analysis of AI Governance” (UCLA Journal of International Law and Foreign Affairs, 2021) on SSRN. Here is the abstract:

The race to take advantage of the numerous economic, security, and social opportunities made possible by artificial intelligence (AI) is on, with nations, intergovernmental organizations, cities, and firms publishing an array of AI strategies. Simultaneously, there are various efforts to identify and distill an array of AI norms. Thus far, there has been limited effort to mine existing AI strategies to see whether common AI norms such as transparency, human-centered design, accountability, awareness, and public benefit are entering into these strategies. Such data is vital to identify areas of convergence and divergence that could highlight opportunities for further norm development in this space by crystallizing State practice.

This Article analyzes more than forty existing national AI strategies, paying particular attention to the U.S. context, then compares those strategies with private-sector efforts and addresses common criticisms of this process within a polycentric framework. Our findings support the contention that State practices are converging around certain AI principles, focusing primarily upon public benefit. AI is a critical component of international peace, security, and sustainable development in the twenty-first century, and, as such, reaching consensus on AI governance will become vital to help build bridges and trust.

Coupette, Beckedorf, Hartung, Bommarito & Katz on Measuring Law Over Time

Corinna Coupette (Max Planck Institute for Informatics), Janis Beckedorf (Heidelberg University Faculty of Law), Dirk Hartung (Bucerius Center for Legal Technology & Data Science; Stanford CodeX Center), Michael James Bommarito (Bommarito Consulting, LLC; Licensio, LLC; Stanford Center for Legal Informatics; Michigan State College of Law), and Daniel Martin Katz (Illinois Tech – Chicago Kent College of Law; Stanford CodeX; Bucerius Center for Legal Technology & Data Science) have posted “Measuring Law Over Time: A Network Analytical Framework with an Application to Statutes and Regulations in the United States and Germany” on SSRN. Here is the abstract:

How do complex social systems evolve in the modern world? This question lies at the heart of social physics, and network analysis has proven critical in providing answers to it. In recent years, network analysis has also been used to gain a quantitative understanding of law as a complex adaptive system, but most research has focused on legal documents of a single type, and there exists no unified framework for quantitative legal document analysis using network analytical tools. Against this background, we present a comprehensive framework for analyzing legal documents as multi-dimensional, dynamic document networks. We demonstrate the utility of this framework by applying it to an original dataset of statutes and regulations from two different countries, the United States and Germany, spanning more than twenty years (1998–2019). Our framework provides tools for assessing the size and connectivity of the legal system as viewed through the lens of specific document collections as well as for tracking the evolution of individual legal documents over time. Implementing the framework for our dataset, we find that at the federal level, the American legal system is increasingly dominated by regulations, whereas the German legal system remains governed by statutes. This holds regardless of whether we measure the systems at the macro, the meso, or the micro level.
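
To make the abstract's vocabulary concrete, here is a minimal sketch (not the authors' implementation) of how one might measure the size and connectivity of a legal document network across yearly snapshots, using the Python networkx library; the sections and cross-references below are invented for illustration.

```python
import networkx as nx

# Toy cross-reference networks for two snapshot years. Nodes stand for
# statutory or regulatory sections; directed edges stand for cross-references.
# All data here is invented; the paper works with the full corpus of U.S. and
# German federal statutes and regulations from 1998 to 2019.
snapshots = {
    1998: [("Sec. 101", "Sec. 102"), ("Sec. 102", "Reg. 1.1")],
    2019: [("Sec. 101", "Sec. 102"), ("Sec. 102", "Reg. 1.1"),
           ("Reg. 1.1", "Reg. 1.2"), ("Reg. 1.2", "Sec. 101")],
}

for year, edges in sorted(snapshots.items()):
    g = nx.DiGraph(edges)  # one document network per snapshot year
    print(f"{year}: {g.number_of_nodes()} sections, "
          f"{g.number_of_edges()} cross-references, "
          f"density = {nx.density(g):.3f}")
```

Tracking statistics like these over time, and splitting nodes by document type, is the kind of measurement that underlies findings such as the growing dominance of regulations in the American system.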

Warner & Sloan on Making Artificial Intelligence Transparent

Richard Warner (Chicago-Kent College of Law) and Robert H. Sloan (University of Illinois at Chicago) have posted “Making Artificial Intelligence Transparent: Fairness and the Problem of Proxy Variables” on SSRN. Here is the abstract:

AI-driven decisions can draw data from virtually any area of your life to make a decision about virtually any other area of your life. That creates fairness issues. Effective regulation to ensure fairness requires that AI systems be transparent. That is, regulators must have sufficient access to the factors that explain and justify the decisions. One approach to transparency is to require that systems be explainable, as that concept is understood in computer science. A system is explainable if one can provide a human-understandable explanation of why it makes any particular prediction. Explainability should not be equated with transparency. Instead, we define transparency for a regulatory purpose. A system is transparent for a regulatory purpose (r-transparent) when and only when regulators have an explanation, adequate for that purpose, of why it yields the predictions it does. Explainability remains relevant to transparency but turns out to be neither necessary nor sufficient for it. The concepts of explainability and r-transparency combine to yield four possibilities: a system may be explainable and either r-transparent or not, or not explainable and either r-transparent or not. Combining r-transparency with ideas from the Harvard computer scientist Cynthia Dwork, we propose four requirements on AI systems.

Liu, Maas, Danaher, Scarcella, Lexer, and van Rompaey on Artificial Intelligence and Legal Disruption

Hin-Yan Liu (University of Copenhagen Faculty of Law), Matthijs M. Maas (CSER Cambridge; King’s College, Cambridge; University of Copenhagen CECS), John Danaher (NUIG School of Law), Luisa Scarcella, Michaela Lexer (University of Graz), and Léonard Van Rompaey have posted “Artificial Intelligence and Legal Disruption: A New Model for Analysis” (Law, Innovation and Technology 12, no. 2 (Sept. 16, 2020)) on SSRN. Here is the abstract:

Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’ and the Legal Development, Displacement, or Destruction that can ensue. The article proposes that this model of legal disruption is broadly generalisable to understanding the legal effects and challenges of other emerging technologies.