Gallese on the AI Act and the Right to Technical Interpretability

Chiara Gallese (University of Trieste, Department of Engineering) has posted “The AI Act Proposal: a New Right to Technical Interpretability?” on SSRN. Here is the abstract:

The debate about the concept of the so-called right to explanation in AI is the subject of a wealth of literature. It has focused, in the legal scholarship, on art. 22 GDPR and, in the technical scholarship, on techniques that help explain the output of a certain model (XAI). The purpose of this work is to investigate whether the new provisions introduced by the proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act), in combination with Convention 108 plus and the GDPR, are enough to indicate the existence of a right to technical explainability in the EU legal framework and, if not, whether the EU should include it in its current legislation. This is a preliminary work submitted to the online event organised by the Information Society Law Center and it will later be developed into a full paper.

Sarel on Restraining ChatGPT

Roee Sarel (Institute of Law and Economics, University of Hamburg) has posted “Restraining ChatGPT” (UC Law SF Journal (formerly Hastings Law Journal), Forthcoming) on SSRN. Here is the abstract:

ChatGPT is a prominent example of how Artificial Intelligence (AI) has stormed into our lives. Within a matter of weeks, this new AI—which produces coherent and human-like textual answers to questions—has managed to become an object of both admiration and anxiety. Can we trust generative AI systems, such as ChatGPT, without regulatory oversight?

Designing an effective legal framework for AI requires answering three main questions: (i) is there a market failure that requires legal intervention? (ii) should AI be governed through public regulation, tort liability, or a mixture of both? and (iii) should liability be based on strict liability or a fault-based regime such as negligence? The law and economics literature offers clear considerations for these choices, focusing on the incentives of injurers and victims to take precautions, engage in efficient activity levels, and acquire information.

This Article is the first to comprehensively apply these considerations to ChatGPT as a leading test case. As the United States is lagging behind in its response to the AI revolution, I focus on the recent proposals in the European Union to restrain AI systems, which apply a risk-based approach and combine regulation and liability. The analysis reveals that this approach does not map neatly onto the relevant distinctions in law and economics, such as market failures, unilateral versus bilateral care, and known versus unknown risks. Hence, the existing proposals may lead to various incentive distortions and inefficiencies. The Article, therefore, calls upon regulators to place a stronger emphasis on law and economics concepts in their design of AI policy.

Sokol on Technology Driven Government Law and Regulation

D. Daniel Sokol (USC Gould School of Law) has posted “Technology Driven Government Law and Regulation” (26 Virginia Journal of Law and Technology 1 (2023)) on SSRN. Here is the abstract:

Digitization and digital transformation provide a shock for government to reconceptualize how it is organized to better optimize legal and regulatory responses to the use of data analytics to create value. Government is well situated for an organizational transformation that will allow it to better orchestrate coordinated responses where appropriate, based on the expertise of a dedicated, centralized data analytics unit that works across agencies. This is not to argue that each government agency should not develop its own data analytics expertise. Rather, there is some expertise that can be leveraged across different parts of government. This type of intervention requires a unique group to coordinate a response.

Almada on Regulating Machine Learning by Design

Marco Almada (European University Institute – Department of Law) has posted “Regulating Machine Learning by Design” (CPI TechREG Chronicle, February 2023) on SSRN. Here is the abstract:

The regulation of digital technologies around the world draws from various regulatory techniques. One such technique is regulation by design, in which regulation specifies requirements that software designers must follow when creating systems. This paper examines the suitability of regulation by design approaches to machine learning, arguing that they are potentially useful but have a narrow scope of application. Drawing from EU law examples, it shows how regulation by design relies on the delegation of normative definitions and enforcement to software designers, but such delegation is only effective if a few conditions are present. These conditions, however, are seldom met by applications of machine learning technologies in the real world, and so regulation by design cannot address many of the pressing concerns driving regulation. Nonetheless, by-design provisions can support regulation if applied to well-defined problems that lend themselves to clear expression in software code. Hence, regulation by design, within its proper limits, can be a powerful tool for regulators of machine learning technologies.

Fagan on Law’s Computational Paradox

Frank Fagan (South Texas College of Law Houston) has posted “Law’s Computational Paradox” (Virginia Journal of Law and Technology, forthcoming) on SSRN. Here is the abstract:

Artificial intelligence (AI) and machine learning will bring about many changes to how law is practiced, made, and enforced. However, machines cannot do everything that humans can do, and law must face the limitations of computational learning just as much as any other human endeavor. For predictive learning, these limitations are permanent and can be used to ascertain the future of law. The basic tasks of lawyering, such as brief writing, oral argument, and witness coaching, will become increasingly precise, but that precision will eventually plateau, and the essential character of lawyering will remain largely unchanged. Similarly, where machines can be used to clarify application of law, they will simply limit judicial discretion consistent with moves from standards to rules or from rules to personalized law.

In each of these scenarios—lawyering and case clarification—enhanced precision is made possible through systemic closure of the machine’s domain and AI will ascend easily. In scenarios where law’s architecture is open, and systemic closure is not possible or worth it, machines will be frustrated by an inability to discern patterns, or by a powerlessness to comprehend the predictive power of previously discerned patterns in newly changed contexts. Lawmakers may add new variables to compensate and encourage attempts to model future environments, but open innovation and social change will undermine even a determined empiricism. In response to these limitations, lawmakers may attempt to actively impose closure of dynamic legal domains in an effort to enhance law’s precision. By limiting admissibility of evidence, black-listing variables, requiring specific thresholds of white-listed variables, and pursuing other formalist strategies of closure, law can elevate its predictive precision for a given environment, but this elevation comes at the expense of openness and innovation. This is law’s computational paradox.

This Article introduces the paradox across machine learning applications in lawmaking, enforcement, rights allocation, and lawyering, and shows that innovation serves as a self-corrective to the excessive mechanization of law. Because innovation, change, and open legal domains are necessary ingredients for continual technological ascendance in law and elsewhere, fears of AI-based law as an existential threat to human-centered law are exaggerated. It should be emphasized, however, that there is ample room for quantification and counting in both closed and open settings; the products of innovation will always undergo measurement and machine learning algorithms will always require updating and refinement. This is the process of technological becoming. The goal for law is to never fully arrive.

The uncertainty of dynamic legal environments, even if diminishing with growing predictive power in law, forms the basis of an interpersonal constitutional authority. Understanding that some disruptions will always be unplanned prevents the construction of blind pathways for longer-term legal error, and relatedly, prevents empirical and technical rationales from overrunning a human-centered public square. A growing awareness of paradoxical error generated by precise, but closed, computational environments will generate a societal response that seeks to balance the benefits of precision and innovation. This balancing—what might be termed a “computational legal ethics”—implies that tomorrow’s lawyers, more so than their counterparts of the past, will be called upon to discern what should be considered versus ignored.

Sunstein on The Use of Algorithms in Society

Cass R. Sunstein (Harvard Law School) has posted “The Use of Algorithms in Society” on SSRN. Here is the abstract:

The judgments of human beings can be biased; they can also be noisy. Across a wide range of settings, use of algorithms is likely to improve accuracy, because algorithms will reduce both bias and noise. Indeed, algorithms can help identify the role of human biases; they might even identify biases that have not been named before. As compared to algorithms, for example, human judges, deciding whether to give bail to criminal defendants, show Current Offense Bias and Mugshot Bias; as compared to algorithms, human doctors, deciding whether to test people for heart attacks, show Current Symptom Bias and Demographic Bias. These are cases in which large data sets are able to associate certain inputs with specific outcomes. But in important cases, algorithms struggle to make accurate predictions, not because they are algorithms but because they do not have enough data to answer the question at hand. Those cases often, though not always, involve complex systems. (1) Algorithms might not be able to foresee the effects of social interactions, which can depend on a large number of random or serendipitous factors, and which can lead in unanticipated and unpredictable directions. (2) Algorithms might not be able to foresee the effects of context, timing, or mood. (3) Algorithms might not be able to identify people’s preferences, which might be concealed or falsified, and which might be revealed at an unexpected time. (4) Algorithms might not be able to anticipate sudden or unprecedented leaps or shocks (a technological breakthrough, a successful terrorist attack, a pandemic, a black swan). (5) Algorithms might not have “local knowledge,” or private information, which human beings might have. Predictions about romantic attraction, about the success of cultural products, and about coming revolutions are cases in point. The limitations of algorithms are analogous to the limitations of planners, emphasized by Hayek in his famous critique of central planning. It is an unresolved question whether and to what extent some of the limitations of algorithms might be reduced or overcome over time, with more data or various improvements; calculations are improving in extraordinary ways, but some of the relevant challenges cannot be solved with ex ante calculations.

Coglianese on Regulating Machine Learning: The Challenge of Heterogeneity

Cary Coglianese (U Penn Law) has posted “Regulating Machine Learning: The Challenge of Heterogeneity” (Competition Policy International: TechReg Chronicle, February 2023) on SSRN. Here is the abstract:

Machine learning, or artificial intelligence, refers to a vast array of different algorithms that are being put to highly varied uses, including in transportation, medicine, social media, marketing, and many other settings. Not only do machine-learning algorithms vary widely across their types and uses, but they are evolving constantly. Even the same algorithm can perform quite differently over time as it is fed new data. Due to the staggering heterogeneity of these algorithms, multiple regulatory agencies will be needed to regulate the use of machine learning, each within their own discrete area of specialization. Even these specialized expert agencies, though, will still face the challenge of heterogeneity and must approach their task of regulating machine learning with agility. They must build up their capacity in data sciences, deploy flexible strategies such as management-based regulation, and remain constantly vigilant. Regulators should also consider how they can use machine-learning tools themselves to enhance their ability to protect the public from the adverse effects of machine learning. Effective regulatory governance of machine learning should be possible, but it will depend on the constant pursuit of regulatory excellence.

Kolt on Algorithmic Black Swans

Noam Kolt (University of Toronto) has posted “Algorithmic Black Swans” (Washington University Law Review, Vol. 101, Forthcoming) on SSRN. Here is the abstract:

From biased lending algorithms to chatbots that spew violent hate speech, AI systems already pose many risks to society. While policymakers have a responsibility to tackle pressing issues of algorithmic fairness, privacy, and accountability, they also have a responsibility to consider broader, longer-term risks from AI technologies. In public health, climate science, and financial markets, anticipating and addressing societal-scale risks is crucial. As the COVID-19 pandemic demonstrates, overlooking catastrophic tail events — or “black swans” — is costly. The prospect of automated systems manipulating our information environment, distorting societal values, and destabilizing political institutions is increasingly palpable. At present, it appears unlikely that market forces will address this class of risks. Organizations building AI systems do not bear the costs of diffuse societal harms and have limited incentive to install adequate safeguards. Meanwhile, regulatory proposals such as the White House AI Bill of Rights and the European Union AI Act primarily target the immediate risks from AI, rather than broader, longer-term risks. To fill this governance gap, this Article offers a roadmap for “algorithmic preparedness” — a set of five forward-looking principles to guide the development of regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society.

Lobel on The Law of AI for Good

Orly Lobel (U San Diego Law) has posted “The Law of AI for Good” on SSRN. Here is the abstract:

Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights, and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR), and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.

A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design.

Mökander et al. on The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act

Jakob Mökander (Oxford Internet Institute) et al. have posted “The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?” (Minds and Machines 2022) on SSRN. Here is the abstract:

On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).