Sharkey on A Products Liability Framework for AI

Catherine M. Sharkey (NYU Law) has posted “A Products Liability Framework for AI” (Columbia Science and Technology Law Review, Vol. 25, No. 2, 2024) on SSRN. Here is the abstract:

A products liability framework, drawing inspiration from the regulation of FDA-approved medical products—which includes federal regulation as well as products liability—holds great promise for tackling many of the challenges artificial intelligence (AI) poses. Notwithstanding the new challenges that sophisticated AI technologies pose, products liability provides a conceptual framework capable of responding to the learning and iterative aspects of these technologies. Moreover, this framework provides a robust model of the feedback loop between tort liability and regulation.
The regulation of medical products provides an instructive point of departure. The FDA has recognized the need to revise its traditional paradigm for medical device regulation to fit adaptive AI/Machine Learning (ML) technologies, which enable continuous improvements and modifications to devices based on information gathered during use. AI/ML technologies should hasten an even more significant regulatory paradigm shift at the FDA away from a model that puts most of its emphasis (and resources) on ex ante premarket approval to one that highlights ongoing postmarket surveillance. As such a model takes form, tort (products) liability should continue to play a significant information-production and deterrence role, especially during the transition period before a new ex post regulatory framework is established.

Cusenza on Litigating Governmental Use of AI

Giulia G. Cusenza (U Udine) has posted “Litigating Governmental Use of AI” on SSRN. Here is the abstract:

In the last decade, US courts have ruled on cases involving the use of AI by governmental bodies. But while these legal disputes have served as trailblazers for relevant policy documents and have been used by scholars to support specific arguments, this litigation has not been the subject of a systematic analysis. This paper fills that gap and provides a quantitative and qualitative study of how courts deal with litigation over the use of AI by governmental bodies. The analysis leads to an overarching conclusion, namely that judicial decisions almost exclusively rely on procedural grounds – specifically those concerning due process infringements – thus suggesting that substantive issues are typically addressed through procedural solutions. In turn, these procedural issues consist of six violations: lack of adequate notice and explanation, lack of contestability, lack of human oversight, lack of notice and comment procedures, lack of assessment procedures, and denial of the right to access information. By revealing this tendency and by identifying the six procedural violations, the analysis ultimately provides a taxonomy of the minimum requirements with which any governmental body should comply to shield its use of algorithmic systems from judicial review.

Salib on Abolition by Algorithm

Peter Salib (U Houston Law) has posted “Abolition by Algorithm” (Michigan Law Review, Forthcoming) on SSRN. Here is the abstract:

In one sense, America’s newest Abolitionist movement—advocating the elimination of policing and prison—has been a success. Following the 2020 Black Lives Matter protests, a small group of self-described radicals convinced a wide swath of ordinary liberals to accept a radical claim: Mere reforms cannot meaningfully reduce prison and policing’s serious harms. Only elimination can. On the other hand, Abolitionists have failed to secure lasting policy change. The difficulty is crime. In 2021, following a nationwide uptick in homicides, liberal support for Abolitionist proposals collapsed. Despite being newly “abolition curious,” left-leaning voters consistently rejected concrete abolitionist policies. Faced with the difficult choice between reducing prison and policing and controlling serious crime, voters consistently chose the latter.

This Article presents a policy approach that could accomplish both goals simultaneously: “Algorithmic Abolitionism.” Under Algorithmic Abolitionism, powerful machine learning algorithms would allocate policing and incarceration. They would abolish both maximally, up to the point at which crime would otherwise begin to rise. Results would be dramatic. Using existing technology, Algorithmic Abolitionist policies could: eliminate at least 42% and as many as 86% of Terry stops; free between 40 and 80% of incarcerated persons; eradicate nearly all traffic stops; and remove police patrols from between 50 and 85% of city blocks. All without causing more crime.

Beyond these practical effects, Algorithmic Abolitionist thinking generates new and important normative insights in the debate over algorithmic discrimination. In short, in an Algorithmic Abolitionist world, traditional frameworks for understanding and measuring such discrimination fall apart. They sometimes rate Algorithmic Abolitionist policies as unfair, even when those policies massively reduce the number of people mistreated because of their race. And they rate other policies as fair, even when those policies would cause far more discriminatory harm. To overcome these problems, this Article introduces a new framework for understanding—and a new quantitative tool for measuring—algorithmic discrimination: “bias-impact.” It then explores the complex array of normative trade-offs that bias-impact analyses reveal. As the Article shows, bias-impact analysis will be vital not just in the criminal enforcement context, but in the wide range of settings—healthcare, finance, employment—where Algorithmic Abolitionist designs are possible.

Niblett & Yoon on AI and The Nature of Disagreement

Anthony Niblett (U Toronto Law) and Albert Yoon (same) have posted “A.I. and the Nature of Disagreement” on SSRN. Here is the abstract:

Some legal commentators – including ourselves – have been loudly optimistic about the power of artificial intelligence (AI) to improve litigation. These commentators argue that AI can provide clearer information, cutting through much of the complexity of the law, reducing frictions and disagreements between the parties. Further, the possibility of using AI to determine the outcomes of legal disputes has given rise to the concept of “robot judges” in legal scholarship.

But in this paper, we argue that much of this literature fails to fully appreciate what litigated disputes are really about. Litigants may disagree about the facts of the case, the applicable rules, or how the rules apply to the facts. These disagreements are often complex and intertwined.

We contend that AI tools may be limited in their ability to resolve litigated disputes because these tools often address only one type of disagreement, leaving others unresolved. The optimistic view of AI in litigation assumes that parties disagree mainly about the likelihood of winning or the size of damages awards for a given set of agreed facts. But we question whether litigation is really fueled by such disagreements.

Our main takeaway is that if litigation is driven by disagreements over the facts or which rules should govern, AI’s capacity to reduce disagreement may fall short of what some proponents of AI claim. We call for more empirical and theoretical work to explore what litigants actually disagree about to better assess the likely impact of algorithmic decision-making in legal systems.

Garon on Ethics 3.0 – Attorney Responsibility in the Age of Generative AI

Jon Garon (Nova Law) has posted “Ethics 3.0 – Attorney Responsibility in the Age of Generative AI” (The Business Lawyer, Am. Bar Assoc., Vol. 79, Winter 2023–2024) on SSRN. Here is the abstract:

A lawyer’s duty to remain competent and diligent in light of technological change begins with the Model Rules, but it must extend to the relevant substantive law. This article focuses on the obligations of client confidentiality, the duty to understand cybersecurity, the need to exploit the new technologies of generative AI and the metaverse with caution, and the need to communicate in a permissible manner. These are all key obligations under the ABA Model Rules of Professional Conduct related to the use of technology. The Model Rules provide a normative guideline that goes beyond the technical requirements for minimum competency and may provide standards for professional malpractice liability and other legal standards, but they are only a start. To fully understand the scope of the lawyer’s duty regarding technology, the practitioner must also look at state and federal regulations, including HIPAA data privacy and security rules, digital exportation under the Export Administration Act and the International Traffic in Arms Regulations, state consumer privacy laws, the FTC Guides Concerning the Use of Endorsements and Testimonials in Advertising and similar truth-in-advertising obligations, and more.

Mills on A Contractual Approach to Social Media Governance

Gilad Mills (Harvard Law School) has posted “A Contractual Approach to Social Media Governance” (Yale Law & Policy Review, Vol. 42, Forthcoming) on SSRN. Here is the abstract:

The heated scholarly debate in recent years around social media governance has been dominated by a clear public law bias and has yielded a substantively incomplete analysis of the issues at hand. Captured by public law analogies that depict platforms as governors who perform legislative, administrative, and adjudicatory functions, scholars and policymakers have repeatedly turned to public law norms as the hook on which they hang proposed governance solutions. As a practical strategy, they have either called for imposing public law norms by way of regulatory intervention or, conversely, called on platforms to adopt them voluntarily. This approach to social media governance, however, has met with limited success, stymied by political deadlocks, constitutional constraints, and platforms’ commercial preferences. At the same time, private law has been broadly overlooked as a potentially superior source of governance norms for social media, while the potential role the judiciary could play in generating these norms has been seriously discounted or even ignored altogether.

This Article tackles this blind spot in the current scholarship and thinking, offering a novel, comprehensive contractual approach to social media governance. Applying relational contract theory to social media contracting, it lays out the normative underpinnings for subjecting platforms to contractual duties of fairness and diligence, from which, it argues, governance norms can and should be derived. A doctrinal analysis is also provided to equip courts and litigators with the practical tools for holding platforms liable when such contractual duties are breached. Finally, to mitigate concerns about judicial over-encroachment on platforms’ decision-making, the Article offers a pragmatic remedial approach that prefers equitable remedies to damages and adopts a deferential standard of review––a “platform judgment rule”––that would insulate platforms from judicial scrutiny so long as they uphold their “best-efforts” commitments to conduct informed, unbiased content moderation in good faith and to refrain from grossly misusing personal data.

Solow-Niederman on AI Standards and Politics

Alicia Solow-Niederman (George Washington Law) has posted “Can AI Standards Have Politics?” (71 UCLA L. Rev. Disc. 2 (forthcoming)) on SSRN. Here is the abstract:

How to govern a technology like artificial intelligence (AI)? When it comes to designing and deploying fair, ethical, and safe AI systems, standards are a tempting answer. By establishing the best way of doing something, standards might seem to provide plug-and-play guardrails for AI systems that avoid the costs of formal legal intervention. AI standards are all the more tantalizing because they seem to provide a neutral, objective way to proceed in a normatively contested space. But this vision of AI standards blinks a practical reality. Standards do not appear out of thin air. They are constructed. This Essay analyzes three concrete examples from the European Union, China, and the United States to underscore how standards are neither objective nor neutral. It thereby exposes an inconvenient truth for AI governance: Standards have politics, and yet recognizing that standards are crafted by actors who make normative choices in particular institutional contexts, subject to political and economic incentives and constraints, may undermine the functional utility of standards as soft law regulatory instruments that can set forth a single, best formula to disseminate across contexts.

Gans on How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption

Joshua S. Gans (U Toronto – Rotman; NBER) has posted “How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption” on SSRN. Here is the abstract:

This paper examines recent proposals and research suggesting that AI adoption should be delayed until its potential harms are properly understood. It is shown that conclusions regarding the social optimality of delayed AI adoption are sensitive to assumptions regarding the process by which regulators learn about the salience of particular harms. When such learning is by doing — based on the real-world adoption of AI — this generally favours acceleration of AI adoption to surface and react to potential harms more quickly. This case is strengthened when AI adoption is potentially reversible. The paper examines how different conclusions regarding the optimality of accelerated or delayed AI adoption influence and are influenced by other policies that may moderate AI harm.

Lee on Synthetic Data

Peter Lee (UC Davis Law) has posted “Synthetic Data and the Future of AI” (110 Cornell Law Review (Forthcoming)) on SSRN. Here is the abstract:

The future of artificial intelligence (AI) is synthetic. Several of the most prominent technical and legal challenges of AI derive from the need to amass huge amounts of real-world data to train machine learning (ML) models. Collecting such real-world data can be highly difficult and can threaten privacy, introduce bias in automated decision making, and infringe copyrights on a massive scale. This Article explores the emergence of a seemingly paradoxical technical creation that can mitigate—though not completely eliminate—these concerns: synthetic data. Increasingly, data scientists are using simulated driving environments, fabricated medical records, fake images, and other forms of synthetic data to train ML models. Artificial data, in other words, is being used to train artificial intelligence. Synthetic data offers a host of technical and legal benefits; it promises to radically decrease the cost of obtaining data, sidestep privacy issues, reduce automated discrimination, and avoid copyright infringement. Alongside such promise, however, synthetic data offers perils as well. Deficiencies in the development and deployment of synthetic data can exacerbate the dangers of AI and cause significant social harm.

In light of the enormous value and importance of synthetic data, this Article sketches the contours of an innovation ecosystem to promote its robust and responsible development. It identifies three objectives that should guide legal and policy measures shaping the creation of synthetic data: provisioning, disclosure, and democratization. Ideally, such an ecosystem should incentivize the generation of high-quality synthetic data, encourage disclosure of both synthetic data and processes for generating it, and promote multiple sources of innovation. This Article then examines a suite of “innovation mechanisms” that can advance these objectives, ranging from open source production to proprietary approaches based on patents, trade secrets, and copyrights. Throughout, it suggests policy and doctrinal reforms to enhance innovation, transparency, and democratic access to synthetic data. Just as AI will have enormous legal implications, law and policy can play a central role in shaping the future of AI.

Kolt on Governing AI Agents

Noam Kolt (University of Toronto) has posted “Governing AI Agents” on SSRN. Here is the abstract:

While language models and generative AI have taken the world by storm, a more transformative technology is already being developed: “AI agents” — AI systems that can autonomously plan and execute complex tasks with only limited human oversight. Companies that pioneered the production of tools for generating synthetic content are now building AI agents that can independently navigate the internet, perform a wide range of online tasks, and increasingly serve as automated personal assistants. The opportunities presented by this new technology are tremendous, as are the associated risks. Fortunately, there exist robust analytic frameworks for confronting many of these challenges, namely the economic theory of principal-agent problems and the common law doctrine of agency relationships. Drawing on these frameworks, this Article makes three contributions. First, it uses agency law and theory to identify and characterize problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty. Second, it illustrates the limitations of conventional solutions to agency problems: incentive design, monitoring, and enforcement might not be effective for governing AI agents that make uninterpretable decisions and operate at unprecedented speed and scale. Third, the Article explores the implications of agency law and theory for designing and regulating AI agents, arguing that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.