Ayres & Balkin on Risky, Intentionless AI Agents

Ian Ayres (Yale Law School) and Jack M. Balkin (same) have posted “The Law of AI is the Law of Risky Agents without Intentions” (U Chicago L Rev Online 2024) on SSRN. Here is the abstract:

Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain mens rea or intention. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. 

Of course, the AI programs themselves are not the responsible actors; instead, they are technologies designed, deployed, and used by human beings that have effects on other human beings. The people who design, deploy, and use AI are the real parties in interest.

We can think of AI programs as acting on behalf of human beings. In this sense AI programs are like agents that lack intentions but that create risks of harm to people. Hence the law of AI is the law of risky agents without intentions.

The law should hold these risky agents to objective standards of behavior, which are familiar in many different parts of the law. These legal standards ascribe intentions to actors—for example, that given the state of their knowledge, actors are presumed to intend the reasonable and foreseeable consequences of their actions. Or legal doctrines may hold actors to objective standards of conduct, for example, a duty of reasonable care or strict liability.

Holding AI agents to objective standards of behavior, in turn, means holding the people and organizations that implement these technologies to objective standards of care and requirements of reasonable reduction of risk.

Take defamation law. Mens rea requirements like the actual malice rule protect human liberty and prevent chilling people’s discussion of public issues. But these concerns do not apply to AI programs, which do not exercise human liberty and cannot be chilled. The proper analogy is not to a negligent or reckless journalist but to a defectively designed product—produced by many people in a chain of production—that causes injury to a consumer. The law can give the different players in the chain of production incentives to mitigate AI-created risks.

In copyright law, we should think of AI systems as risky agents that create pervasive risks of copyright infringement at scale. The law should require that AI companies take a series of reasonable steps that reduce the risk of copyright infringement even if they cannot completely eliminate it. A fair use defense tied to these requirements is akin to a safe harbor rule. Instead of litigating in each case whether a particular output of a particular AI prompt violated copyright, this approach asks whether the AI company has put sufficient efforts into risk reduction. If it has, its practices constitute fair use.

These examples suggest why AI systems may require changes in many different areas of the law. But we should always view AI technology in terms of the people and companies that design, deploy, offer and use it. To properly regulate AI, we need to keep our focus on the human beings behind it.

Wills on Care for Chatbots

Peter Wills (Oxford) has posted “Care for Chatbots” (UBC Law Review 2024) on SSRN. Here is the abstract:

Individuals will rely on language models (LMs) like ChatGPT to make decisions. Sometimes, due to that reliance, they will get hurt, have their property damaged, or lose money. If the LM had been a person, they might sue the LM. But LMs are not persons.

This paper analyses whom the individual could sue, and on what facts they can succeed, under the Hedley Byrne-inspired doctrine of negligence. The paper identifies a series of hurdles that conventional Canadian and English negligence doctrine poses and how they may be overcome. Such hurdles include identifying who is making a representation or providing a service when an LM generates a statement, determining whether that person can owe a duty of care based on text the LM reacts to, and identifying the proper analytical path for breach and causation.

To overcome such hurdles, the paper questions how courts should understand who “controls” a system. Should it be the person who designs the system, or the person who uses the system? Or both? The paper suggests that, in answering this question, courts should prioritise social dimensions of control (for example, who understands how a system works, not merely what it does) over physical dimensions of control (such as on whose hardware a program is running) when assessing control and therefore responsibility.

The paper makes further contributions in assessing what it means (or should mean) for a person to not only act, but react via an LM. It identifies a doctrinal assumption that when one person reacts to another’s activity, the first person must know something about the second’s activity. LMs break that assumption, because they allow the first person to react to information from another person without any human having knowledge. The paper thus reassesses what it means to have knowledge in light of these technological developments. It proposes redefining “knowledge” such that it would accommodate duties of care to individuals when an LM provides individualised advice.

The paper then shows that there is a deep tension running through the breach and causation analyses in Anglo-Canadian negligence doctrine, relating to how to describe someone who follows an imprudent process when performing an act but whose ultimate act is nonetheless justifiable. One option is to treat them as in breach of a standard of care where that breach did not cause the injury; another is to treat them as not in breach at all. The answer to this question could significantly affect LM-based liability because it affects whether “using an LM” is itself treated as a breach of a standard of care.

Finally, the paper identifies alternative approaches to liability for software propounded in the literature and suggests that these approaches are not plainly superior to working within the existing framework that treats software as a tool used by a legal person.

Sharkey on A Products Liability Framework for AI

Catherine M. Sharkey (NYU Law) has posted “A Products Liability Framework for AI” (Columbia Science and Technology Law Review, Vol. 25, No. 2, 2024) on SSRN. Here is the abstract:

A products liability framework, drawing inspiration from the regulation of FDA-approved medical products—which includes federal regulation as well as products liability—holds great promise for tackling many of the challenges artificial intelligence (AI) poses. Notwithstanding the new challenges that sophisticated AI technologies pose, products liability provides a conceptual framework capable of responding to the learning and iterative aspects of these technologies. Moreover, this framework provides a robust model of the feedback loop between tort liability and regulation.
The regulation of medical products provides an instructive point of departure. The FDA has recognized the need to revise its traditional paradigm for medical device regulation to fit adaptive AI/Machine Learning (ML) technologies, which enable continuous improvements and modifications to devices based on information gathered during use. AI/ML technologies should hasten an even more significant regulatory paradigm shift at the FDA away from a model that puts most of its emphasis (and resources) on ex ante premarket approval to one that highlights ongoing postmarket surveillance. As such a model takes form, tort (products) liability should continue to play a significant information-production and deterrence role, especially during the transition period before a new ex post regulatory framework is established.

Choi on AI Malpractice

Bryan H. Choi (Ohio State Law) has posted “AI Malpractice” (73 DePaul Law Review 301 (2024)) on SSRN. Here is the abstract:

Should AI modelers be held to a professional standard of care? Recent scholarship has argued that those who build AI systems owe special duties to the public to promote values such as safety, fairness, transparency, and accountability. Yet, there is little agreement as to what the content of those duties should be. Nor is there a framework for how conflicting views should be resolved as a matter of law.

This Article builds on prior work applying professional malpractice law to conventional software development work, and extends it to AI work. The malpractice doctrine establishes an alternate standard of care—the customary care standard—that substitutes for the ordinary reasonable care standard. That substitution is needed in areas like medicine or law where the service is essential, the risk of harm is severe, and a uniform duty of care cannot be defined. The customary care standard offers a more flexible approach that tolerates a range of professional practices above a minimum expectation of competence. This approach is especially apt for occupations like software development where the science of the field is hotly contested or is rapidly evolving.

Although it is tempting to treat AI liability as a simple extension of software liability, there are key differences. First, AI work has not yet become essential to the social fabric the way software services have. The risk of underproviding AI services is less troublesome than it is for conventional professional services. Second, modern deep-learning AI techniques differ significantly from conventional software development practices, in ways that will likely facilitate greater convergence and uniformity in expert knowledge.

Those distinguishing features suggest that the law of AI liability will chart a different path than the law of software liability. For the immediate term, the interloper status of AI indicates a strict liability approach is most appropriate, given the other factors. In the longer term, as AI work becomes integrated into ordinary society, courts should expect to transition away from strict liability. For aspects that elude expert consensus and require exercise of discretionary judgment, courts should favor the professional malpractice standard. However, if there are broad swaths of AI work where experts can come to agreement on baseline standards, then courts can revert to the default of ordinary reasonable care.

Diamantis on Reasonable AI: A Negligence Standard

Mihailis Diamantis (U Iowa Law) has posted “Reasonable AI: A Negligence Standard” (77 Vand. L. Rev. __ (2025 Forthcoming)) on SSRN. Here is the abstract:

Even as artificial intelligence promises to turbocharge social and economic progress, its human costs are becoming apparent. By design, AI behaves in unexpected ways. That is how it finds unanticipated solutions to complex problems. But unpredictability also means that AI will sometimes harm us. To curtail these harms, scholars and lawmakers have proposed strict regulations for firms developing safe algorithms and strict corporate liability for injuries that nonetheless occur. These rigid “solutions” go too far. They dampen innovation and disadvantage domestic firms in the international technology race.

The law needs a more nuanced framework that balances progress with fairness. Tort law offers a compelling template, but the challenge is to adapt its distinctly human notion of fault to algorithms. Tort law’s central liability standard is negligence, which compares the defendant’s behavior to other “reasonable” people’s behavior. But there is no clear comparison class for AI. Assessing algorithms by reference to people would set too low a bar—AI can and should outperform reasonable humans on many tasks. Assessing AI instead by reference to itself is often impossible—there are not enough algorithms in many contexts to establish a meaningful baseline.

This Paper offers a novel negligence standard for AI. Rather than compare any given AI to humans or to other algorithms, the law should compare it to both. By this hybrid measure, an algorithm would be deemed negligent if it causes injury more frequently than the combined incident rate for all actors—both human and AI—engaged in the same type of conduct. This negligence standard has three attractive features. First, it offers a baseline even when there are very few comparable algorithms. Second, it incentivizes firms to release all and only algorithms that make us safer overall. Third, the standard evolves over time, demanding more of AI as algorithms improve.
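To make the comparative baseline concrete, here is a minimal formal sketch of the proposed standard, assuming purely for illustration that injuries and exposure can be counted for each class of actor; the symbols below are our own and not the author's notation:

```latex
% Illustrative notation only; not drawn from the article.
% I_A, E_A: injuries caused by, and exposure of (e.g., miles driven, loans processed),
%           the algorithm whose conduct is at issue.
% I_T, E_T: combined injuries and exposure for all actors, human and AI,
%           engaged in the same type of conduct.
\[
  r_{\mathrm{AI}} = \frac{I_A}{E_A},
  \qquad
  r_{\mathrm{all}} = \frac{I_T}{E_T},
  \qquad
  \text{negligence} \iff r_{\mathrm{AI}} > r_{\mathrm{all}}.
\]
```

Because the pooled rate includes both human and algorithmic actors, the baseline tightens automatically as safer algorithms enter the pool, which captures the abstract's point that the standard demands more of AI as algorithms improve.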

Tschider on Humans Outside the Loop

Charlotte Tschider (Loyola U (Chicago) Law) has posted “Humans Outside the Loop” (Yale J. L. & Tech., Forthcoming) on SSRN. Here is the abstract:

Artificial Intelligence is not all artificial. After all, despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step of creating it. From data selection, decisional design, training, testing, and tuning to managing AI’s development as it is used in the human world, humans exert agency and control over these choices and practices. AI is now ubiquitous: it is part of every sector and, for most people, their everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to actually remedy any wrongs.

This paper introduces the myriad of choices humans make to create safe and effective AI products, then explores key issues in existing liability models. Significant issues in negligence and products liability schemes, including contractual limitations on liability, separate the organizations creating AI products from the actual harm, obscure the origin of issues, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the relative limits of tort law in these types of technologies, challenging long-held divisions and theoretical constructs and frustrating tort law’s goals. From the perspectives of both businesses licensing AI and AI users, this paper identifies key impediments to realizing tort goals and proposes an alternative regulatory scheme that reframes liability from the human in the loop to the humans outside the loop.

Wagner on Liability Rules for the Digital Age – Aiming for the Brussels Effect

Gerhard Wagner (Humboldt University School of Law; University of Chicago Law School) has posted “Liability Rules for the Digital Age – Aiming for the Brussels Effect” on SSRN. Here is the abstract:

With legislative proposals for two directives published in September 2022, the EU Commission aims to adapt the existing liability system to the challenges posed by digitalization. One of the proposals is limited to liability for artificially intelligent systems, but the other contains nothing less than a full revision of the 1985 Product Liability Directive, which lies at the heart of European tort law. Whereas the current Product Liability Directive largely followed the model of U.S. law, the revised version breaks new ground. It does not limit itself to expanding the concept of product to include intangible digital goods such as software and data as well as related services, important enough in itself, but also targets the new intermediaries of e-commerce as liable parties. With all of that, the proposal for a new product liability directive is a great leap forward and has the potential to grow into a worldwide benchmark in the field. In comparison, the proposal of a directive on AI liability is much harder to assess. It remains questionable whether a second directive is actually needed at this stage of the technological development.

Soh on Legal Dispositionism and Artificially-Intelligent Attributions

Jerrold Soh (Singapore Management University – Yong Pung How School of Law) has posted “Legal Dispositionism and Artificially-Intelligent Attributions” (Legal Studies, forthcoming) on SSRN. Here is the abstract:

It is often said that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be faulted should the system’s actions cause harm. Since the system cannot be held liable on its own account either, existing laws expose victims to accountability gaps and require reform. Drawing on attribution theory, however, this article argues that the ‘autonomy’ that law tends to ascribe to AI is premised less on fact than science fiction. Specifically, the folk dispositionism which demonstrably underpins the legal discourse on AI liability, personality, publications, and inventions leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the article contends that AI systems are better conceptualised as situational characters whose actions remain constrained by their programming, and that properly viewing AI as such illuminates how existing legal doctrines could be sensibly applied to AI. In this light, the article advances a framework for re-conceptualising AI.

Recommended.

Hacker on The European AI Liability Directives

Philipp Hacker (European University Viadrina Frankfurt (Oder) – European New School of Digital Studies) has posted “The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future” on SSRN. Here is the abstract:

The optimal liability framework for AI systems remains an unsolved problem across the globe. With ChatGPT and other large models taking the technology to the next level, solutions are urgently needed. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive (AILD) and a revision of the Product Liability Directive (PLD). They constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a “Brussels effect” in AI regulation, with significant consequences for the US and other countries.

Against this background, this paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article makes suggestions for amendments to the proposed AI liability framework. They are collected in a concise Annex at the end of the paper. I argue, inter alia, that the dichotomy between the fault-based AILD Proposal and the supposedly strict liability PLD Proposal is fictional and should be abandoned; that an EU framework for AI liability should comprise one fully harmonizing regulation instead of two insufficiently coordinated directives; and that the current proposals unjustifiably collapse fundamental distinctions between social and individual risk by equating high-risk AI systems in the AI Act with those under the liability framework.

Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. More specifically, I make four key proposals. Effective compensation should be ensured by combining truly strict liability for certain high-risk AI systems with general presumptions of defectiveness, fault and causality in cases involving SMEs or non-high-risk AI systems. The paper introduces a novel distinction between illegitimate- and legitimate-harm models to delineate strict liability’s scope. Truly strict liability should be reserved for high-risk AI systems that, from a social perspective, should not cause harm (illegitimate-harm models, e.g., autonomous vehicles or medical AI). Models meant to cause some unavoidable harm by ranking and rejecting individuals (legitimate-harm models, e.g., credit scoring or insurance scoring) may only face rebuttable presumptions of defectiveness and causality. General-purpose AI systems should only be subjected to high-risk regulation, including liability for high-risk AI systems, in specific high-risk use cases for which they are deployed. Consumers ought to be liable based on regular fault, in general.

Furthermore, innovation and legal certainty should be fostered through a comprehensive regime of safe harbours, defined quantitatively to the best extent possible. Moreover, trustworthy AI remains an important goal for AI regulation. Hence, the liability framework must specifically extend to non-discrimination cases and provide for clear rules concerning explainability (XAI).

Finally, awareness for the climate effects of AI, and digital technology more broadly, is rapidly growing in computer science. In diametrical opposition to this shift in discourse and understanding, however, EU legislators thoroughly neglect environmental sustainability in both the AI Act and the proposed liability regime. To counter this, I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).

Gunawan, Santos & Kamara on Redress for Dark Patterns Privacy Harms

Johanna Gunawan (Northeastern University Khoury College of Computer Sciences), Cristiana Santos (Utrecht University), and Irene Kamara (Tilburg University – Tilburg Institute for Law, Technology, and Society (TILT); Free University of Brussels (LSTS)) have posted “Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions” on SSRN. Here is the abstract:

Internet users are constantly subjected to incessant demands for attention in a noisy digital world. Countless inputs compete for the chance to be clicked, to be seen, and to be interacted with, and they can deploy tactics that take advantage of behavioral psychology to ‘nudge’ users into doing what they want. Some nudges are benign; others deceive, steer, or manipulate users, as a U.S. FTC Commissioner put it, “into behavior that is profitable for an online service, but often harmful to [us] or contrary to [our] intent”. These tactics are dark patterns: manipulative and deceptive interface designs used at scale in more than ten percent of global shopping websites and more than ninety-five percent of the most popular apps in online services.

Literature discusses several types of harms caused by dark patterns, including harms of a material nature, such as financial harms or anticompetitive issues, as well as harms of a non-material nature, such as privacy invasion, time loss, addiction, cognitive burdens, loss of autonomy, and emotional or psychological distress. Through a comprehensive literature review of this scholarship and a case law analysis conducted by our interdisciplinary team of HCI and legal scholars, this paper investigates whether harms caused by such dark patterns could give rise to redress for individuals subject to dark pattern practices, using consent interactions and the GDPR consent requirements as a case study.