Lee on Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms

Christina Lee (George Washington U Law) has posted “Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms” (16 U.C. Irvine Law Review ___ (forthcoming 2026)) on SSRN. Here is the abstract:

AI regulations are popping up around the world, and they mostly involve ex-ante risk assessment and mitigation of those risks. But even with careful risk assessment, harms inevitably occur. This leads to the question of algorithmic remedies: what to do once algorithmic harms occur, especially when traditional remedies are ineffective. What makes a particular algorithmic remedy appropriate for a given algorithmic harm?

I explore this question through a case study of a prominent algorithmic remedy: algorithmic disgorgement—destruction of models tainted by illegality. Since the FTC first used it in 2019, it has garnered significant attention, and other enforcers and litigants around the country and the world have started to invoke it. Alongside its increasing popularity came a significant expansion in scope. Initially, the FTC invoked it in cases where data was allegedly collected unlawfully and ordered deletion of models created using such data. The remedy’s scope has since expanded; regulators and litigants now invoke it against AI whose use, not creation, causes harm. It has become a remedy many turn to for all things algorithmic.

I examine this remedy with a critical eye, concluding that though it looms large, it is often inappropriate. Algorithmic disgorgement has evolved into two distinct remedies. Data-based algorithmic disgorgement seeks to remedy harms committed during a model’s creation; use-based algorithmic disgorgement seeks to remedy harms caused by a model’s use. These two remedies aim to vindicate different principles underlying traditional remedies: data-based algorithmic disgorgement follows the disgorgement principle underlying remedies like monetary disgorgement and the exclusionary rule, while use-based algorithmic disgorgement follows the consumer protection principle underlying remedies like product recall. However, they often fail to live up to these principles. AI systems exist in the context of the algorithmic supply chain; they are controlled by many hands, and seemingly unrelated entities are connected to each other in complicated ways through complex data flows. The realities of the algorithmic supply chain mean that algorithmic disgorgement is often a bad fit for the harm at issue and causes undesirable effects throughout the algorithmic supply chain, imposing burdens on innocent parties while imposing no costs on the blameworthy; ultimately, algorithmic disgorgement undermines the principles it seeks to promote.

From this analysis, I derive considerations for determining whether an algorithmic remedy is appropriate—the responsiveness of the remedy to the harm and the full impact of the remedy throughout the supply chain—and underscore the need for a diversity of algorithmic remedies.

Choi on Tainted Source Code

Bryan H. Choi (Ohio State U (OSU) Michael E. Moritz College of Law) has posted “Tainted Source Code” (39 Harv. J.L. & Tech. (2025)) on SSRN. Here is the abstract:

Open-source software has long eluded tort liability. Fierce ideological commitments and sticky license terms support a long tradition of forbearance against penalizing harmful or negligent work in open-source communities. The free, noncommercial, distributed, and anonymous characteristics of open-source contributions present additional obstacles to legal enforcement.

The exponential rise in software supply chain attacks has given new urgency to the problem of bad open-source code. Yet, current approaches are unlikely to meaningfully improve open-source security and safety. On the one hand, technological tools and self-governance mechanisms remain woefully underdeveloped and underutilized. On the other hand, liability proposals that place all the burden on commercial vendors to inspect the open-source packages they use are an impractical solution that ignores how software is built and maintained.

This Article argues that donated code should be subject to tort liability by analogy to the law of tainted food and blood donations. Food safety law is the progenitor of modern tort law, and it reveals an older set of tensions between altruistic efforts to address societal hunger and the need for accountability in regulating the quality of food supply chains. At common law, the charitable nature of a donation is a nonfactor in determining liability. Legislatures have intervened to provide safe harbors, but only up to an extent. This nuanced history offers a principled path forward for extending a liability framework to donations of open-source code.

Gordon-Tapiero on A Liability Framework for AI Companions

Ayelet Gordon-Tapiero (Hebrew U of Jerusalem, Benin School of Computer Science and Engineering) has posted “A Liability Framework for AI Companions” (1 Geo. Wash. J.L. & Tech. (forthcoming)) on SSRN. Here is the abstract:

Every day, tens of millions of people engage in online conversations. These virtual interactions range from casual chats about daily life to deeply personal exchanges, where individuals share secrets, vulnerabilities, sexual fantasies, hopes and dreams. Through these conversations, users receive emotional support and empathetic responses and get practical advice and productivity tips. Most importantly, they feel seen, heard, and less alone. These people are not chatting with friends or family members. They are corresponding with AI-powered chatbots, also known as AI companions, which have gained immense popularity recently. AI companions offer a range of benefits to users including providing a feeling of friendship, emotional support, and organization of everyday tasks. But AI companions also harbor a darker side. They are designed by large corporations with the goal of maximizing their profits and collecting more data on which to train future models. Users often find themselves subject to manipulation, growing emotional dependence, and even addiction. Tragically, it is the most vulnerable users that are most susceptible to these harms. In a horrific case, a teenager even took his own life after being encouraged to do so by his AI companion.

Against this backdrop, this Article argues for the urgent need to develop a comprehensive legal response to the emerging ecosystem of AI companions. Specifically, it proposes applying products-liability law to AI companions as a promising legal avenue. This Article also offers a typology of the promises and perils associated with the use of AI companions. Recognizing both the benefits and harms stemming from a technology is a crucial first step in crafting a regulatory response that preserves its advantages while mitigating its risks.

AI companions are designed to maximize the profits of the companies that develop them by facilitating engagement and fostering dependency, which can lead to addiction. In this reality, users’ interests are secondary at best. Courts have long recognized two types of product defects that can give rise to liability: design defects and failure to warn. Thus, an AI companion designed to maximize user engagement, encourage user-dependence and facilitate addiction could be considered to have been defectively designed. Similarly, companies deploying AI companions known to harm vulnerable users should, at the very least, warn them of these risks.

Products-liability law offers an appropriate and necessary framework for addressing the challenges posed by AI companions. It allows courts to gradually establish standards for what should be considered a defective product, while holding companies accountable for their failure to warn users about potential dangers. This approach incentivizes companies to design safer products, limiting the harms generated by AI companions, while allowing users to continue enjoying the benefits offered by them.

Wilf-Townsend on Artificial Intelligence and Aggregate Litigation

Daniel Wilf-Townsend (Georgetown U Law Center) has posted “Artificial Intelligence and Aggregate Litigation” (103 Wash. U. L. Rev. __ (forthcoming 2026)) on SSRN. Here is the abstract:

The era of AI litigation has begun, and it is already clear that the class action will have a distinctive role to play. AI-powered tools are often valuable because they can be deployed at scale. And the harms they cause often exist at scale as well, pointing to the class action as a key device for resolving the correspondingly numerous potential legal claims. This article presents the first general account of the complex interplay between aggregation and artificial intelligence. 

First, the article identifies a pair of effects that the use of AI tools is likely to have on the availability of class actions to pursue legal claims. While the use of increased automation by defendants will tend to militate in favor of class certification, the increased individualization enabled by AI tools will cut against it. These effects, in turn, will be strongly influenced by the substantive laws governing AI tools—especially by whether liability attaches “upstream” or “downstream” in a given course of conduct, and by the kinds of causal showings that must be made to establish liability. 

After identifying these influences, the article flips the usual script and describes how, rather than merely being a vehicle for enforcing substantive law, aggregation could actually enable new types of liability regimes. AI tools can create harms that are only demonstrable at the level of an affected group, which is likely to frustrate traditional individual claims. Aggregation creates opportunities to prove harm and assign remedies at the group level, providing a path to address this difficult problem. Policymakers hoping for fair and effective regulations should therefore attend to procedure, and aggregation in particular, as they write the substantive laws governing AI use.

Deffains & Fluet on Decision Making Algorithms: Product Liability and the Challenges of AI

Bruno Deffains (U Paris II Panthéon-Assas) and Claude Fluet (U Laval) have posted “Decision Making Algorithms: Product Liability and the Challenges of AI” on SSRN. Here is the abstract:

The question of AI liability (e.g., for robots, autonomous systems or decision-making devices) has been widely discussed in recent years. The issue is how to adapt non-contractual civil liability rules and in particular producer liability legislation to the challenges posed by the risk of harm caused by AI applications, centering on notions such as fault-based liability vs strict liability vs liability for defective products. The purpose of this paper is to discuss the lessons that can be drawn from the canonical Law & Economics model of producer liability, insofar as it can be applied to decision-making AI applications. We extend the canonical model by relating the risk of harm facing the users of an application to the risk of decision-making errors. Investments in safety, e.g., through better design and software, reduce the risk of decision-making errors. The cost of improving safety is shared by all users of the product.

Wilf-Townsend on The Deletion Remedy

Daniel Wilf-Townsend (Georgetown U Law Center) has posted “The Deletion Remedy” (103 North Carolina Law Review __ (forthcoming 2025)) on SSRN. Here is the abstract:

A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them to delete not only that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.

But, this article argues, model deletion has a serious flaw. In its current form, it risks becoming a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.

This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate problems of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses.

Da Cunha Lopes on Navigating the Legal Labyrinth: The Avant-Garde Challenges of Robotics in the Modern Workforce

Teresa Da Cunha Lopes (Michoacan U Saint Nicholas Hidalgo) has posted “Navigating the Legal Labyrinth: The Avant-Garde Challenges of Robotics in the Modern Workforce” on SSRN. Here is the abstract:

As robotics and artificial intelligence (AI) increasingly integrate into the workforce, they bring technological innovation and a host of unprecedented legal challenges. This article examines key issues such as liability in cases of malfunction or harm, the legal status of autonomous machines, intellectual property concerns, and the implications of replacing human labor with robotic systems. It also addresses the ethical considerations and regulatory frameworks needed to ensure that the integration of robotics into the workforce is both legally sound and socially responsible. By delving into these avant-garde challenges, the article aims to provide a comprehensive understanding of the legal implications of robotics in the modern workforce and propose potential pathways for future legislation and policy development.

Ayres & Balkin on Risky, Intentionless AI Agents

Ian Ayres (Yale Law School) and Jack M. Balkin (same) have posted “The Law of AI is the Law of Risky Agents without Intentions” (U Chicago L Rev Online 2024) on SSRN. Here is the abstract:

Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain mens rea or intention. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. 

Of course, the AI programs themselves are not the responsible actors; instead, they are technologies designed, deployed, and used by human beings that have effects on other human beings. The people who design, deploy, and use AI are the real parties in interest.

We can think of AI programs as acting on behalf of human beings. In this sense AI programs are like agents that lack intentions but that create risks of harm to people. Hence the law of AI is the law of risky agents without intentions.

The law should hold these risky agents to objective standards of behavior, which are familiar in many different parts of the law. These legal standards ascribe intentions to actors—for example, that given the state of their knowledge, actors are presumed to intend the reasonable and foreseeable consequences of their actions. Or legal doctrines may hold actors to objective standards of conduct, for example, a duty of reasonable care or strict liability.

Holding AI agents to objective standards of behavior, in turn, means holding the people and organizations that implement these technologies to objective standards of care and requirements of reasonable reduction of risk.

Take defamation law. Mens rea requirements like the actual malice rule protect human liberty and prevent chilling people’s discussion of public issues. But these concerns do not apply to AI programs, which do not exercise human liberty and cannot be chilled. The proper analogy is not to a negligent or reckless journalist but to a defectively designed product—produced by many people in a chain of production—that causes injury to a consumer. The law can give the different players in the chain of production incentives to mitigate AI-created risks.

In copyright law, we should think of AI systems as risky agents that create pervasive risks of copyright infringement at scale. The law should require that AI companies take a series of reasonable steps that reduce the risk of copyright infringement even if they cannot completely eliminate it. A fair use defense tied to these requirements is akin to a safe harbor rule. Instead of litigating in each case whether a particular output of a particular AI prompt violated copyright, this approach asks whether the AI company has put sufficient efforts into risk reduction. If it has, its practices constitute fair use.

These examples suggest why AI systems may require changes in many different areas of the law. But we should always view AI technology in terms of the people and companies that design, deploy, offer and use it. To properly regulate AI, we need to keep our focus on the human beings behind it.

Wills on Care for Chatbots

Peter Wills (Oxford) has posted “Care for Chatbots” (UBC Law Review 2024) on SSRN. Here is the abstract:

Individuals will rely on language models (LMs) like ChatGPT to make decisions. Sometimes, due to that reliance, they will get hurt, have their property damaged, or lose money. If the LM were a person, they might sue the LM. But LMs are not persons.

This paper analyses whom the individual could sue, and on what facts they can succeed according to the Hedley Byrne-inspired doctrine of negligence. The paper identifies a series of hurdles conventional Canadian and English negligence doctrine poses and how they may be overcome. Such hurdles include identifying who is making a representation or providing a service when an LM generates a statement, determining whether that person can owe a duty of care based on text the LM reacts to, and identifying the proper analytical path for breach and causation.

To overcome such hurdles, the paper questions how courts should understand who “controls” a system. Should it be the person who designs the system, or the person who uses the system? Or both? The paper suggests that, in answering this question, courts should prioritise social dimensions of control (for example, who understands how a system works, not merely what it does) over physical dimensions of control (such as on whose hardware a program is running) when assessing control and therefore responsibility.

The paper makes further contributions in assessing what it means (or should mean) for a person to not only act, but react via an LM. It identifies a doctrinal assumption that when one person reacts to another’s activity, the first person must know something about the second’s activity. LMs break that assumption, because they allow the first person to react to information from another person without any human having knowledge. The paper thus reassesses what it means to have knowledge in light of these technological developments. It proposes redefining “knowledge” such that it would accommodate duties of care to individuals when an LM provides individualised advice.

The paper then shows that there is a deep tension running through the breach and causation analyses in Anglo-Canadian negligence doctrine, relating to how to describe someone who follows an imprudent process when performing an act but whose ultimate act is nonetheless justifiable. One option is to treat them as having breached a standard of care but to hold that the breach did not cause the injury; another is to treat them as not in breach at all. The answer to this question could significantly affect LM-based liability because it affects whether “using an LM” is itself treated as a breach of a standard of care.

Finally, the paper identifies alternative approaches to liability for software propounded in the literature and suggests that these approaches are not plainly superior to working within the existing framework that treats software as a tool used by a legal person.

Sharkey on A Products Liability Framework for AI

Catherine M. Sharkey (NYU Law) has posted “A Products Liability Framework for AI” (Columbia Science and Technology Law Review, Vol. 25, No. 2, 2024) on SSRN. Here is the abstract:

A products liability framework, drawing inspiration from the regulation of FDA-approved medical products—which includes federal regulation as well as products liability—holds great promise for tackling many of the challenges artificial intelligence (AI) poses. Notwithstanding the new challenges that sophisticated AI technologies pose, products liability provides a conceptual framework capable of responding to the learning and iterative aspects of these technologies. Moreover, this framework provides a robust model of the feedback loop between tort liability and regulation.

The regulation of medical products provides an instructive point of departure. The FDA has recognized the need to revise its traditional paradigm for medical device regulation to fit adaptive AI/Machine Learning (ML) technologies, which enable continuous improvements and modifications to devices based on information gathered during use. AI/ML technologies should hasten an even more significant regulatory paradigm shift at the FDA away from a model that puts most of its emphasis (and resources) on ex ante premarket approval to one that highlights ongoing postmarket surveillance. As such a model takes form, tort (products) liability should continue to play a significant information-production and deterrence role, especially during the transition period before a new ex post regulatory framework is established.