Christina Lee (George Washington U Law) has posted “Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms” (16 U.C. Irvine Law Review ___ (forthcoming 2026)) on SSRN. Here is the abstract:
AI regulations are emerging around the world, and they mostly involve ex-ante risk assessment and mitigation. But even with careful risk assessment, harms inevitably occur. This raises the question of algorithmic remedies: what to do once algorithmic harms occur, especially when traditional remedies are ineffective. What makes a particular algorithmic remedy appropriate for a given algorithmic harm?
I explore this question through a case study of a prominent algorithmic remedy: algorithmic disgorgement—the destruction of models tainted by illegality. Since the FTC first used it in 2019, the remedy has garnered significant attention, and other enforcers and litigants across the country and around the world have begun to invoke it. With its growing popularity has come a significant expansion in scope. The FTC initially invoked it in cases where data was allegedly collected unlawfully, ordering deletion of models created using such data. Regulators and litigants now invoke it against AI whose use, not creation, causes harm. It has become a remedy many turn to for all things algorithmic.
I examine this remedy with a critical eye, concluding that though it looms large, it is often inappropriate. Algorithmic disgorgement has evolved into two distinct remedies. Data-based algorithmic disgorgement seeks to remedy harms committed during a model's creation; use-based algorithmic disgorgement seeks to remedy harms caused by a model's use. The two remedies aim to vindicate different principles underlying traditional remedies: data-based algorithmic disgorgement follows the disgorgement principle underlying remedies like monetary disgorgement and the exclusionary rule, while use-based algorithmic disgorgement follows the consumer-protection principle underlying remedies like the product recall. Both, however, often fail to live up to these principles. AI systems exist within an algorithmic supply chain: they are controlled by many hands, and seemingly unrelated entities are connected to one another through complex data flows. The realities of the algorithmic supply chain mean that algorithmic disgorgement is often a poor fit for the harm at issue and causes undesirable effects throughout the supply chain, imposing burdens on innocent parties while failing to impose costs on the blameworthy; ultimately, algorithmic disgorgement undermines the very principles it seeks to promote.
From this analysis, I derive two considerations for determining whether an algorithmic remedy is appropriate—the remedy's responsiveness to the harm and its full impact throughout the supply chain—and underscore the need for a diversity of algorithmic remedies.
