Hutson & Winters on Algorithmic Disgorgement

Jevan Hutson & Ben Winters (Electronic Privacy Information Center) have posted “America’s Next ‘Stop Model!’: Algorithmic Disgorgement” on SSRN. Here is the abstract:

Beginning with its 2019 final order In the Matter of Cambridge Analytica, LLC, followed by a May 2021 decision and order In the Matter of Everalbum, Inc. in the context of facial recognition technology and affirmed by its March 2022 stipulated order in United States of America v. Kurbo, Inc. et al. in the context of children’s privacy, the United States Federal Trade Commission now wields algorithmic disgorgement—effectively the destruction of algorithms and models built upon unfairly or deceptively sourced (i.e., ill-gotten) data—as a consumer protection tool in its ongoing, uphill battle against unfair and deceptive practices in an increasingly data-driven world. The thesis of this Article is that algorithmic disgorgement is (i) an essential tool for consumer protection enforcement to address the complex layering of unfairness and deception common in data-intensive products and businesses and (ii) worthy of express endorsement by lawmakers and immediate use by consumer protection law enforcement. To that end, the Article will explain how the harms of algorithms built on and enhanced by ill-gotten data are layered and hard to trace, and consequently require an enforcement tool that is comprehensive and effective as a deterrent. This Article first traces the development of algorithmic disgorgement in the United States and then situates that development within historical and other current US consumer protection law enforcement mechanisms. From there, this Article reflects upon the need for and importance of algorithmic disgorgement and broader consumer protection enforcement for issues of unfairness and deception in AI, highlighting the significance of the Kurbo case being a violation of a children’s privacy law, which has no corollary for adults in the U.S.
Ultimately, this Article argues that (i) state and federal lawmakers should enshrine algorithmic disgorgement into law to insulate it from potential challenge and (ii) state and federal consumer protection law enforcement entities ought to wield algorithmic disgorgement more aggressively to remedy and deter unfair and deceptive practices.

Gervais on How Courts Can Define Humanness in the Age of Artificial Intelligence

Daniel J. Gervais (Vanderbilt University – Law School) has posted “Human as a Matter of Law: How Courts Can Define Humanness in the Age of Artificial Intelligence” on SSRN. Here is the abstract:

This Essay considers the ability of AI machines to perform intellectual functions long associated with human higher mental faculties as a form of sapience, a notion that describes their abilities more fruitfully than either intelligence or sentience. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics, and neuroscience, the essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction, and making it correctly will likely become gradually more important as humans become more like machines (cyborgs, cobots) and machines more like humans (neural networks, robots with biological material). The essay draws a line that separates human and machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.

Blanke on The CCPA, ‘Inferences Drawn,’ and Federal Preemption

Jordan Blanke (Mercer University) has posted “The CCPA, ‘Inferences Drawn,’ and Federal Preemption” (Richmond Journal of Law and Technology, Vol. 29, No. 1 (Forthcoming 2022)) on SSRN. Here is the abstract:

In 2018 California passed an extensive data privacy law. One of its most significant features was the inclusion of “inferences drawn” within its definition of “personal information.” The law was significantly strengthened in 2020 with the expansion of rights for California consumers, new obligations on businesses (including the incorporation of GDPR-like principles of data minimization, purpose limitation, and storage limitation), and the creation of an independent agency to enforce these laws. In 2022 the Attorney General of California issued an Opinion providing an extremely broad interpretation of “inferences drawn.” Thereafter the American Data Privacy Protection Act was introduced in Congress. It does not provide nearly the protection for inferences that California law does, but it threatens to preempt almost all of it. This article argues that, given the importance of California finally being able to regulate inferences drawn, any federal bill must either provide similar protection, exclude California law from preemption, or be opposed.