Cooper et al. on Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy and Research

A. Feder Cooper (Microsoft Research) et al. have posted “Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy and Research” on SSRN. Here is the abstract:

“Machine unlearning” is a popular proposed solution for mitigating the existence of content in an AI model that is problematic for legal or moral reasons, including privacy, copyright, safety, and more. For example, unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model’s parameters, e.g., a particular individual’s personal data or the inclusion of copyrighted content in the model’s training data. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs, e.g., generations that closely resemble a particular individual’s data or reflect the concept of “Spiderman.” Both of these goals—the targeted removal of information from a model and the targeted suppression of information from a model’s outputs—present various technical and substantive challenges. We provide a framework for ML researchers and policymakers to think rigorously about these challenges, identifying several mismatches between the goals of unlearning and feasible implementations. These mismatches explain why unlearning is not a general-purpose solution for circumscribing generative-AI model behavior in service of broader positive impact.
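Since the abstract turns on the distinction between two different goals, a toy illustration may help readers coming from outside machine learning. The sketch below is not from the paper; it contrasts (1) one common parameter-level unlearning heuristic from the literature, gradient ascent on a "forget set," with (2) output suppression, which leaves the model's parameters untouched and filters generations after the fact. The names ToyLM, forget_batch, and banned_phrases are hypothetical stand-ins for illustration only.

```python
# Hypothetical sketch (not the paper's method) contrasting the two goals:
# removing information from parameters vs. suppressing it in outputs.
import torch
import torch.nn as nn

# --- Goal 1: targeted removal from model parameters -------------------
# One common heuristic in the unlearning literature is gradient *ascent*
# on the data to be forgotten, nudging the model away from fitting it.
class ToyLM(nn.Module):
    def __init__(self, vocab=100, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        return self.head(self.emb(x))

model = ToyLM()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

forget_batch = torch.randint(0, 100, (8, 12))  # stand-in "forget set"
logits = model(forget_batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)),
    forget_batch[:, 1:].reshape(-1),
)
(-loss).backward()  # ascend, rather than descend, the loss on the forget set
opt.step()

# --- Goal 2: targeted suppression of outputs ---------------------------
# Output suppression leaves parameters unchanged and instead rejects
# generations that match a banned concept.
banned_phrases = {"spiderman"}

def suppressed(text: str) -> bool:
    return any(p in text.lower() for p in banned_phrases)

print(suppressed("A story about Spiderman"))  # True: blocked at output
```

As the paper's framing suggests, the first approach alters what the model encodes (with no guarantee the information is truly gone), while the second only constrains what the model emits; the two are easy to conflate in policy discussions.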

Coglianese on the Need for Digital Regulators

Cary Coglianese (U Pennsylvania Carey Law) has posted “On the Need for Digital Regulators” (in Research Handbook on Digital Regulatory Agencies, Martha Garcia-Murillo and Ian MacInnes eds., forthcoming) on SSRN. Here is the abstract:

The growing digital economy brings increasing recognition of the need for digital regulators. This chapter considers two senses of the term “digital regulators”: one of these refers to regulators of digital technology; the other refers to how any regulatory organization can improve its operations with the use of digital technology. Today’s economy requires digital regulators of both types. The need for regulators of digital technology grows out of perennial concerns about market failures and other implicated social values, such as privacy. This chapter sketches the rationales that in the past have justified regulating digital technology, and then it explains how market-failure justifications continue to reveal a need for regulating today’s rapidly evolving digital technologies, including artificial intelligence. The chapter then shows how the need for regulators with digital technology has been evident since the advent of the internet and has grown even more compelling today with the possibilities created by artificial intelligence. One common thread from the past through to today is the need for multiple regulators both to oversee digital technologies and to use these technologies to improve their regulatory performance.

Esposito et al. on Mitigating the Risks of Generative AI in Government through Algorithmic Governance

Mark Esposito (Harvard U) et al. have posted “Mitigating the Risks of Generative AI in Government through Algorithmic Governance” on SSRN. Here is the abstract:

OpenAI’s launch of the generative artificial intelligence (gen AI) application ChatGPT propelled artificial intelligence into public discourse and led to rapid, widespread uptake of this technology in private-sector organizations. At the same time, AI is increasingly incorporated into government functions and the public sector. We propose that governments and the public sector can set an example for the responsible use of AI technologies by following the principles of algorithmic governance traditionally recommended to the private sector. Algorithmic governance has historically been defined in the literature as governance by algorithms, or how artificial intelligence is used to make governance decisions and affect social ordering. However, we take an alternative approach; instead, we conceptualize algorithmic governance as the governance of algorithms. We summarize the risks of generative AI use in governments and the public sector, then outline algorithmic governance principles, a step-by-step approach to implementing algorithmic governance in government or public-sector projects, opportunities for inter-sector collaboration, the role of polycentric governance, and conclusions.

Dickinson on Section 230: A Juridical History

Gregory M. Dickinson (U Nebraska-Lincoln College of Law) has posted “Section 230: A Juridical History” (28 Stan. Tech. L. Rev. 1 (2025)) on SSRN. Here is the abstract:

Section 230 of the Communications Decency Act of 1996 is the most important law in the history of the internet. It is also one of the most flawed. Under Section 230, online entities are absolutely immune from lawsuits related to content authored by third parties. The law has been essential to the internet’s development over the last twenty years, but it has not kept pace with the times and is now a source of deep consternation to courts and legislatures. Lawmakers and legal scholars from across the political spectrum praise the law for what it has done, while criticizing its protection of bad-actor websites and obstruction of internet law reform.

Absent from the fray, however, has been the Supreme Court, which has never issued a decision interpreting Section 230. That is poised to change, as the Court now appears determined to peel back decades of lower court case law and interpret the statute afresh to account for the tremendous technological advances of the last two decades. Rather than offer a proposal for reform, of which there are plenty, this Article acts as a guidebook to reformers by examining how we got to where we are today. It identifies those interpretive steps and missteps by which courts constructed an immunity doctrine insufficiently resilient against technological change, with the aim of aiding lawmakers and scholars in crafting an immunity doctrine better situated to accommodate future innovation.

Munir et al. on Artificial Intelligence, Data Protection and Transparency: A Comparative Study of GDPR and CCPA

Bakht Munir (U Kansas Law) et al. have posted “Artificial Intelligence, Data Protection and Transparency: A Comparative Study of GDPR and CCPA” on SSRN. Here is the abstract:

This study explores the relationship between artificial intelligence (AI), data protection, and transparency, focusing on the legal frameworks designed to manage these complexities, particularly the European Union’s General Data Protection Regulation (GDPR). As modern technologies become deeply embedded in daily life, privacy, transparency, fairness, and accountability concerns have intensified. This research critically examines the GDPR’s development in the context of AI, assessing its effectiveness in safeguarding privacy, security, and data protection. Furthermore, it compares the GDPR with the California Consumer Privacy Act (CCPA) to highlight the need for a globally harmonized and comprehensive data protection framework. The study ultimately takes an optimistic stance, arguing that the evolving challenges of data privacy can be effectively addressed through continuous legal adaptations, stronger international cooperation, and the integration of ethical principles into AI governance.