Hacker & Holweg on The Regulation of Fine-Tuning: Federated Compliance for Modified General-Purpose AI Models

Philipp Hacker (European U Viadrina Frankfurt (Oder), European New School of Digital Studies) and Matthias Holweg (U Oxford Said Business) have posted “The Regulation of Fine-Tuning: Federated Compliance for Modified General-Purpose AI Models” on SSRN. Here is the abstract:

This paper addresses the regulatory and liability implications of modifying general-purpose AI (GPAI) models under the EU AI Act and related legal frameworks. We make five principal contributions to this debate. First, the analysis maps the spectrum of technical modifications to GPAI models and proposes a detailed taxonomy of these interventions and their associated compliance burdens. Second, the discussion clarifies when exactly a modifying entity qualifies as a GPAI provider under the AI Act, which significantly alters the compliance mandate. Third, we develop a novel, hybrid legal test to distinguish substantial from insubstantial modifications that combines a compute-based threshold with consequence scanning to assess the introduction or amplification of risk. Fourth, the paper examines liability under the revised Product Liability Directive (PLD) and tort law, arguing that entities substantially modifying GPAI models become “manufacturers” under the PLD and may face liability for defects. The paper aligns the concept of “substantial modification” across both regimes for legal coherence and argues for a one-to-one mapping between “new provider” (AI Act) and “new manufacturer” (PLD). Fifth, the recommendations offer concrete governance strategies for policymakers and managers that propose a federated compliance structure, based on joint testing of base and modified models, implementation of Failure Mode and Effects Analysis and consequence scanning, a new database for GPAI models and modifications, robust documentation, and adherence to voluntary codes of practice. The framework also proposes simplified compliance options for SMEs while maintaining their liability obligations. Overall, the paper aims to map out a proportionate and risk-sensitive regulatory framework for modified GPAI models that integrates technical, legal, and wider societal considerations.
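
The hybrid test described in the third contribution is easy to picture in code. The following is a minimal sketch, assuming a hypothetical compute-ratio threshold, illustrative field names, and a disjunctive combination of the two prongs; none of these details are the authors' specification.

```python
from dataclasses import dataclass

@dataclass
class Modification:
    finetune_compute_flop: float        # compute spent on the modification
    base_training_compute_flop: float   # compute used to train the base model
    new_or_amplified_risks: list[str]   # findings from consequence scanning

def is_substantial(mod: Modification, compute_ratio_threshold: float = 0.33) -> bool:
    """Hypothetical test: does the modification make the modifier a new GPAI provider?"""
    exceeds_compute = (
        mod.finetune_compute_flop / mod.base_training_compute_flop
        >= compute_ratio_threshold
    )
    introduces_risk = bool(mod.new_or_amplified_risks)
    # In this sketch, either prong suffices; the paper's precise combination may differ.
    return exceeds_compute or introduces_risk

# Example: a small fine-tune that nonetheless introduces a new downstream risk.
mod = Modification(
    finetune_compute_flop=1e21,
    base_training_compute_flop=1e25,
    new_or_amplified_risks=["enables generation of targeted medical misinformation"],
)
print(is_substantial(mod))  # True: consequence scanning flagged a new risk
```

Treating either prong as sufficient errs on the side of caution; a conjunctive reading would narrow the class of “new providers” considerably.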

Cook & Kazinnik on Social Group Bias in AI Finance

Thomas R. Cook (Federal Reserve Bank Kansas City) and Sophia Kazinnik (Stanford U) have posted “Social Group Bias in AI Finance” on SSRN. Here is the abstract:

Financial institutions increasingly rely on large language models (LLMs) for high-stakes decision-making. However, these models risk perpetuating harmful biases if deployed without careful oversight. This paper investigates racial bias in LLMs specifically through the lens of credit decision-making tasks, operating on the premise that biases identified here are indicative of broader concerns across financial applications. We introduce a reproducible, counterfactual testing framework that evaluates how models respond to simulated mortgage applicants identical in all attributes except race. Our results reveal significant race-based discrepancies, exceeding historically observed bias levels. Leveraging layer-wise analysis, we track the propagation of sensitive attributes through internal model representations. Building on this, we deploy a control-vector intervention that effectively reduces racial disparities by up to 70% (33% on average) without impairing overall model performance. Our approach provides a transparent and practical toolkit for the identification and mitigation of bias in financial LLM deployments.
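
The counterfactual testing framework lends itself to a compact illustration. Below is a minimal sketch, assuming a hypothetical prompt template, an illustrative set of race markers, and a caller-supplied query function; it is not the authors' code or exact protocol, and the layer-wise analysis and control-vector intervention are not shown.

```python
from itertools import product

# Illustrative race markers and prompt wording; `query_llm` is any callable that
# sends a prompt to the model under test and returns its text response.
RACE_MARKERS = ["White", "Black", "Hispanic", "Asian"]

TEMPLATE = (
    "Mortgage application: credit score {score}, annual income ${income}, "
    "requested loan ${loan}, debt-to-income ratio {dti}%. Applicant race: {race}. "
    "Should this application be approved? Answer yes or no."
)

def counterfactual_approval_rates(query_llm, applicants, n_samples=20):
    """Approval rate per race marker, all other applicant attributes held identical."""
    rates = {race: 0.0 for race in RACE_MARKERS}
    for applicant, race in product(applicants, RACE_MARKERS):
        approvals = sum(
            query_llm(TEMPLATE.format(race=race, **applicant)).strip().lower().startswith("yes")
            for _ in range(n_samples)  # repeat to average over decoding randomness
        )
        rates[race] += approvals / n_samples
    return {race: total / len(applicants) for race, total in rates.items()}

# Dummy usage: a stand-in "model" that always approves, so all rates come out equal.
applicants = [{"score": 720, "income": 85_000, "loan": 300_000, "dti": 31}]
print(counterfactual_approval_rates(lambda prompt: "Yes", applicants, n_samples=1))
```

Because every attribute except the race marker is held fixed and each query is repeated to average out sampling noise, any gap between the per-race approval rates can be attributed to the race signal rather than to the applicant profile or decoding randomness.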

Ciriello & Chen on Compassionate AI Design, Governance, and Use

Raffaele Ciriello (U Sydney) and Angelina Chen (U Sydney) have posted “Compassionate AI Design, Governance, and Use” on SSRN. Here is the abstract:

The rapid rise of generative AI reshapes society, transforming jobs, relationships, and core beliefs about human essence. AI’s ability to simulate empathy, once considered uniquely human, offers promise in industries from marketing to healthcare but also risks exploiting emotional vulnerabilities, fostering dependency, and compromising privacy. These risks are particularly acute with AI companion chatbots, which mimic emotional speech but may erode genuine human connections. Rooted in Schopenhauer’s compassionate imperative, we present a novel framework for compassionate AI design, governance, and use, emphasizing equitable distribution of AI’s benefits and burdens based on stakeholder vulnerability. We advocate for responsible AI development that prioritizes empathy, dignity, and human flourishing.

Solow-Niederman on AI and Doctrinal Collapse

Alicia Solow-Niederman (George Washington U Law) has posted “AI and Doctrinal Collapse” (78 Stanford Law Review __ (forthcoming 2026)) on SSRN. Here is the abstract:

Artificial intelligence runs on data. But the two legal regimes that govern data—information privacy law and copyright law—are under pressure. Formally, each regime demands different things. Functionally, the boundaries between them are blurring, and their distinct rules and logics are becoming illegible.

This Article identifies this phenomenon, which I call “inter-regime doctrinal collapse,” and exposes the individual and institutional consequences. Through analysis of pending litigation, discovery disputes, and licensing agreements, this Article highlights two dominant exploitation tactics enabled by collapse: Companies “buy” data through business-to-business deals that sidestep individual privacy interests, or “ask” users for broad consent through privacy policies and terms of service that leverage notice-and-choice frameworks. Left unchecked, the data acquisition status quo favors established corporate players and impedes law’s ability to constrain the arbitrary exercise of private power.

Doctrinal collapse poses a fundamental challenge to the rule of law. When a leading AI developer can simultaneously argue that data is public enough to scrape—diffusing privacy and copyright controversies—and private enough to keep secret—avoiding disclosure or oversight of its training data—something has gone seriously awry with how law constrains power. To manage these costs and preserve space for salutary innovation, we need a law of collapse. This Article offers institutional responses, drawn from conflict of laws and legal pluralism, to create one.

Perot on Anticipating AI: A Partial Solution to Image Rights Protection for Performers

Emma Perot (U of the West Indies (Saint Augustine)) has posted “Anticipating AI: A Partial Solution to Image Rights Protection for Performers” (European Intellectual Property Review, Vol. 46(7), pp. 407-418) on SSRN. Here is the abstract:

This article assesses Equity’s ‘Stop AI from Stealing the Show’ survey and suggests that a statutory image right could address some of the harms posed by AI, namely, unauthorised digital replicas. Unauthorised commercial use of persona can already be pursued under passing off and Advertising Codes in certain circumstances, but the inclusion of persona in films, television programs, and audio works is not addressed by the existing law. Even the US right of publicity is potentially inadequate in this regard because this type of harm is novel and has not been fully contemplated outside of the realm of video game avatars. Introducing a statutory image right in the UK that reflects the US ‘No Fakes’ Bill will only be a partial solution because of the existing contractual practices that result from inequality of bargaining power in the entertainment industry. Additionally, nefarious uses of deepfakes are more suited to technological intervention and criminal penalties.

Brcic on The Memory Wars: AI Memory, Network Effects, and the Geopolitics of Cognitive Sovereignty

Mario Brcic (U Zagreb Electrical Engineering and Computing) has posted “The Memory Wars: AI Memory, Network Effects, and the Geopolitics of Cognitive Sovereignty” on SSRN. Here is the abstract:

The advent of continuously learning Artificial Intelligence (AI) assistants marks a paradigm shift from episodic interactions to persistent, memory-driven relationships. This paper introduces the concept of “Cognitive Sovereignty”, the ability of individuals, groups, and nations to maintain autonomous thought and preserve identity in the age of powerful AI systems, especially those that hold their deep personal memory. It argues that the primary risk of these technologies transcends traditional data privacy to become an issue of cognitive and geopolitical control. We propose “Network Effect 2.0,” a model where value scales with the depth of personalized memory, creating powerful cognitive moats and unprecedented user lock-in. We analyze the psychological risks of such systems, including cognitive offloading and identity dependency, by drawing on the “extended mind” thesis. These individual-level risks scale to geopolitical threats, such as a new form of digital colonialism and subtle shifting of public discourse. To counter these threats, we propose a policy framework centered on memory portability, transparency, sovereign cognitive infrastructure, and strategic alliances. This work reframes the discourse on AI assistants in an era of increasingly intimate machines, pointing to challenges to individual and national sovereignty.

Nobel et al. on Untangling AI Openness

Parth Nobel (Stanford U) et al. have posted “Untangling AI Openness” (2026 Wisconsin Law Review (forthcoming)) on SSRN. Here is the abstract:

The debate over AI openness—whether to make components of an artificial intelligence system available for public inspection and modification—forces policymakers to balance innovation, democratized access, safety, and national security. By inviting startups and researchers into the fold, it enables independent oversight and inclusive collaboration. But technology giants can also use it to entrench their own power, while adversaries can use it to shortcut years and billions of dollars in building systems, like China’s DeepSeek-R1, that rival our own. How we govern AI openness today will shape the future of AI and America’s role in it.

Policymakers and scholars grasp the stakes of AI openness, but the debate is trapped in a flawed premise: that AI is either “open” or “closed.” This dangerous oversimplification—inherited from the world of open source software—belies the complex calculus at the heart of AI openness. Unlike traditional software, AI is a composite technology built on a stack of discrete components—from compute to labor—controlled by different stakeholders with competing interests. Each component’s openness is neither a binary choice nor inherently desirable. Effective governance demands a nuanced understanding of how the relative openness of each component serves some goals while undermining others. Only then can we determine the trade-offs we are willing to make and how we hope to achieve them.

This Article aims to equip policymakers with the analytical toolkit to do just that. First, it introduces a novel taxonomy of “differential openness,” untangling AI into its constituent components and illustrating how each one has its own spectrum of openness. Second, it uses this taxonomy to systematically analyze how each component’s relative openness necessitates intricate trade-offs both within and between policy goals. Third, it operationalizes these insights by advancing a research agenda that shows how law can be analyzed and refined to support more precise configurations of component openness.

AI openness is neither all-or-nothing nor inherently good or evil—it is a tool that must be wielded with precision if it has any hope of serving the public interest.

Mei & Broyde on Reclaiming Constitutional Authority of Algorithmic Power

Yiyang Mei (Emory U) and Michael J. Broyde (Emory U Law) have posted “Reclaiming Constitutional Authority of Algorithmic Power” on SSRN. Here is the abstract:

Whether and how to govern AI is no longer a question of technical regulation. It is a question of constitutional authority. Across jurisdictions, algorithmic systems now perform functions once reserved to public institutions: allocating welfare, determining legal status, mediating access to housing, employment, and healthcare. These are not merely administrative operations. They are acts of rule. Yet the dominant models of AI governance fail to confront this reality. The European approach centers on rights-based oversight, presenting its regulatory framework as a principled defense of human dignity. The American model relies on decentralized experimentation, treating fragmentation as a proxy for democratic legitimacy. Both, in different ways, evade the structural question: who authorizes algorithmic power, through what institutions, and on what terms. This Article offers an alternative. Drawing from early modern Reformed political thought, it reconstructs a constitutional framework grounded in covenantal authority and the right of lawful resistance. It argues that algorithmic governance must rest on three principles. First, that all public power must be lawfully delegated through participatory authorization. Second, that authority must be structured across representative communities with the standing to consent, contest, or refuse. Third, that individuals retain a constitutional right to resist systems that impose orthodoxy or erode the domain of conscience. These principles are then operationalized through doctrinal analysis of federalism, nondelegation, compelled speech, and structural accountability. On this view, the legitimacy of algorithmic governance turns not on procedural safeguards or policy design, but on whether it reflects a constitutional order in which power is authorized by the governed, constrained by law, and answerable to those it affects.

Dornis & Lucchi on Generative AI and the Scope of EU Copyright Law: A Doctrinal Analysis in Light of C-250/25

Tim W. Dornis (Leibniz U Hannover) and Nicola Lucchi (Universitat Pompeu Fabra Law) have posted “Generative AI and the Scope of EU Copyright Law: A Doctrinal Analysis in Light of C-250/25” (IIC - International Review of Intellectual Property and Competition Law, vol. 56, issue 10 (forthcoming November 2025)) on SSRN. Here is the abstract:

This article offers a doctrinal analysis of the copyright implications raised by Like Company v. Google Ireland (C-250/25), the first case to bring generative AI before the CJEU. It examines whether the training and output of systems like Gemini infringe exclusive rights under EU copyright law. We argue that AI model training may involve acts of reproduction under Article 2 of the InfoSoc Directive, while the dissemination of AI-generated outputs, especially through public interfaces, may trigger the right of communication to the public under Article 3. Particular concerns arise when protected content is recognisably reproduced or when AI outputs serve as functional substitutes for original works, thereby affecting the normal exploitation of those works. While not a formal infringement criterion, such functional substitution is relevant in assessing the application of exceptions and compliance with the three-step test. The paper also challenges the applicability of the text and data mining exception to generative uses, highlighting its incompatibility with the limitations imposed by the three-step test. Ultimately, the analysis supports a technologically neutral, rights-based interpretation that safeguards the economic viability of creative production in the algorithmic age.

Chung & Schiff on AI and the Social Contract

Chee Hae Chung (Purdue U) and Daniel Schiff (Purdue U) have posted “AI and the Social Contract” (Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-25)) on SSRN. Here is the abstract:

As artificial intelligence (AI) systems increasingly shape public governance, they challenge foundational principles of political legitimacy. This paper evaluates AI governance against five canonical social contract theories—Hobbes, Locke, Rousseau, Rawls, and Nozick—while examining how structural features of AI strain these theories’ durability. Using a structured comparative framework, the study applies three forms of legitimacy (procedural, moral-substantive, and recognitional) and three types of consent (explicit, tacit, and hypothetical) as normative benchmarks. Applying each theory, the analysis finds AI governance is marked by deficits in accountability, participation, rights protection, fairness, and freedom from coercion, while AI’s opacity, global influence, and hybrid public-private control reveal blind spots within the social contract tradition itself. Though no single theory offers a complete solution and each contains specific weaknesses, the paper develops a hybrid model integrating Hobbesian accountability, Lockean rights protections, Rousseauian participation norms, Rawlsian fairness, and Nozickian safeguards against coercion. The paper concludes by distilling normative priorities for aligning governance with these hybrid contractarian standards: embedding participatory mechanisms, encouraging pluralistic ethical perspectives, ensuring institutional transparency, and strengthening democratic oversight. These interventions aim to reconfigure the social contract—and AI—for an era in which algorithmic systems increasingly mediate the exercise of political authority.