Mei et al. on The Illusory Normativity of Rights-Based AI Regulation

Yiyang Mei (Emory U) and Matthew Sag (Emory U Law) have posted “The Illusory Normativity of Rights-Based AI Regulation” on SSRN. Here is the abstract:

Whether and how to regulate AI is now a central question of governance. Across academic, policy, and international legal circles, the European Union is widely treated as the normative leader in this space. Its regulatory framework, anchored in the General Data Protection Regulation, the Digital Services and Markets Acts, and the AI Act, is often portrayed as a principled model grounded in fundamental rights. This Article challenges that assumption. We argue that the rights-based narrative surrounding EU AI regulation mischaracterizes the logic of its institutional design. While rights language pervades EU legal instruments, its function is managerial, not foundational. These rights operate as tools of administrative ordering, used to mitigate technological disruption, manage geopolitical risk, and preserve systemic balance, rather than as expressions of moral autonomy or democratic consent. Drawing on comparative institutional analysis, we situate EU AI governance within a longer tradition of legal ordering shaped by the need to coordinate power across fragmented jurisdictions. We contrast this approach with the American model, which reflects a different regulatory logic rooted in decentralized authority, sectoral pluralism, and a constitutional preference for innovation and individual autonomy. Through case studies in five key domains—data privacy, cybersecurity, healthcare, labor, and disinformation—we show that EU regulation is not meaningfully rights-driven, as is often claimed. It is instead structured around the containment of institutional risk. Our aim is not to endorse the American model but to reject the presumption that the EU approach reflects a normative ideal that other nations should uncritically adopt. The EU model is best understood as a historically contingent response to its own political conditions, not a template for others to blindly follow.

Lyons on The Litigation Solution: Why Courts, Not Code Mandates, Should Address AI Discrimination

Daniel Lyons (Boston College Law) has posted “The Litigation Solution: Why Courts, Not Code Mandates, Should Address AI Discrimination” on SSRN. Here is the abstract:

As artificial intelligence systems increasingly influence decisionmaking in high-stakes sectors, policymakers have focused on regulating model design to combat algorithmic bias. Drawing on examples from the European Union’s AI Act and recent state legislation, this Article critiques the emerging “fairness by design” paradigm. It argues that design mandates rest on a flawed premise: that bias can be objectively defined and mitigated ex ante without compromising competing values such as accuracy, privacy, or innovation. In reality, efforts to engineer fairness through prescriptive regulation risk distorting markets, entrenching incumbents, and stifling technological advancement. Moreover, the opaque, evolving nature of AI systems—especially generative models—makes it difficult to anticipate or eliminate future biases through design alone, often creating tradeoffs that regulators are ill-equipped to manage.

Rather than regulating AI inputs, the Article advocates for a litigation-first approach that focuses on AI outputs and leverages existing antidiscrimination law to address harms as they arise. By applying traditional disparate treatment and disparate impact frameworks to AI-assisted decisions, courts can assess when biased outcomes rise to the level of unlawful discrimination—without prematurely constraining innovation or imposing rigid mandates. This model mirrors America’s historical preference for permissive innovation, allowing technology to evolve while holding bad actors accountable under general principles of law. The result is a more flexible, targeted regulatory regime that fosters AI development while safeguarding civil rights.

Klonowska et al. on Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI

Klaudia Klonowska (T.M.C. Asser Institute) and Taylor Kate Woodcock (T.M.C. Asser Institute) have posted “Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI” (Forthcoming in Boutin B., Woodcock T. K. & Soltanzadeh S. (eds.), Decision at the Edge: Interdisciplinary Dilemmas in Military Artificial Intelligence, Asser Press (2025)) on SSRN. Here is the abstract:

The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision making in the conduct of war? As AI becomes increasingly embedded in targeting processes, legal and ethical debates often ask who performs better: humans or machines. In this Chapter, we unpack and critique the prevalence of comparisons between humans and AI systems, including in analyses of the fulfilment of legal obligations under International Humanitarian Law (IHL). We challenge this binary framing by highlighting misleading assumptions that neglect how the use of AI results in complex human-machine interactions that transform targeting practices. We unpack what is meant by ‘better performance’, demonstrating how prevailing metrics for speed and accuracy can create misleading expectations around the use of AI given the realities of warfare. We conclude that holistic but granular attention must be paid to the landscape of human-machine interactions to understand how the use of AI impacts compliance with IHL targeting obligations.

Feher et al. on Is AI Trained on Public Money? Evidence from US Data Centers

Adam Feher (U Lausanne) et al. have posted “Is AI Trained on Public Money? Evidence from US Data Centers” on SSRN. Here is the abstract:

Rapid data center growth has raised concerns about rising energy demand and its effects. Leveraging a novel dataset of U.S. data center energy loads, utility prices, and establishment-level outcomes, we quantify local spillover effects on electricity prices, firm performance, and emissions. Using an instrumental-variables (IV) continuous difference-in-differences (DiD) design exploiting exogenous variation in data center location attractiveness, we find no local spillovers over 2010–2024. A regional model calibrated to the empirical null suggests that shocks larger than those observed through 2024 could still result in noticeable increases in household utility bills if not offset by regulation or external supply.

Fan et al. on Novel Corporate Governance Structures

Jennifer S. Fan (Loyola Law Los Angeles) and Xuan-thao Nguyen (U Washington Law) have posted “Novel Corporate Governance Structures” (Harvard Journal of Law & Technology, Volume 38, Number 4 Spring 2025) on SSRN. Here is the abstract:

Artificial Intelligence (“AI”) startups have taken center stage, rapidly disrupting conventional industries at an unprecedented pace with their groundbreaking innovations. Hailed by many as the most significant technological advancement of our era, AI’s profound societal impact has garnered heightened public and governmental scrutiny. The spotlight has recently fallen on OpenAI, the creator of ChatGPT, which weathered a tumultuous period marked by the ouster and subsequent rehiring of CEO Sam Altman, a board reconfiguration, and Altman’s later return to the board. Concerns over AI safety were offered as the rationale for the tandem corporate governance structure of nonprofit and for-profit at OpenAI which led to board friction, a management coup, and superalignment defection. Similarly, concerns over AI safety also underscore the creation of the corporate structures at Anthropic and xAI.

This Article explores the innovative corporate governance models that have emerged from leading AI startups like OpenAI, Anthropic, and xAI, assessing their long-term viability as these companies race against one another in building AI foundation models. Ultimately, it proposes a path forward for improved governance in AI startups by advocating for an amendment to corporate law requiring a board-level AI Safety Committee at AI startups.

Barnett on The Free Content Illusion

Jonathan Barnett (USC Gould Law) has posted “The Free Content Illusion” (Journal of Intellectual Property Law (2026)) on SSRN. Here is the abstract:

Peer-to-peer file sharing in the early 2000s destabilized traditional content markets and associated business models that rely on preserving control over the use of creative assets. Academics and other commentators widely argued that robust forms of intellectual property rights had been rendered largely obsolete in a digital environment of low production and distribution costs. Reflecting this view, courts expanded the fair use doctrine and generously applied safe harbors under the Digital Millennium Copyright Act, which largely immunized platforms against liability for user infringement and consistently favored content aggregators over originators. The subsequent evolution of digital markets nonetheless shows that exclusivity protections remain critical to sustaining an independently viable content economy that does not rely on philanthropic or governmental patronage. Streaming services in audio, video, and literary media restored revenue flows to content originators through contractual and technological complements to copyright protection, while content segments (notably, the news industry) that failed to deploy such mechanisms struggled economically. Contrary to prevailing views, meaningful property rights and other exclusivity protections remain essential for sustaining the production, financing, and development of creative assets in digital environments and, together with technological and contractual complements, are likely to retain this role in supporting a robust flow of original content for the artificial intelligence ecosystem.

Alwis on “Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems

Rangita De Silva De Alwis (U Pennsylvania Carey Law) has posted “‘Because We Take Our Values to War’: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems” (Chicago Journal of International Law, forthcoming) on SSRN. Here is the abstract:

Paragraph 2 of UN General Assembly Resolution 78/241 requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

In implementation of this directive, on 1 February 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States inviting their formal input. This paper analyzes, for the first time, the positions of Member States on AI-driven LAWS. Using a qualitative coding matrix, the paper examines Member States’ positions in relation to human-centric approaches to AI-driven LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should be not only whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but also whether they can be free of data, algorithmic, and programmer bias. Although much has been written about algorithmic bias, an “algorithmic divide” can create an AI-driven weapons asymmetry between different nation states depending on who has access to AI.

The article raises the question whether Yale Law’s Oona Hathaway’s recent arguments on individual and state responsibility for the patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future disputes, machines do and will continue to make life-and-death decisions without the help of human decision-making. Who will then be responsible for the “mistakes” in war?

During his 2017 testimony to the US Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated, “… because we take our values to war … I do not think it is reasonable for us to put robots in charge of whether or not we take a human life.” The laws of war are rapidly approaching a critical crossroads in their relationship with technology.

Su & Teo on Can AI Agents Have Rights?

Anna Su (U Toronto Law) and Sue Anne Teo (Raoul Wallenberg Institute of Human Rights and Humanitarian Law; Harvard University – Carr-Ryan Center for Human Rights) have posted “Can AI Agents Have Rights?” on SSRN. Here is the abstract:

AI agents are rapidly moving from theoretical constructs to real-world deployments. This raises urgent questions about governance and, more specifically, rights. While rights for artificial agents have received some scholarly attention, the question of rights for AI agents understood through the paradigmatic shift introduced by generative AI remains underexamined. This article addresses that gap. We develop four theoretical pathways through which rights for AI agents might be recognized. The derivative argument draws on the legal and moral foundations of agency law and tests their applicability to AI agents. The diffusion argument holds that AI agents’ deep embeddedness in social life creates pressure to render their actions legible within existing frameworks of responsibility and liability. The distinction argument examines whether AI agents possess capacities (including a potential role in resolving collective action problems requiring high levels of social coordination) that independently justify rights recognition. The devolution argument frames rights as a counterweight to the concentration of corporate power over AI systems. A central contribution of this analysis is decoupling the question of AI rights from moral personhood and its associated qualities, such as sentience and consciousness. We also address three objections: that AI rights would dilute human rights, generate a problematic proliferation of rights, and that regulatory goals could be achieved through legal duties alone. As AI agents become increasingly embedded in social, professional, and political life, questions about their rights will inevitably arise. This article offers a more nuanced framework for addressing them.

Abiri on Mutually Assured Deregulation

Gilad Abiri (Peking U Transnational Law) has posted “Mutually Assured Deregulation” (Stanford Technology Law Review) on SSRN. Here is the abstract:

We have convinced ourselves that the way to make AI safe is to make it unsafe. Since 2022, many policymakers worldwide have embraced the “Regulation Sacrifice”—the belief that dismantling safety oversight will somehow deliver security through AI dominance. The reasoning follows a perilous pattern: fearing that China or the USA will dominate the AI landscape, we rush to eliminate any safeguard that might slow our progress. This Essay reveals the fatal flaw in such thinking. Though AI development certainly poses national security challenges, the solution demands stronger regulatory frameworks, not weaker ones. A race without guardrails doesn’t build competitive strength—it breeds shared danger.

The Regulation Sacrifice makes three promises. Each one is false. First, it promises durable technological leads. But as a form of dual-use software, AI capabilities spread like wildfire. Performance gaps between U.S. and Chinese systems collapsed from 9% to 2% in thirteen months. When advantages evaporate in months, sacrificing permanent safety for temporary speed makes no sense.

Second, it promises that deregulation accelerates innovation. The opposite is quite often true. Companies report that well-designed governance frameworks streamline their development. Investment flows toward regulated markets, not away from them. Clear rules reduce uncertainty. Uncertain liability creates paralysis. We have seen this movie before—environmental standards didn’t kill the auto industry; they created Tesla and BYD.

Third, the promise of enhanced national security through deregulation is perhaps the most dangerous fallacy, as it actually undermines security across all timeframes. In the near term, it hands our adversaries perfect tools for information warfare. In the medium term, it puts bioweapon capabilities in everyone’s hands. In the long term, it guarantees we’ll deploy AGI systems we cannot control, racing to be the first to push a button we can’t unpush.

The Regulation Sacrifice persists because it serves powerful interests, not because it serves security. Tech companies prefer freedom to accountability. Politicians prefer simple stories to complex truths. Together they are trying to convince us that recklessness is patriotism. But here is the punchline: these ideas create a system of mutually assured deregulation, where each nation’s sprint for advantage guarantees collective vulnerability. The only way to win this game is not to play.

Peng et al. on Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore

Huijuan Peng (Singapore Management U Yong Pung How Law) and Pey-woan Lee (Singapore Management U Yong Pung How Law) have posted “Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore” (Journal of Tort Law, doi:10.1515/jtl-2025-0028) on SSRN. Here is the abstract:

This Article explores how U.S. tort law can respond more effectively to the distinct harms posed by deepfakes, including reputational injury, identity appropriation, and emotional distress. Traditional tort doctrines, such as defamation, the right of publicity, and intentional infliction of emotional distress (IIED), remain fragmented and ill-suited to the speed, scale, and anonymity of deepfake dissemination. Using a comparative functionalist approach, the Article analyzes how China and Singapore respond to deepfake harms through structurally divergent but functionally instructive frameworks. China’s model combines codified personality rights with intermediary obligations under a civil law regime, while Singapore adopts a hybrid approach that integrates common law torts with targeted statutory and administrative interventions. Although neither model is directly replicable in the United States, both offer valuable comparative insights to guide the reform of U.S. tort law. The Article advances an integrated governance model for U.S. tort law: reconstructing personality-based torts, repositioning tort law through conditional intermediary liability, and clarifying constitutionally grounded limits for speech-based claims. Drawing on Chinese and Singaporean legal approaches, the Article sets out a comparative reform framework that enables U.S. tort law to better address deepfake harms while safeguarding autonomy and dignity in AI-driven digital environments.