Feher et al. on Is AI Trained on Public Money? Evidence from US Data Centers

Adam Feher (U Lausanne) et al. have posted “Is AI Trained on Public Money? Evidence from US Data Centers” on SSRN. Here is the abstract:

Rapid data center growth has raised concerns about rising energy demand and its effects. Leveraging a novel dataset of U.S. data center energy loads, utility prices, and establishment-level outcomes, we quantify local spillover effects on electricity prices, firm performance, and emissions. Using an IV continuous DiD exploiting exogenous variation in data center location attractiveness, we find no local spillovers over 2010–2024. A regional model calibrated to the empirical null suggests that shocks larger than those observed through 2024 could still result in noticeable increases in household utility bills if not offset by regulation or external supply.

Fan et al. on Novel Corporate Governance Structures

Jennifer S. Fan (Loyola Law Los Angeles) and Xuan-Thao Nguyen (U Washington Law) have posted “Novel Corporate Governance Structures” (Harvard Journal of Law & Technology, Volume 38, Number 4, Spring 2025) on SSRN. Here is the abstract:

Artificial Intelligence (“AI”) startups have taken center stage, rapidly disrupting conventional industries at an unprecedented pace with their groundbreaking innovations. Hailed by many as the most significant technological advancement of our era, AI’s profound societal impact has garnered heightened public and governmental scrutiny. The spotlight has recently fallen on OpenAI, the creator of ChatGPT, which weathered a tumultuous period marked by the ouster and subsequent rehiring of CEO Sam Altman, a board reconfiguration, and Altman’s later return to the board. Concerns over AI safety were offered as the rationale for OpenAI’s tandem corporate governance structure of nonprofit and for-profit entities, which led to board friction, a management coup, and the defection of its superalignment team. Concerns over AI safety likewise underlie the corporate structures adopted at Anthropic and xAI.

This Article explores the innovative corporate governance models that have emerged from leading AI startups like OpenAI, Anthropic, and xAI, assessing their long-term viability as these companies race against one another in building AI foundation models. Ultimately, it proposes a path forward for improved governance by advocating for an amendment to corporate law that would require a board-level AI Safety Committee at AI startups.

Barnett on The Free Content Illusion

Jonathan Barnett (USC Gould Law) has posted “The Free Content Illusion” (Journal of Intellectual Property Law (2026)) on SSRN. Here is the abstract:

Peer-to-peer file sharing in the early 2000s destabilized traditional content markets and associated business models that rely on preserving control over the use of creative assets. Academics and other commentators widely argued that robust forms of intellectual property rights had been rendered largely obsolete in a digital environment of low production and distribution costs. Reflecting this view, courts expanded the fair use doctrine and generously applied safe harbors under the Digital Millennium Copyright Act, which largely immunized platforms against liability for user infringement and consistently favored content aggregators over originators. The subsequent evolution of digital markets nonetheless shows that exclusivity protections remain critical to sustaining an independently viable content economy that does not rely on philanthropic or governmental patronage. Streaming services in audio, video, and literary media restored revenue flows to content originators through contractual and technological complements to copyright protection, while content segments (notably, the news industry) that failed to deploy such mechanisms struggled economically. Contrary to prevailing views, meaningful property rights and other exclusivity protections remain essential for sustaining the production, financing, and development of creative assets in digital environments. Together with technological and contractual complements, they are likely to retain this role in supporting a robust flow of original content for the artificial intelligence ecosystem.

Alwis on “Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems

Rangita de Silva de Alwis (U Pennsylvania Carey Law) has posted “‘Because We Take Our Values to War’: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems” (Chicago Journal of International Law, forthcoming) on SSRN. Here is the abstract:

Paragraph 2 of UN General Assembly Resolution 78/241 requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

In implementation of this directive, on 1 February 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to the resolution and inviting their formal input. This paper analyzes, for the first time, the positions of Member States on AI-driven LAWS. Using a qualitative coding matrix, the paper examines Member States’ positions in relation to human-centric approaches to AI-driven LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should be not only whether they follow the international humanitarian law principles of distinction, proportionality, and precaution but also whether they can be free of data, algorithmic, and programmer bias. Although much has been written about algorithmic bias, an “algorithmic divide” can create an AI-driven weapons asymmetry between nation-states depending on who has access to AI.

The Article raises the question of whether the recent arguments of Yale Law’s Oona Hathaway on individual and state responsibility for patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future conflicts, machines do and will continue to make life-and-death decisions without human decision-making. Who, then, will be responsible for the “mistakes” in war?

In 2017 testimony before the US Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated, “… because we take our values to war … I do not think it is reasonable for us to put robots in charge of whether or not we take a human life.” The laws of war are rapidly approaching a critical crossroads in war’s relationship with technology.

Su & Teo on Can AI Agents Have Rights?

Anna Su (U Toronto Law) and Sue Anne Teo (Raoul Wallenberg Institute of Human Rights and Humanitarian Law; Harvard University – Carr-Ryan Center for Human Rights) have posted “Can AI Agents Have Rights?” on SSRN. Here is the abstract:

AI agents are rapidly moving from theoretical constructs to real-world deployments. This raises urgent questions about governance and, more specifically, rights. While rights for artificial agents have received some scholarly attention, the question of rights for AI agents, understood through the paradigmatic shift introduced by generative AI, remains underexamined. This article addresses that gap. We develop four theoretical pathways through which rights for AI agents might be recognized. The derivative argument draws on the legal and moral foundations of agency law and tests their applicability to AI agents. The diffusion argument holds that AI agents’ deep embeddedness in social life creates pressure to render their actions legible within existing frameworks of responsibility and liability. The distinction argument examines whether AI agents possess capacities (including a potential role in resolving collective action problems requiring high levels of social coordination) that independently justify rights recognition. The devolution argument frames rights as a counterweight to the concentration of corporate power over AI systems. A central contribution of this analysis is decoupling the question of AI rights from moral personhood and its associated qualities, such as sentience and consciousness. We also address three objections: that AI rights would dilute human rights, that they would generate a problematic proliferation of rights, and that regulatory goals could be achieved through legal duties alone. As AI agents become increasingly embedded in social, professional, and political life, questions about their rights will inevitably arise. This article offers a more nuanced framework for addressing them.

Abiri on Mutually Assured Deregulation

Gilad Abiri (Peking U Transnational Law) has posted “Mutually Assured Deregulation” (Stanford Technology Law Review) on SSRN. Here is the abstract:

We have convinced ourselves that the way to make AI safe is to make it unsafe. Since 2022, many policymakers worldwide have embraced the “Regulation Sacrifice”—the belief that dismantling safety oversight will somehow deliver security through AI dominance. The reasoning follows a perilous pattern: fearing that China or the USA will dominate the AI landscape, we rush to eliminate any safeguard that might slow our progress. This Essay reveals the fatal flaw in such thinking. Though AI development certainly poses national security challenges, the solution demands stronger regulatory frameworks, not weaker ones. A race without guardrails doesn’t build competitive strength—it breeds shared danger.

The Regulation Sacrifice makes three promises. Each one is false. First, it promises durable technological leads. But as a form of dual-use software, AI capabilities spread like wildfire. Performance gaps between U.S. and Chinese systems collapsed from 9% to 2% in thirteen months. When advantages evaporate in months, sacrificing permanent safety for temporary speed makes no sense.

Second, it promises that deregulation accelerates innovation. The opposite is quite often true. Companies report that well-designed governance frameworks streamline their development. Investment flows toward regulated markets, not away from them. Clear rules reduce uncertainty. Uncertain liability creates paralysis. We have seen this movie before—environmental standards didn’t kill the auto industry; they created Tesla and BYD.

Third, the promise of enhanced national security through deregulation is perhaps the most dangerous fallacy, as it actually undermines security across all timeframes. In the near term, it hands our adversaries perfect tools for information warfare. In the medium term, it puts bioweapon capabilities in everyone’s hands. In the long term, it guarantees we’ll deploy AGI systems we cannot control, racing to be the first to push a button we can’t unpush.

The Regulation Sacrifice persists because it serves powerful interests, not because it serves security. Tech companies prefer freedom to accountability. Politicians prefer simple stories to complex truths. Together they are trying to convince us that recklessness is patriotism. But here is the punchline: these ideas create a system of mutually assured deregulation, where each nation’s sprint for advantage guarantees collective vulnerability. The only way to win this game is not to play.

Peng & Lee on Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore

Huijuan Peng (Singapore Management U Yong Pung How Law) and Pey-woan Lee (Singapore Management U Yong Pung How Law) have posted “Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore” (Journal of Tort Law, DOI: 10.1515/jtl-2025-0028) on SSRN. Here is the abstract:

This Article explores how U.S. tort law can respond more effectively to the distinct harms posed by deepfakes, including reputational injury, identity appropriation, and emotional distress. Traditional tort doctrines, such as defamation, the right of publicity, and intentional infliction of emotional distress (IIED), remain fragmented and ill-suited to the speed, scale, and anonymity of deepfake dissemination. Using a comparative functionalist approach, the Article analyzes how China and Singapore respond to deepfake harms through structurally divergent but functionally instructive frameworks. China’s model combines codified personality rights with intermediary obligations under a civil law regime, while Singapore adopts a hybrid approach that integrates common law torts with targeted statutory and administrative interventions. Although neither model is directly replicable in the United States, both offer valuable comparative insights to guide the reform of U.S. tort law. The Article advances an integrated governance model for U.S. tort law: reconstructing personality-based torts, repositioning tort law through conditional intermediary liability, and clarifying constitutionally grounded limits for speech-based claims. Drawing on Chinese and Singaporean legal approaches, the Article sets out a comparative reform framework that enables U.S. tort law to better address deepfake harms while safeguarding autonomy and dignity in AI-driven digital environments.

Long on The Mirror Test for AI Agents: A Path to Regulate Autonomous Algorithmic Collusion

Sean Norick Long (Georgetown U Law Center) has posted “The Mirror Test for AI Agents: A Path to Regulate Autonomous Algorithmic Collusion” on SSRN. Here is the abstract:

A US federal judge recently reasoned that a pricing algorithm learns “no different” from an attorney. This comparison is flawed in its immediate context, but it poses a greater danger: entrenching a mental model that blinds antitrust enforcement to the emergent threat of autonomous algorithmic collusion, where AI agents coordinate without human instruction. To prove collusion, courts cannot look directly into the human mind for intent, so they rely on an indirect proxy: evidence of observable communication between competitors. This paper argues the proxy is obsolete for AI agents, because their initial design and behavioral patterns are directly observable, offering a new basis to rule out independent action. In its place, I propose a two-part Mirror Test: an ex ante Design Test examines initial conditions for collusive bias, while an ex post Pattern Test detects coordinated pricing patterns inconsistent with independent action. This test can be implemented through agency guidance rather than new legislation, protecting the competitive process while giving companies predictable standards for compliance.

Fagan on When Fair Use Fails: Contingent Licensing for AI Training

Frank Fagan (South Texas College Law Houston) has posted “When Fair Use Fails: Contingent Licensing for AI Training” (forthcoming, Foundation for American Innovation, 2025) on SSRN. Here is the abstract:

As content producers increasingly gate material in response to AI-driven substitution, despite no changes to fair use law, there is a growing risk that socially valuable inputs may disappear from the generative AI training ecosystem. This paper proposes a narrowly tailored, contingent licensing scheme to preserve access to high-value content when market failures prevent voluntary licensing. The scheme activates only when three conditions are met: (1) the content is demonstrably valuable for training; (2) the producer is economically marginal, that is, likely to restrict or withdraw access absent compensation; and (3) voluntary licensing has failed due to high transaction costs or bargaining asymmetries. While the proposal is focused on economically marginal creators at risk of exit, it allows for future extension to inframarginal producers if systemic gating emerges (defined here as a sustained, measurable reduction in access to critical content, whether by a majority of producers or by a small set whose gating materially degrades model performance). Drawing on the model of compulsory music licensing, the fallback mechanism operates only when necessary and always includes an opt-out, offering a light-touch intervention to sustain open access without undermining innovation or core publication incentives. In this way, the proposal aims to preserve innovation conditions when asymmetric withdrawal risks distorting competition and locking in advantages for firms with early licensing deals or deep proprietary libraries. Stronger measures that compel content creators to license their works, without an opt-out, are considered but tentatively rejected as inefficient and likely to distort functioning markets.

Murray on Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence

Peter Murray (Oak Brook College Law) has posted “Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence” (Transactions on Engineering and Computing Sciences, Volume 13, Issue 2, 2025, DOI: 10.14738/tecs.1302.18401) on SSRN. Here is the abstract:

Crimebots are fueling the cybercrime pandemic by exploiting artificial intelligence (AI) to facilitate crimes such as fraud, misrepresentation, extortion, blackmail, identity theft, and security breaches. These AI-driven criminal activities pose a significant threat to individuals, businesses, online transactions, and even the integrity of the legal system. Crimebots enable unjust exonerations and wrongful convictions by fabricating evidence, creating deepfake alibis, and generating misleading crime reconstructions. In response, lawbots have emerged as a counterforce, designed to uphold justice. Legal professionals use lawbots to collect and analyze evidence, streamline legal processes, and enhance the administration of justice. To mitigate the risks posed by both crimebots and lawbots, many jurisdictions have established ethical guidelines promoting the responsible use of AI by lawyers and clients. Approximately 1.34% of lawyers have been involved in AI-related legal disputes, often revolving around issues such as fees, conflicts of interest, negligence, ethical violations, evidence tampering, and discrimination. Additional concerns include fraud, confidentiality breaches, harassment, and the misuse of AI for criminal purposes. For lawbots to succeed in the ongoing battle against crimebots, strict adherence to complex AI regulations is essential. Ensuring compliance with these guidelines minimizes malpractice risks, prevents professional sanctions, preserves client trust, and upholds the ethical and legal professional standards of excellence.