Alwis on “Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems

Rangita De Silva De Alwis (U Pennsylvania Carey Law) has posted “‘Because We Take Our Values to War’: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems” (Chicago Journal of International Law, forthcoming) on SSRN. Here is the abstract:

Paragraph 2 of UN General Assembly Resolution 78/241 requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

In implementation of this directive, on 1 February 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to the resolution and inviting their formal input. This paper analyzes, for the first time, the positions of Member States on AI-driven LAWS. Using a qualitative coding matrix, the paper examines Member States’ positions in relation to human-centric approaches to AI-driven LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should not only be whether they follow the international humanitarian law principles of distinction, proportionality, and precaution but whether they can be free of data, algorithmic, and programmer bias. Although much has been written about algorithmic bias, an “algorithmic divide” can create an AI-driven weapons asymmetry between nation states depending on who has access to AI.

The article raises the question of whether Yale Law’s Oona Hathaway’s recent arguments on individual and state responsibility for the patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future disputes, machines do and will continue to make life-and-death decisions without the help of human decision-making. Who will then be responsible for the “mistakes” in war?

In 2017 testimony to the US Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated, “… because we take our values to war … I do not think it is reasonable for us to put robots in charge of whether or not we take a human life.” The laws of war are rapidly advancing to a critical crossroads in war’s relationship with technology.

Su & Teo on Can AI Agents have Rights?

Anna Su (U Toronto Law) and Sue Anne Teo (Raoul Wallenberg Institute of Human Rights and Humanitarian Law; Harvard University – Carr-Ryan Center for Human Rights) have posted “Can AI Agents have Rights?” on SSRN. Here is the abstract:

AI agents are rapidly moving from theoretical constructs to real-world deployments. This raises urgent questions about governance and, more specifically, rights. While rights for artificial agents have received some scholarly attention, the question of rights for AI agents understood through the paradigmatic shift introduced by generative AI remains underexamined. This article addresses that gap. We develop four theoretical pathways through which rights for AI agents might be recognized. The derivative argument draws on the legal and moral foundations of agency law and tests their applicability to AI agents. The diffusion argument holds that AI agents’ deep embeddedness in social life creates pressure to render their actions legible within existing frameworks of responsibility and liability. The distinction argument examines whether AI agents possess capacities (including a potential role in resolving collective action problems requiring high levels of social coordination) that independently justify rights recognition. The devolution argument frames rights as a counterweight to the concentration of corporate power over AI systems. A central contribution of this analysis is decoupling the question of AI rights from moral personhood and its associated qualities, such as sentience and consciousness. We also address three objections: that AI rights would dilute human rights, that they would generate a problematic proliferation of rights, and that regulatory goals could be achieved through legal duties alone. As AI agents become increasingly embedded in social, professional, and political life, questions about their rights will inevitably arise. This article offers a more nuanced framework for addressing them.

Abiri on Mutually Assured Deregulation

Gilad Abiri (Peking U Transnational Law) has posted “Mutually Assured Deregulation” (Stanford Technology Law Review) on SSRN. Here is the abstract:

We have convinced ourselves that the way to make AI safe is to make it unsafe. Since 2022, many policymakers worldwide have embraced the “Regulation Sacrifice”—the belief that dismantling safety oversight will somehow deliver security through AI dominance. The reasoning follows a perilous pattern: fearing that China or the USA will dominate the AI landscape, we rush to eliminate any safeguard that might slow our progress. This Essay reveals the fatal flaw in such thinking. Though AI development certainly poses national security challenges, the solution demands stronger regulatory frameworks, not weaker ones. A race without guardrails doesn’t build competitive strength—it breeds shared danger.

The Regulation Sacrifice makes three promises. Each one is false. First, it promises durable technological leads. But as a form of dual-use software, AI capabilities spread like wildfire. Performance gaps between U.S. and Chinese systems collapsed from 9% to 2% in thirteen months. When advantages evaporate in months, sacrificing permanent safety for temporary speed makes no sense.

Second, it promises that deregulation accelerates innovation. The opposite is quite often true. Companies report that well-designed governance frameworks streamline their development. Investment flows toward regulated markets, not away from them. Clear rules reduce uncertainty. Uncertain liability creates paralysis. We have seen this movie before—environmental standards didn’t kill the auto industry; they created Tesla and BYD.

Third, the promise of enhanced national security through deregulation is perhaps the most dangerous fallacy, as it actually undermines security across all timeframes. In the near term, it hands our adversaries perfect tools for information warfare. In the medium term, it puts bioweapon capabilities in everyone’s hands. In the long term, it guarantees we’ll deploy AGI systems we cannot control, racing to be the first to push a button we can’t unpush.

The Regulation Sacrifice persists because it serves powerful interests, not because it serves security. Tech companies prefer freedom to accountability. Politicians prefer simple stories to complex truths. Together they are trying to convince us that recklessness is patriotism. But here is the punchline: these ideas create a system of mutually assured deregulation, where each nation’s sprint for advantage guarantees collective vulnerability. The only way to win this game is not to play.

Peng et al. on Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore

Huijuan Peng (Singapore Management U Yong Pung How Law) and Pey-woan Lee (Singapore Management U Yong Pung How Law) have posted “Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore” (Journal of Tort Law, DOI: 10.1515/jtl-2025-0028) on SSRN. Here is the abstract:

This Article explores how U.S. tort law can respond more effectively to the distinct harms posed by deepfakes, including reputational injury, identity appropriation, and emotional distress. Traditional tort doctrines, such as defamation, the right of publicity, and intentional infliction of emotional distress (IIED), remain fragmented and ill-suited to the speed, scale, and anonymity of deepfake dissemination. Using a comparative functionalist approach, the Article analyzes how China and Singapore respond to deepfake harms through structurally divergent but functionally instructive frameworks. China’s model combines codified personality rights with intermediary obligations under a civil law regime, while Singapore adopts a hybrid approach that integrates common law torts with targeted statutory and administrative interventions. Although neither model is directly replicable in the United States, both offer valuable comparative insights to guide the reform of U.S. tort law. The Article advances an integrated governance model for U.S. tort law: reconstructing personality-based torts, repositioning tort law through conditional intermediary liability, and clarifying constitutionally grounded limits for speech-based claims. Drawing on Chinese and Singaporean legal approaches, the Article sets out a comparative reform framework that enables U.S. tort law to better address deepfake harms while safeguarding autonomy and dignity in AI-driven digital environments.

Long on The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion

Sean Norick Long (Georgetown U Law Center) has posted “The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion” on SSRN. Here is the abstract:

A US federal judge recently reasoned that a pricing algorithm learns “no different” from an attorney. This comparison is flawed in its immediate context, but it poses a greater danger: entrenching a mental model that blinds antitrust enforcement to the emergent threat of autonomous algorithmic collusion, where AI agents coordinate without human instruction. To prove collusion, courts cannot look directly into the human mind for intent, so they rely on an indirect proxy: evidence of observable communication between competitors. This paper argues the proxy is obsolete for AI agents, because their initial design and behavioral patterns are directly observable, offering a new basis to rule out independent action. In its place, I propose a two-part Mirror Test: an ex ante Design Test examines initial conditions for collusive bias, while an ex post Pattern Test detects coordinated pricing patterns inconsistent with independent action. This test can be implemented through agency guidance rather than new legislation, protecting the competitive process while giving companies predictable standards for compliance.

Fagan on When Fair Use Fails: Contingent Licensing for AI Training

Frank Fagan (South Texas College Law Houston) has posted “When Fair Use Fails: Contingent Licensing for AI Training” (forthcoming, Foundation for American Innovation, 2025) on SSRN. Here is the abstract:

As content producers increasingly gate material in response to AI-driven substitution, despite no changes to fair use law, there is growing risk that socially valuable inputs may disappear from the generative AI training ecosystem. This paper proposes a narrowly tailored, contingent licensing scheme to preserve access to high-value content when market failures prevent voluntary licensing. The scheme activates only when three conditions are met: (1) the content is demonstrably valuable for training; (2) the producer is economically marginal, that is, likely to restrict or withdraw access absent compensation; and (3) voluntary licensing has failed due to high transaction costs or bargaining asymmetries. While the proposal is focused on economically marginal creators at risk of exit, it allows for future extension to inframarginal producers if systemic gating emerges (defined here as a sustained, measurable reduction in access to critical content, whether by a majority of producers or by a small set whose gating materially degrades model performance). Drawing on the model of compulsory music licensing, the fallback mechanism operates only when necessary and always includes an opt-out, offering a light-touch intervention to sustain open access without undermining innovation or core publication incentives. In this way, the proposal aims to preserve innovation conditions when asymmetric withdrawal risks distorting competition and locking in advantages for firms with early licensing deals or deep proprietary libraries. Stronger measures that compel content creators to license their works, without an opt-out, are considered but tentatively rejected as inefficient and likely to distort functioning markets.

Murray on Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence

Peter Murray (Oak Brook College Law) has posted “Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence” (Transactions on Engineering and Computing Sciences, vol. 13, no. 2, 2025, DOI: 10.14738/tecs.1302.18401) on SSRN. Here is the abstract:

Crimebots are fueling the cybercrime pandemic by exploiting artificial intelligence (AI) to facilitate crimes such as fraud, misrepresentation, extortion, blackmail, identity theft, and security breaches. These AI-driven criminal activities pose a significant threat to individuals, businesses, online transactions, and even the integrity of the legal system. Crimebots enable unjust exonerations and wrongful convictions by fabricating evidence, creating deepfake alibis, and generating misleading crime reconstructions. In response, lawbots have emerged as a counterforce, designed to uphold justice. Legal professionals use lawbots to collect and analyze evidence, streamline legal processes, and enhance the administration of justice. To mitigate the risks posed by both crimebots and lawbots, many jurisdictions have established ethical guidelines promoting the responsible use of AI by lawyers and clients. Approximately 1.34% of lawyers have been involved in AI-related legal disputes, often revolving around issues such as fees, conflicts of interest, negligence, ethical violations, evidence tampering, and discrimination. Additional concerns include fraud, confidentiality breaches, harassment, and the misuse of AI for criminal purposes. For lawbots to succeed in the ongoing battle against crimebots, strict adherence to complex AI regulations is essential. Ensuring compliance with these guidelines minimizes malpractice risks, prevents professional sanctions, preserves client trust, and upholds the ethical and legal professional standards of excellence.

Kolt et al. on Legal Alignment for Safe and Ethical AI

Noam Kolt et al. have posted “Legal Alignment for Safe and Ethical AI” on SSRN. Here is the abstract:

Alignment of artificial intelligence (AI) encompasses the normative problem of specifying how AI systems should act and the technical problem of ensuring AI systems comply with those specifications. To date, AI alignment has generally overlooked an important source of knowledge and practice for grappling with these problems: law. In this paper, we aim to fill this gap by exploring how legal rules, principles, and methods can be leveraged to address problems of alignment and inform the design of AI systems that operate safely and ethically. This emerging field — legal alignment — focuses on three research directions: (1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems. These research directions present new conceptual, empirical, and institutional questions, which include examining the specific set of laws that particular AI systems should follow, creating evaluations to assess their legal compliance in real-world settings, and developing governance frameworks to support the implementation of legal alignment in practice. Tackling these questions requires expertise across law, computer science, and other disciplines, offering these communities the opportunity to collaborate in designing AI for the better.

Bednar et al. on Artificial Intelligence and Human Legal Reasoning

Nicholas Bednar (U Minn Law), David R. Cleveland (same), Allan Erbsen (same), and Daniel Schwarcz (same) have posted “Artificial Intelligence and Human Legal Reasoning” on SSRN. Here is the abstract:

Empirical evidence increasingly demonstrates that generative artificial intelligence has the capacity to improve the speed and quality of legal work, yet many lawyers, judges, and clients are reluctant to fully embrace AI. One important reason for hesitation is the concern that AI may undermine the human reasoning and judgment on which competent legal practice depends. This Article provides the first empirical evidence evaluating that concern by testing whether upper-level law students who rely on AI at an early stage of a project experience reduced comprehension and impaired legal reasoning at later stages when AI is not an available option.

To evaluate the possibility that AI degrades comprehension and reasoning, we conducted a randomized controlled trial involving approximately one hundred second- and third-year law students at the University of Minnesota Law School. Participants completed four sequential lawyering tasks: writing a memo synthesizing the law based on a packet of legal materials, answering closed-book multiple-choice questions that tested their comprehension of the materials, writing a memo applying the materials to a fact pattern, and revising their second memo. Participants were randomly assigned either to a control group, which could not use AI until the final revision task, or to an AI-exposed group, which used AI during both the initial synthesis task and the final revision task, but not during the intervening comprehension and application tasks.

The results provide a more complex picture of AI’s effects on legal reasoning than critics or enthusiasts often assume. As expected, participants who used AI to help craft synthesis memos produced substantially stronger work and completed that task more quickly. But contrary to our preregistered hypothesis, AI exposure at this initial stage did not diminish downstream comprehension of the underlying legal principles. To the contrary, participants who used AI on the synthesis task outperformed the control group on the later application task even when neither group had access to AI. Yet when all participants used AI to revise their reasoning memos, participants who started with weaker memos improved while participants who started with stronger memos regressed. These findings suggest that AI does not inevitably erode or promote independent legal reasoning, but that its effects depend on when and how law students and junior lawyers use AI. The Article builds on this insight by suggesting best practices for AI use and avenues for further empirical research.

Haynes on Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance

Maria De Lourdes Haynes (American U Dubai) has posted “Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance” on SSRN. Here is the abstract:

The European Artificial Intelligence Act (AI Act) constitutes a landmark regulatory framework governing artificial intelligence technologies, with core principles grounded in transparency, accountability, and risk mitigation. While designed to foster innovation and safeguard fundamental rights, the Act poses considerable implementation challenges. Organisations must navigate complex compliance obligations imposed on various actors across the value chain. These requirements entail rigorous reporting, auditing, monitoring, and governance mechanisms, placing increased demands on corporate governance structures.

A defining feature of the AI Act is its extraterritorial scope, mirroring the reach of the General Data Protection Regulation (GDPR). The AI Act applies not only to entities established within the European Union but also to non-EU businesses operating or placing AI products on the EU market. Its extensive provisions, covering authorised representatives and specific duties for actors across the AI value chain, are expected to incentivise non-EU jurisdictions and corporations to align their AI development and deployment practices with EU standards. Non-compliance may lead to hefty fines and exposure to reputational damage, along with an erosion of consumer trust.

The AI Act is poised to emerge as a global benchmark for AI regulation. Board-level governance bodies must reconcile innovation and business objectives with regulatory imperatives, address liability risks, and embed AI literacy into strategic management and decision-making. As the regulatory framework evolves, it reinforces the necessity of integrating multidisciplinary legal, ethical, and strategic considerations into managerial and corporate governance frameworks to navigate this dynamic environment effectively and mitigate emerging risks.