Peng et al. on Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore

Huijuan Peng (Singapore Management U Yong Pung How Law) and Pey-woan Lee (Singapore Management U Yong Pung How Law) have posted “Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore” (Journal of Tort Law, 2025, DOI: 10.1515/jtl-2025-0028) on SSRN. Here is the abstract:

This Article explores how U.S. tort law can respond more effectively to the distinct harms posed by deepfakes, including reputational injury, identity appropriation, and emotional distress. Traditional tort doctrines, such as defamation, the right of publicity, and intentional infliction of emotional distress (IIED), remain fragmented and ill-suited to the speed, scale, and anonymity of deepfake dissemination. Using a comparative functionalist approach, the Article analyzes how China and Singapore respond to deepfake harms through structurally divergent but functionally instructive frameworks. China’s model combines codified personality rights with intermediary obligations under a civil law regime, while Singapore adopts a hybrid approach that integrates common law torts with targeted statutory and administrative interventions. Although neither model is directly replicable in the United States, both offer valuable comparative insights to guide the reform of U.S. tort law. The Article advances an integrated governance model for U.S. tort law: reconstructing personality-based torts, repositioning tort law through conditional intermediary liability, and clarifying constitutionally grounded limits for speech-based claims. Drawing on Chinese and Singaporean legal approaches, the Article sets out a comparative reform framework that enables U.S. tort law to better address deepfake harms while safeguarding autonomy and dignity in AI-driven digital environments.

Long on The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion

Sean Norick Long (Georgetown U Law Center) has posted “The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion” on SSRN. Here is the abstract:

A US federal judge recently reasoned that a pricing algorithm learns “no different” from an attorney. This comparison is flawed in its immediate context, but it poses a greater danger: entrenching a mental model that blinds antitrust enforcement to the emergent threat of autonomous algorithmic collusion, where AI agents coordinate without human instruction. To prove collusion, courts cannot look directly into the human mind for intent, so they rely on an indirect proxy: evidence of observable communication between competitors. This paper argues the proxy is obsolete for AI agents, because their initial design and behavioral patterns are directly observable, offering a new basis to rule out independent action. In its place, I propose a two-part Mirror Test: an ex ante Design Test examines initial conditions for collusive bias, while an ex post Pattern Test detects coordinated pricing patterns inconsistent with independent action. This test can be implemented through agency guidance rather than new legislation, protecting the competitive process while giving companies predictable standards for compliance.
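The ex post Pattern Test is stated at the level of principle, but its core idea, coordinated pricing inconsistent with independent action, lends itself to a small illustration. The sketch below shows one way such a test could work statistically; it is not the paper's proposed implementation, and the function name, the summary statistic, and the permutation null are all assumptions made for illustration only.

```python
import numpy as np

def pattern_test(prices, n_permutations=10_000, alpha=0.01, seed=0):
    """Toy 'Pattern Test' (illustrative only, not the paper's method):
    flag cross-firm co-movement in price changes that a permutation
    null of independent pricing would rarely produce.

    prices: array of shape (n_firms, n_periods); assumes each firm's
    prices vary over the sample.
    """
    rng = np.random.default_rng(seed)
    changes = np.diff(prices, axis=1)        # period-over-period price moves
    n_firms = changes.shape[0]

    def mean_abs_corr(x):
        c = np.corrcoef(x)                   # pairwise correlation of moves
        return np.mean(np.abs(c[~np.eye(n_firms, dtype=bool)]))

    observed = mean_abs_corr(changes)

    # Null hypothesis: firms price independently, so shuffling each
    # firm's moves in time destroys any cross-firm coordination while
    # preserving each firm's marginal pricing behavior.
    exceed = 0
    for _ in range(n_permutations):
        shuffled = np.array([rng.permutation(row) for row in changes])
        if mean_abs_corr(shuffled) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (n_permutations + 1)
    return p_value < alpha, p_value
```

A caveat the proposal itself anticipates: firms reacting independently to a common cost or demand shock will also co-move, so a pattern test alone cannot separate coordination from parallel reaction, which is presumably why it is paired with the ex ante Design Test on initial conditions.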

Fagan on When Fair Use Fails: Contingent Licensing for AI Training

Frank Fagan (South Texas College Law Houston) has posted “When Fair Use Fails: Contingent Licensing for AI Training” (forthcoming, Foundation for American Innovation, 2025) on SSRN. Here is the abstract:

As content producers increasingly gate material in response to AI-driven substitution (despite no changes to fair use law), there is growing risk that socially valuable inputs may disappear from the generative AI training ecosystem. This paper proposes a narrowly tailored, contingent licensing scheme to preserve access to high-value content when market failures prevent voluntary licensing. The scheme activates only when three conditions are met: (1) the content is demonstrably valuable for training; (2) the producer is economically marginal, that is, likely to restrict or withdraw access absent compensation; and (3) voluntary licensing has failed due to high transaction costs or bargaining asymmetries. While the proposal is focused on economically marginal creators at risk of exit, it allows for future extension to inframarginal producers if systemic gating emerges (defined here as a sustained, measurable reduction in access to critical content, whether by a majority of producers or by a small set whose gating materially degrades model performance). Drawing on the model of compulsory music licensing, the fallback mechanism operates only when necessary and always includes an opt-out, offering a light-touch intervention to sustain open access without undermining innovation or core publication incentives. In this way, the proposal aims to preserve innovation conditions when asymmetric withdrawal risks distorting competition and locking in advantages for firms with early licensing deals or deep proprietary libraries. Stronger measures that compel content creators to license their works without an opt-out are considered but tentatively rejected as inefficient and likely to distort functioning markets.

Murray on Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence

Peter Murray (Oak Brook College Law) has posted “Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence” (Transactions on Engineering and Computing Sciences, vol. 13, issue 2, 2025, DOI: 10.14738/tecs.1302.18401) on SSRN. Here is the abstract:

Crimebots are fueling the cybercrime pandemic by exploiting artificial intelligence (AI) to facilitate crimes such as fraud, misrepresentation, extortion, blackmail, identity theft, and security breaches. These AI-driven criminal activities pose a significant threat to individuals, businesses, online transactions, and even the integrity of the legal system. Crimebots enable unjust exonerations and wrongful convictions by fabricating evidence, creating deepfake alibis, and generating misleading crime reconstructions. In response, lawbots have emerged as a counterforce, designed to uphold justice. Legal professionals use lawbots to collect and analyze evidence, streamline legal processes, and enhance the administration of justice. To mitigate the risks posed by both crimebots and lawbots, many jurisdictions have established ethical guidelines promoting the responsible use of AI by lawyers and clients. Approximately 1.34% of lawyers have been involved in AI-related legal disputes, often revolving around issues such as fees, conflicts of interest, negligence, ethical violations, evidence tampering, and discrimination. Additional concerns include fraud, confidentiality breaches, harassment, and the misuse of AI for criminal purposes. For lawbots to succeed in the ongoing battle against crimebots, strict adherence to complex AI regulations is essential. Ensuring compliance with these guidelines minimizes malpractice risks, prevents professional sanctions, preserves client trust, and upholds the ethical and legal professional standards of excellence.

Kolt et al. on Legal Alignment for Safe and Ethical AI

Noam Kolt et al. have posted “Legal Alignment for Safe and Ethical AI” on SSRN. Here is the abstract:

Alignment of artificial intelligence (AI) encompasses the normative problem of specifying how AI systems should act and the technical problem of ensuring AI systems comply with those specifications. To date, AI alignment has generally overlooked an important source of knowledge and practice for grappling with these problems: law. In this paper, we aim to fill this gap by exploring how legal rules, principles, and methods can be leveraged to address problems of alignment and inform the design of AI systems that operate safely and ethically. This emerging field — legal alignment — focuses on three research directions: (1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems. These research directions present new conceptual, empirical, and institutional questions, which include examining the specific set of laws that particular AI systems should follow, creating evaluations to assess their legal compliance in real-world settings, and developing governance frameworks to support the implementation of legal alignment in practice. Tackling these questions requires expertise across law, computer science, and other disciplines, offering these communities the opportunity to collaborate in designing AI for the better.

Bednar et al. on Artificial Intelligence and Human Legal Reasoning

Nicholas Bednar (U Minn Law), David R. Cleveland (same), Allan Erbsen (same), and Daniel Schwarcz (same) have posted “Artificial Intelligence and Human Legal Reasoning” on SSRN. Here is the abstract:

Empirical evidence increasingly demonstrates that generative artificial intelligence has the capacity to improve the speed and quality of legal work, yet many lawyers, judges, and clients are reluctant to fully embrace AI. One important reason for hesitation is the concern that AI may undermine the human reasoning and judgment on which competent legal practice depends. This Article provides the first empirical evidence evaluating that concern by testing whether upper-level law students who rely on AI at an early stage of a project experience reduced comprehension and impaired legal reasoning at later stages when AI is not available.

To evaluate the possibility that AI degrades comprehension and reasoning, we conducted a randomized controlled trial involving approximately one hundred second- and third-year law students at the University of Minnesota Law School. Participants completed four sequential lawyering tasks: writing a memo synthesizing the law based on a packet of legal materials, answering closed-book multiple choice questions that tested their comprehension of the materials, writing a memo applying the materials to a fact pattern, and revising their second memo. Participants were randomly assigned either to a control group, which could not use AI until the final revision task, or to an AI-exposed group, which used AI during both the initial synthesis task and the final revision task, but not during the intervening comprehension and application tasks.

The results provide a more complex picture of AI’s effects on legal reasoning than critics or enthusiasts often assume. As expected, participants who used AI to help craft synthesis memos produced substantially stronger work and completed that task more quickly. But contrary to our preregistered hypothesis, AI exposure at this initial stage did not diminish downstream comprehension of the underlying legal principles. To the contrary, participants who used AI on the synthesis task outperformed the control group on the later application task even when neither group had access to AI. Yet when all participants used AI to revise their reasoning memos, participants who started with weaker memos improved while participants who started with stronger memos regressed. These findings suggest that AI does not inevitably erode or promote independent legal reasoning, but that its effects depend on when and how law students and junior lawyers use AI. The Article builds on this insight by suggesting best practices for AI use and avenues for further empirical research.
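To make the design concrete, here is a purely hypothetical sketch of the randomization and a downstream between-group comparison; the score distributions are placeholders invented solely so the code runs and bear no relation to the study's actual data or results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Randomly assign ~100 participants to the two arms the abstract describes.
n = 100
arms = np.array(["control"] * (n // 2) + ["ai_exposed"] * (n - n // 2))
assignment = rng.permutation(arms)

# Placeholder scores on the no-AI application task (invented numbers,
# NOT the study's data).
scores = np.where(assignment == "ai_exposed",
                  rng.normal(75.0, 10.0, n),
                  rng.normal(72.0, 10.0, n))

ai = scores[assignment == "ai_exposed"]
ctrl = scores[assignment == "control"]
t_stat, p_val = stats.ttest_ind(ai, ctrl, equal_var=False)  # Welch's t-test
print(f"mean difference = {ai.mean() - ctrl.mean():.2f}, p = {p_val:.3f}")
```

Random assignment is what licenses a causal reading of any group difference on the downstream tasks; the study's actual preregistered analysis is of course richer than this two-sample comparison.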

Haynes on Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance

Maria De Lourdes Haynes (American U Dubai) has posted “Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance” on SSRN. Here is the abstract:

The European Artificial Intelligence Act (AI Act) constitutes a landmark regulatory framework governing artificial intelligence technologies, with core principles grounded in transparency, accountability, and risk mitigation. While designed to foster innovation and safeguard fundamental rights, the Act poses considerable implementation challenges. Organisations must navigate complex compliance obligations imposed on various actors across the value chain. These requirements entail rigorous reporting, auditing, monitoring, and governance mechanisms, placing increased demands on corporate governance structures.

A defining feature of the AI Act is its extraterritorial scope, mirroring the reach of the General Data Protection Regulation (GDPR). The AI Act applies not only to entities established within the European Union but also to non-EU businesses operating or placing AI products on the EU market. Its extensive provisions, covering authorised representatives and specific duties for actors across the AI value chain, are expected to incentivise non-EU jurisdictions and corporations to align their AI development and deployment practices with EU standards. Non-compliance may lead to hefty fines and exposure to reputational damage along with an erosion of consumer trust.

The AI Act is poised to emerge as a global benchmark for AI regulation. Board-level governance bodies must reconcile innovation and business objectives with regulatory imperatives, address liability risks, and embed AI literacy into strategic management and decision-making. As the regulatory framework evolves, it reinforces the necessity of integrating multidisciplinary legal, ethical, and strategic considerations into managerial and corporate governance frameworks to navigate this dynamic environment effectively and mitigate emerging risks.

Nugent on Generative Cybersecurity

Nicholas Nugent (U Tennessee) has posted “Generative Cybersecurity” on SSRN. Here is the abstract:

Cybersecurity is experiencing a sea change, and AI is to blame. Bots, which now outnumber human users, prowl networks day and night, using deep learning to discover vulnerabilities and threatening to make all software essentially transparent. The number of skilled human hackers alive in the world no longer poses a meaningful constraint on the amount of damage that can be done, as even the least experienced “script kiddie” can outsource his dark arts to hundreds of self-executing AI agents, each independently working to worm its way into a target’s system. And the age of real-time deepfakes is now upon us, as scammers personally converse with the victims of their social engineering schemes while powerful hardware dynamically swaps their faces and voices with those of impersonated relatives or coworkers.

At the same time, traditional legal doctrines are showing their age. Firms have few legal options to stop bots from continually probing their systems for vulnerabilities, as courts long ago hollowed out the tort of cyber-trespass. The federal Computer Fraud and Abuse Act punishes hackers who use AI to break into protected computers just as surely as it punishes traditional hacking. But the 1986 statute’s language is poorly suited to situations in which adversaries trick lawful AI systems into voluntarily spilling their secrets without ever crossing the access barrier—the problem of “adversarial AI.” And wire fraud, theft, and right-of-publicity laws map awkwardly, if they map at all, onto certain elements of deepfake scams.

Existing liability frameworks compound the problem, making it difficult to hold AI companies accountable when bad actors use their tools to harm others. Negligence doctrines typically insulate vendors from secondary liability where products admit of substantial lawful uses or where intervening criminality breaks the chain of proximate causation. And firms that deploy defensive AI systems to fight fire with fire may likewise find themselves without a backstop if those systems fail, or unexpectedly wreak havoc on others, given tort law’s reluctance to apply product liability rules to software.

Despite a growing literature on legal issues related to artificial intelligence and a separate body of cybersecurity scholarship, the legal academy has not yet treated AI-driven cybersecurity as a distinct, system-level field of inquiry. Where scholars or policymakers acknowledge that a particular AI use case challenges a traditional rule, they tend to offer ad hoc fixes (or none at all). As a result, cybersecurity law risks falling behind in a rapidly evolving threat environment, leaving firms and individuals without adequate remedies.

This Article tackles the problem head-on, offering the first system-level treatment of the “AI problem” facing cybersecurity and, by extension, cybersecurity law. It provides a comprehensive taxonomy of the ways AI intersects with cybersecurity. That taxonomy organizes the field around three primary roles: using AI as a tool for malicious cyber-activity (“AI as Threat”), attacking AI systems (“AI as Target”), and leveraging AI’s defensive capabilities (“AI as Shield”). It builds out detailed subcategories grounded in specific technologies, operations, and injuries, and draws on the computer science literature and real-world incidents to show that each distinct threat is real rather than theoretical.

Not limited to technical description, the Article systematically identifies the existing laws and doctrines that apply to each distinct use case and exposes the structural gaps AI has created. It then advances an integrated reform agenda designed to realign cybersecurity law to a landscape defined by autonomous, learning systems. The Article proposes five core shifts: rethinking the doctrine of electronic trespass, decentering intrusion as a necessary element in hacking offenses, protecting individual likeness per se, establishing artificial duties of care, and recalibrating negligence doctrine for agentic systems. Taken together, these reforms would move cybersecurity law beyond its human- and intrusion-era origins and toward a design suited to the new reality of machine-mediated threats and security.

Morley et al. on Closing the AI Benefits Gap: Systems Design for Population Health Equity

Jessica Morley (Yale U Digital Ethics Center) and Luciano Floridi (Yale U Digital Ethics Center) have posted “Closing the AI Benefits Gap: Systems Design for Population Health Equity” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is currently failing to live up to its potential. Its champions promise that it will make healthcare more effective, efficient, and equitable, thereby improving population health. However, these benefits are not consistently materialising. Examples of AI working effectively at scale remain limited, and even when implementation succeeds, group or population-level improvements in outcomes are often not discernible. Drawing on the 2024 Global Health in the Age of AI symposium, we argue that this benefits gap stems from two fundamental problems. First, AI is being built on inadequate foundations. Second, AI has been tasked with optimising individual health, a function incapable of improving population outcomes. The benefits gap cannot, therefore, be closed through ad hoc policy interventions designed to address specific implementation barriers. Instead, AI must first be assigned a new population-level function, then robust foundations must be built through systems design to support it. Crucially, both the function and the foundations must be co-created by those most affected by health inequities, working together with frontline health workers, public health practitioners, AI developers, and governance bodies. Only by taking this approach will it be possible to realise AI’s population health potential and avoid a disillusionment-driven healthcare-specific AI winter.

Lim et al. on Introduction to Inclusive Innovation in the Age of AI and Big Data

Daryl Lim (Pennsylvania State U) and Peter K. Yu (Texas A&M U Law) have posted “Introduction to Inclusive Innovation in the Age of AI and Big Data” (INCLUSIVE INNOVATION IN THE AGE OF AI AND BIG DATA, Daryl Lim and Peter K. Yu, eds., Oxford University Press, 2026, Forthcoming) on SSRN. Here is the abstract:

As artificial intelligence and big data analytics reshape economies and societies, the promise of innovation is increasingly shadowed by concerns over inclusion, equity, and global justice. Inclusive Innovation in the Age of AI and Big Data brings together established and emerging voices from across the world to critically examine issues lying at the intersection of innovation, intellectual property, and inequality in the age of AI and big data.

Featuring empirical studies, legal analyses, policy critiques, interdisciplinary perspectives, and global insights, this accessible, interdisciplinary, and open-access volume underscores the tremendous impact gender, race, and other socioeconomic factors have on innovation and intellectual property ecosystems. It also explores structural barriers in these ecosystems, diversity initiatives in the patent area, metrics for measuring inclusivity and diversity in innovation, changes brought about by AI and big data, and the evolution of the global innovation and intellectual property systems.

This introductory chapter begins by identifying three core questions in the emerging debate on inclusive innovation: Innovation by whom? Innovation for whom? And innovation to what end? The chapter then discusses the interrelationship between inclusive innovation and intellectual property, new equity concerns raised by AI-driven innovation, and the multiple pathways to promote inclusive innovation. The chapter continues by outlining the structure of the volume, which is organized into five thematic parts: (1) innovation gaps and demographics; (2) disparities in the patent system; (3) initiatives to promote inclusive innovation; (4) AI technology and equitable development; and (5) AI-driven innovation and global challenges.