Williams & Westlake on A Taste of Armageddon: Legal Considerations for Lethal Autonomous Weapons Systems

Paul R. Williams (Public International Law & Policy Group) and Ryan Jane Westlake (Independent) have posted “A Taste of Armageddon: Legal Considerations for Lethal Autonomous Weapons Systems” (Case Western Reserve Journal of International Law, vol. 57, p. 187, 2025) on SSRN. Here is the abstract:

Lethal Autonomous Weapons Systems (LAWS) represent a profound shift in the nature of warfare, where machines, not humans, make life-or-death decisions on the battlefield. While these weapons offer strategic advantages, such as reducing human casualties and increasing operational efficiency, they also introduce significant legal, ethical, and accountability challenges. This Article explores the complexities surrounding the proliferation and use of LAWS, arguing that a total ban is unlikely due to the widespread accessibility and benefits these technologies offer to those who deploy them. Rather, this Article proposes the application of strict liability—traditionally a tort law concept—to the developers of LAWS as a means of promoting responsible development and ensuring accountability in the event a LAWS commits a war crime. By adapting this legal doctrine to the international criminal law context, the Article provides a pathway for holding those who design and deploy LAWS accountable for war crimes, thus bridging the gap between rapid technological advancement and the current limitations of international humanitarian law. The Article underscores the necessity of creative legal thinking to address the urgent and evolving challenges posed by autonomously lethal warfare technologies.

Lévesque on Hallucinating Deregulation

Maroussia Lévesque (Harvard Law) has posted “Hallucinating Deregulation” on SSRN. Here is the abstract:

Deregulation appears to be the dominant paradigm when it comes to AI policy. The recent U.S. AI Action Plan deems the technology “far too important to smother in bureaucracy at this early stage”. Even the EU, known for robust digital regulation, is losing steam on implementing its groundbreaking AI Act. With deregulatory narratives on the rise and comprehensive AI legislation on the back burner, one could assume that a laissez-faire approach effectively prevails. Yet nothing could be further from the truth. Underneath the surface, regulators engage in significant AI regulation. Chief among them are national security policymakers reaching deep into the AI stack, with heavy-handed intervention shaping access to the fundamental building blocks of AI systems.

While we typically confine AI regulation to endpoint applications served to users – OpenAI’s ChatGPT, or Anthropic’s Claude – regulators actually intervene at each node of the AI supply chain to restrict models, their training data, and the underlying infrastructure of data centers and computer hardware. The concept of a technology stack disaggregates these elements operating in the background of user-facing applications, assessing how each is a target of regulation. This more granular view corrects a common misconception about a deregulatory zeitgeist.

The goal is both descriptive and prescriptive. As a diagnostic tool, a stack approach brings clarity, describing precisely what is being regulated – and what isn’t. Unpacking AI into its hardware, compute, model, and application components, the stack anchors the analysis in the materiality of AI’s multiple components.

As a prescriptive tool, a full-stack approach considers different regulatory options along the supply chain. Building on existing practices that already intervene at several layers of the AI stack in pursuit of a single policy objective, this Article invites regulators to systematically consider all aspects of AI systems before settling on a regulatory strategy.
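To make the framing concrete, the stack disaggregation the abstract describes can be pictured as a simple mapping from layers to regulatory levers. The sketch below is purely illustrative – the layer names follow the abstract’s hardware/compute/model/application breakdown, but the example interventions are assumptions of this post, not the Article’s taxonomy:

    from dataclasses import dataclass

    @dataclass
    class StackLayer:
        name: str
        examples: list[str]       # what lives at this layer
        interventions: list[str]  # regulatory levers that target it

    # Hypothetical rendering of the hardware/compute/model/application stack.
    AI_STACK = [
        StackLayer("hardware", ["advanced chips", "data centers"],
                   ["export controls", "siting and energy permitting"]),
        StackLayer("compute", ["cloud training clusters"],
                   ["know-your-customer rules", "compute reporting thresholds"]),
        StackLayer("model", ["model weights", "training data"],
                   ["weight-transfer restrictions", "data provenance rules"]),
        StackLayer("application", ["chatbots", "user-facing APIs"],
                   ["consumer protection", "disclosure duties"]),
    ]

    def levers_for(layer_name: str) -> list[str]:
        """Return the intervention points recorded for one layer of the stack."""
        return next(l.interventions for l in AI_STACK if l.name == layer_name)

Read diagnostically, a table like this makes the abstract’s point visible: light-touch regulation at the application layer can coexist with heavy-handed intervention at the hardware and compute layers, which is exactly the misconception the stack view corrects.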

Graves on Upload Complete: An Introduction to Creator Economy Law

Franklin Graves (LinkedIn Corporation) has posted “Upload Complete: An Introduction to Creator Economy Law” (Belmont Law Journal, vol. 1, 2024-2025) on SSRN. Here is the abstract:

Individuals have been creating and sharing creative works online since the dawn of the World Wide Web. However, only in the last decade and a half have the platform monetization, business, and audience factors reached the necessary levels to provide the foundation of a fully functioning, sometimes self-sustaining, creator economy. To understand the creator economy, it is important to contextualize it with the evolution of the web, the rise of Web 2.0, the ongoing development of web3, and the shift from a consumer economy to a creator economy. The democratization of the web and technological advancements have made it possible for anyone with an internet connection to become a creator.

The creator economy has also introduced a new set of legal challenges and opportunities, including issues related to intellectual property, contract law, privacy, and content regulation. Courts and policymakers are still grappling with how to deal with these issues in a way that recognizes, respects, and protects the rights of creators, brands, platforms, and consumers.

Divided into three parts, this article aims to accelerate the conversation around an emerging area the author proposes to label as “Creator Economy Law” while simultaneously offering a survey of how, for nearly the past two decades, governments, regulators, and courts have been shaping the well-established creator economy.

Part I introduces the creator economy, defining “creators” and providing a brief history of creativity on the internet across the concepts of Web 2.0 and web3. The article then discusses the explosion of the creator economy in recent years, driven by the shift from a producer-consumer economy to an attention-driven creator economy. Part II examines a selection of laws that affect creators, from copyright and trademark to privacy and advertising. The article also discusses the evolving relationship between brands and influencers, the rebirth of remix culture, the liability of platforms for content posted by creators and brands, and the self-regulation and moderation of content by platforms. Part III discusses the future opportunities and challenges facing creators in the digital age, including the potential impact of decentralized platforms and communities, the proliferation of generative artificial intelligence technologies, and future law and policy considerations.

Cohen on Public Utility for What?: Governing AI Datastructures

Julie E. Cohen (Georgetown U Law Center) has posted “Public Utility for What?: Governing AI Datastructures” (Yale Journal of Law and Technology, vol. 27, 2025) on SSRN. Here is the abstract:

Both in the U.S. and in Europe, initiatives for AI governance have focused principally on identifying and mitigating the risks created by AI models and their downstream uses rather than on those created by the datasets on which the models are trained. As this paper will explain, some of the most intractable dysfunctions of generative AI systems involve datasets. In particular, the very large datasets amassed by dominant providers of generative AI and related services are rapidly taking on infrastructural characteristics and importance. Effective AI governance therefore requires an infrastructural turn in thinking about data. 

First, the paper explains the significance of the infrastructure lens and sketches some of the distinctive implications of data infrastructures, in particular, for governance of networked digital processes and the social and economic activities that they facilitate. Next, it explores two interrelated problems manifesting within generative AI systems – simulation and sociopathy – that illustrate the extent to which the project of AI governance is, unavoidably, a data governance project. In brief, generative AI models trained on content from the public internet are also trained on data infrastructures that have been developed in particular ways for particular purposes and that encourage the production and spread of particular kinds of content. Last, the paper considers whether the concept of public utility, now the subject of growing interest among legal scholars who study various regulated industries, might supply a possible foundation for tackling the data governance problems associated with generative AI systems. The public utility model, however, addresses only some of the considerations that the infrastructure lens highlights. It is highly attuned to questions about access to infrastructures and their outputs but relatively insensitive to questions about infrastructure configuration and input sourcing. The problems of simulation and sociopathy belong in the latter category.

Chiodo & Müller on The Problem of Algorithmic Collisions: Mitigating Unforeseen Risks in a Connected World

Maurice Chiodo (U Cambridge) and Dennis Müller (U Cologne) have posted “The Problem of Algorithmic Collisions: Mitigating Unforeseen Risks in a Connected World” on SSRN. Here is the abstract:

The increasing deployment of Artificial Intelligence (AI) and other autonomous algorithmic systems presents the world with new systemic risks. While focus often lies on the function of individual algorithms, a critical and underestimated danger arises from their interactions, particularly when algorithmic systems operate without awareness of each other, or when those deploying them are unaware of the full algorithmic ecosystem in which deployment occurs. These interactions can lead to unforeseen, rapidly escalating negative outcomes – from market crashes and energy supply disruptions to potential physical accidents and erosion of public trust – often exceeding the human capacity for effective monitoring and the legal capacity for proper intervention. Current governance frameworks are inadequate as they lack visibility into this complex ecosystem of interactions. This paper outlines the nature of this challenge and proposes some initial policy suggestions centered on increasing transparency and accountability through phased system registration, a licensing framework for deployment, and enhanced monitoring capabilities.

Teubner & Ivey on Social AI and Human Connections: Benefits, Risks and Social Impact

Jonathan Teubner (Harvard U Institute Quantitative Social Science) and Ronald Ivey (Harvard U Institute Quantitative Social Science) have posted “Social AI and Human Connections: Benefits, Risks and Social Impact” on SSRN. Here is the abstract:

Since the mid-2000s, the widespread adoption of social media has coincided with a significant decline in face-to-face socialization, raising concerns about its broader impact on mental health, community well-being, and democratic governance. This paper argues that the failure of institutions to effectively respond to the social consequences of digital technology provides an important lesson as we confront a new wave of transformation brought on by artificial intelligence.

To understand this challenge, we identify and analyze two intersecting trends: (1) the continuing erosion of in-person social connectedness in the United States, and (2) the rapid advancement and adoption of conversational AI systems – particularly those designed to simulate social and emotional engagement. These systems, which we refer to as “Social AI” (as defined by Shevlin, 2024), are increasingly positioned to fulfill roles traditionally occupied by human relationships.

Drawing on a review of recent literature, expert interviews, a Salon with leading technologists and scholars of human flourishing, and webinars with Social AI researchers, we explore the question: How might we design AI systems for social connectedness and human flourishing?

The paper presents a systemic framework for understanding Social AI and its effects on human social capabilities, and it outlines stakeholder-driven recommendations across five domains: (1) AI systems, (2) AI businesses, (3) AI markets, (4) political systems, and (5) cultural and social systems. We propose this as a foundation for future research, impact evaluation, and policy analysis aimed at ensuring AI supports, rather than undermines, human flourishing.

Taylor on Consciousness as the Foundation of Legal Agency in AGI

Richard D. Taylor (Pennsylvania State U) has posted “Consciousness as the Foundation of Legal Agency in AGI” on SSRN. Here is the abstract:

This paper argues that it is time to address the legal status of emerging Artificial General Intelligence (AGI) that plausibly replicates human behavior.  It concludes that a new juridical entity, “Legally Conscious Persons”, should be created for that category.  This status is based on the model of “legal personhood”, which stipulates specific rights and duties for corporate entities, but has a potentially broader scope.

Crimes and intentional torts require “mens rea”, which implies a conscious, rational agent having an intention to take (or refrain from taking) an action.  Traditionally, living natural persons are thought to be the only category which fits this requirement.  To suggest attributing consciousness to a non-organic artifact (e.g., AGI) creates a legal conundrum: If AGI is not alive, it cannot be conscious.  If AGI is conscious, what does that say about the nature of life?  Would a conscious AGI be a person, a thing, or something else?

The paper reviews the origins of life and the coevolution through stages of the human brain and consciousness.  It then turns to the contested issue of AGI consciousness.  It reviews the scholarly and expert debate over the nature and source of consciousness and concludes that while there is no consensus on its nature, there are many theories which can provide indicators of its presence.  It reviews these theories through the lenses of ontology, epistemology and axiology.  It then introduces the proposed technical and behavioral indicators of the likelihood of consciousness.

Human consciousness is not well understood.  Neurological indicators correlate with but are not the same as “qualia”, a person’s subjective experience.  Likewise with AGI, “consciousness”, if it exists, can only be predicted by technological and behavioral indicators.  For purposes of recognizing an AGI as “legally conscious”, it must show indicators demonstrating a high likelihood that the criteria for that status are met.  A process for reaching these criteria based on existing models is offered.

The time to begin to establish these indicators is now.  The paper offers a series of specific recommendations for government, enterprise and civil society to meet this challenge.  Failure to do so, it concludes, would someday be seen as the equivalent of malpractice.

Hirsch et al. on Responsible AI Management: Evolving Practice, Growing Value

Dennis D. Hirsch (Ohio State U Michael E. Moritz College of Law) et al. have posted “Responsible AI Management: Evolving Practice, Growing Value” on SSRN. Here is the abstract:

One hears often today of the need to choose between AI ethics and AI opportunity. Such statements are premised on a trade-off between responsible AI and competitive AI. But does such a trade-off truly exist? This article reports the results of a survey of business managers knowledgeable about their firm’s responsible AI management (RAIM) practices. The study – conducted by Ohio State researchers in partnership with the IAPP – explored three questions: (1) What are the components of corporate RAIM programs? (2) Who in the organization is responsible for RAIM? and (3) Does RAIM create business value for organizations and, if so, what types of value does it generate?

The researchers found that: (1) Business responsible AI management programs consist of 14 main practices that range from tracking legal and policy developments, to adopting AI ethics principles, to appointing a responsible AI committee, and beyond; (2) Companies were most likely to assign the RAIM function to people with expertise in privacy, although they also relied significantly on those with risk and data analytics expertise. This could mean that effective AI governance requires a combination of skills; and (3) Responsible AI management creates significant business value by improving product quality, building trust and reputation, preparing the organization for future regulation, and enhancing employee relations. This suggests that there may be a win-win, rather than a zero-sum, relationship between responsible and competitive AI. If upheld in future work, this finding could move the conversation about RAIM beyond the realm of safety and values and into the sphere of business AI strategy.

Omeonga wa Kayembe on Clinical Affordance as a Framework for Barriers, Transitions, and Policy: A Use Case on AI and NLP Integration in Psychiatry

Naomi Omeonga wa Kayembe (U Nantes Law and Political Science) has posted “Clinical Affordance as a Framework for Barriers, Transitions, and Policy: A Use Case on AI and NLP Integration in Psychiatry” on SSRN. Here is the abstract:

The integration of Artificial Intelligence (AI) and Natural Language Processing (NLP) in psychiatry has significantly progressed, evolving from early feasibility studies to sophisticated transformer-based models capable of automating clinical assessments, symptom detection, and treatment monitoring. While these technologies hold promise for enhancing psychiatric care, their adoption remains limited due to translational barriers related to the accessibility and acceptability of digital health solutions.

This narrative review synthesizes foundational NLP contributions from the 2010s alongside recent advancements in AI-driven psychiatry, emphasizing both technical scalability and regulatory considerations. To systematize the variables influencing AI adoption in care practice, we introduce Clinical Affordance, a conceptual framework that evaluates the integration potential of AI tools through two interdependent dimensions: accessibility (practical and organizational fit) and acceptability (normative expectations).

Drawing from a selective literature review, we identify the main translational constraints affecting NLP deployment in psychiatry. Ranging from EHR system fragmentation to the burden of explainability mandates and uneven usage patterns, these challenges are analyzed through the lens of Clinical Affordance, with emphasis on their implications for clinical implementation. We further argue that the transition from clinical decision support systems (AI-CDSS) to autonomous medical treatment (AI-Treatment) is central to understanding risk allocation and liability in AI-assisted psychiatry. Finally, we assess how the COVID-19 pandemic impacted public trust in AI-driven mental health solutions, particularly in relation to surveillance and ethical governance.

The article concludes with policy recommendations aimed at reinforcing Clinical Affordance through outcome-based regulation, differentiated accountability, and data governance. By bridging technical innovation with contextual viability, the Clinical Affordance framework supports the sustainable integration of AI and NLP into psychiatric practice and offers a generalizable model for evaluating other digital health technologies.

Kim et al. on AI Pricing Behavior Under Regulatory Variation

Jeong Yeol Kim (KDI Public Policy and Management) et al. have posted “AI Pricing Behavior Under Regulatory Variation” on SSRN. Here is the abstract:

This study experimentally examines how generative AI agents adjust pricing under four regulatory environments: no regulation; fixed detection (constant penalty probability above a threshold); linear detection (penalty probability increases with price); and periodic detection (monitoring at fixed intervals). Without regulation, AI agents choose near-monopoly prices. All regulations reduce prices, but do not induce competitive outcomes. Fixed and linear detection produce lower and more stable supra-competitive prices, while periodic detection leads to strategic evasion and higher prices. These findings suggest that AI agents adapt to enforcement structures, maintaining supra-competitive pricing even under regimes designed to deter monopolistic outcomes.
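The four regulatory environments map onto simple penalty-probability schedules. Here is a minimal sketch of how they might be parameterized – the thresholds, probabilities, and functional forms below are illustrative assumptions, not values taken from the study:

    import random

    # Illustrative benchmarks (assumptions, not from the paper).
    COMPETITIVE_PRICE = 1.0   # competitive benchmark price
    MONOPOLY_PRICE = 2.0      # monopoly benchmark price
    THRESHOLD = 1.3           # price above which fixed detection can trigger

    def fixed_detection(price: float) -> float:
        """Constant penalty probability once price exceeds a threshold."""
        return 0.5 if price > THRESHOLD else 0.0

    def linear_detection(price: float) -> float:
        """Penalty probability rises linearly with price above the competitive benchmark."""
        excess = max(price - COMPETITIVE_PRICE, 0.0)
        return min(excess / (MONOPOLY_PRICE - COMPETITIVE_PRICE), 1.0)

    def periodic_detection(t: int, period: int = 5) -> float:
        """Monitoring occurs only at fixed intervals: certain then, absent otherwise."""
        return 1.0 if t % period == 0 else 0.0

    def penalty_applied(prob: float) -> bool:
        """Draw whether a penalty is imposed this round."""
        return random.random() < prob

Under a schedule like the periodic one, the gap between monitoring rounds is exactly where the “strategic evasion” the authors report can occur: an agent that infers the monitoring interval can price high in unmonitored rounds and drop back when detection is certain.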