Coleman on Human Confrontation

Ronald J. Coleman (Georgetown U Law Center) has posted “Human Confrontation” (Wake Forest Law Review, Vol. 61, Forthcoming) on SSRN. Here is the abstract:

The U.S. Constitution’s Confrontation Clause ensures the criminally accused a right “to be confronted with the witnesses against” them. Justice Sotomayor recently referred to this clause as “[o]ne of the bedrock constitutional protections afforded to criminal defendants[.]” However, this right faces a new and existential threat. Rapid developments in law enforcement technology are reshaping the evidence available for use against criminal defendants. When an AI or algorithmic system places an alleged perpetrator at the scene of the crime or an automated forensic process produces a DNA report used to convict an alleged perpetrator, should this type of automated evidence invoke a right to confront? If so, how should confrontation be operationalized and on what theoretical basis?

Determining the Confrontation Clause’s application to automated statements is both critically important and highly under-theorized. Existing work treating this issue has largely discussed the scope of the threat to confrontation, called for more scholarship in this area, suggested that technology might not make the types of statements that would implicate a confrontation right, or found that direct confrontation of the technology itself could be sufficient.

This Article takes a different approach and posits that human confrontation is required. The prosecution must produce a human on behalf of relevant machine statements or such statements are inadmissible. Drawing upon the dignity, technology, policing, and confrontation literatures, it offers several contributions. First, it uses automated forensics to show that certain technology-generated statements should implicate confrontation. Second, it claims that for dignitary reasons only cross-examination of live human witnesses can meet the Confrontation Clause. Third, it reframes automation’s challenge to confrontation as a “humans in the loop” problem. Finally, it proposes a “proximate witness approach” that permits a human to testify on behalf of a machine, identifies an open set of principles to guide courts as to who can be a sufficient proximate witness, notes possible supplemental approaches, and discusses certain broader implications of requiring human confrontation. Human confrontation could check the power of the prosecution, aid system legitimacy, and ultimately act as a form of technology regulation.

Wells on Battlefield Evidence in the Age of Artificial Intelligence-Enabled Warfare

Winthrop Wells (International Institute for Justice and the Rule of Law) has posted “Battlefield Evidence in the Age of Artificial Intelligence-Enabled Warfare” (26 Chicago Journal of International Law 249 (2025)) on SSRN. Here is the abstract:

A number of emerging technologies increasingly prevalent on contemporary battlefields—notably unmanned autonomous systems (UAS) and various military applications of artificial intelligence (AI)—are working a sea change in the way that wars are fought. These technological developments also carry major implications for the investigation and prosecution of serious crimes committed in armed conflict, including for an under-examined yet potentially valuable form of evidence: information and material collected or obtained by military forces themselves.

Such “battlefield evidence” poses various legal and practical challenges. Yet it can play an important role in justice and accountability processes, where it addresses the longstanding obstacle of law enforcement actors’ inability to access conflict-torn crime scenes. Indeed, military-collected information and material has been critical to prosecutions of international crimes and terrorism offenses in recent years.

The present Article briefly surveys the historical record of battlefield evidence’s use. It demonstrates that previous technological advances—including in remote sensing, communications interception, biometrics, and digital data storage and analysis—not only enlarged and diversified the broader pool of military data but also had similar downstream effects on the (far) smaller subset of information shared and used for law enforcement purposes.

The Article then examines how current evolutions in the means and methods of warfare impact the utility of this increasingly prominent evidentiary tool. Ultimately, it is argued that the technical features of UAS and military AI give rise to significant, although qualified, opportunities for collection and exploitation of battlefield evidence. At the same time, these technologies and their broader impacts on the conduct of warfare risk inhibiting the sharing of such information and complicating its courtroom use.

Mastro et al. on Human vs. Machine: Behavioral Differences between Expert Humans and Language Models in Wargame Simulations

Oriana Mastro (Stanford U Freeman Spogli Institute for International Studies) et al. have posted “Human vs. Machine: Behavioral Differences between Expert Humans and Language Models in Wargame Simulations” (Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Vol. 7, 2024, DOI: 10.1609/aies.v7i1.31681) on SSRN. Here is the abstract:

To some, the advent of artificial intelligence (AI) promises better decision-making and increased military effectiveness while reducing the influence of human error and emotions. However, there is still debate about how AI systems, especially large language models (LLMs) that can be applied to many tasks, behave compared to humans in high-stakes military decision-making scenarios, where they carry increased risks of escalation and unnecessary conflict. To test this potential and scrutinize the use of LLMs for such purposes, we use a new wargame experiment with 107 national security experts designed to examine crisis escalation in a fictional US-China scenario and compare the behavior of human player teams to LLM-simulated team responses in separate simulations. Wargames have a long history in the development of military strategy and the response of nations to threats or attacks. Here, we find that the LLM-simulated responses can be more aggressive and significantly affected by changes in the scenario. We show considerable high-level agreement between the LLM and human responses, but significant quantitative and qualitative differences in individual actions and strategic tendencies. These differences depend on intrinsic biases in LLMs regarding the appropriate level of violence following strategic instructions, the choice of LLM, and whether the LLMs are tasked to decide for a team of players directly or first to simulate dialog between a team of players. When the LLMs simulate dialog, the resulting discussions lack quality and maintain a farcical harmony. The LLM simulations cannot account for human player characteristics, showing no significant difference even for extreme traits, such as “pacifist” or “aggressive sociopath.” When probing behavioral consistency across individual moves of the simulation, the tested LLMs deviated from each other but generally showed somewhat consistent behavior. Our results motivate policymakers to be cautious before granting autonomy or following AI-based strategy recommendations.

Williams & Westlake on A Taste of Armageddon: Legal Considerations for Lethal Autonomous Weapons Systems

Paul R. Williams (Public International Law & Policy Group) and Ryan Jane Westlake (Independent) have posted “A Taste of Armageddon: Legal Considerations for Lethal Autonomous Weapons Systems” (57 Case Western Reserve Journal of International Law 187 (2025)) on SSRN. Here is the abstract:

Lethal Autonomous Weapons Systems (LAWS) represent a profound shift in the nature of warfare, where machines, not humans, make life-or-death decisions on the battlefield. While these weapons offer strategic advantages, such as reducing human casualties and increasing operational efficiency, they also introduce significant legal, ethical, and accountability challenges. This Article explores the complexities surrounding the proliferation and use of LAWS, arguing that a total ban is unlikely due to the widespread accessibility and benefits these technologies offer to those who deploy them. Rather, this Article proposes the application of strict liability—traditionally a tort law concept—to the developers of LAWS as a means of promoting responsible development and ensuring accountability in the event a LAWS commits a war crime. By adapting this legal doctrine to the international criminal law context, the Article provides a pathway for holding those who design and deploy LAWS accountable for war crimes, thus bridging the gap between rapid technological advancement and the current limitations of international humanitarian law. The Article underscores the necessity of creative legal thinking to address the urgent and evolving challenges posed by lethal autonomous warfare technologies.

Müller et al. on Integrators at War: Mediating in AI-assisted Resort-to-Force Decisions

Dennis Müller (Centre for the Study of Existential Risk) et al. have posted “Integrators at War: Mediating in AI-assisted Resort-to-Force Decisions” on SSRN. Here is the abstract:

The integration of AI systems into the military domain is changing the way war-related decisions are made. It binds together three disparate groups of actors (developers, integrators, and users) and creates a relationship between these groups and the machine, embedded in the (pre-)existing organisational and system structures. In this article, we focus on the important, but often neglected, group of integrators within such a sociotechnical system. In complex human-machine configurations, integrators carry responsibility for linking the disparate groups of developers and users in the political and military system. To act as the mediating group requires a deep understanding of the other groups’ activities, perspectives and norms. We thus ask which challenges and shortcomings emerge from integrating AI systems into resort-to-force (RTF) decision-making processes, and how to address them. To answer this, we proceed in three steps. First, we conceptualise the relationship between different groups of actors and AI systems as a sociotechnical system. Second, we identify challenges within such systems for human-machine teaming in RTF decisions. We focus on challenges that arise a) from the technology itself, b) from the integrators’ role in the sociotechnical system, and c) from the human-machine interaction. Third, we provide policy recommendations to address these shortcomings when integrating AI systems into RTF decision-making structures.

Lubin on Technology and the Law of Jus Ante Bellum

Asaf Lubin (Indiana U Maurer Law) has posted “Technology and the Law of Jus Ante Bellum” (26(1) Chicago Journal of International Law (forthcoming, 2025)) on SSRN. Here is the abstract:

The temporal boundaries of international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between Jus Ad Bellum and Jus In Bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and offensive cyber capabilities, only further underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding into their software and user interfaces policy rationales, legal interpretations, and value judgments. Once deployed in battle, these choices have the potential to precondition warfighters and set in motion violations of international humanitarian law (IHL).

This article highlights glaring inadequacies in how the U.N. Charter, IHL, and International Criminal Law (ICL) currently regulate peacetime military preparations, particularly those involving disruptive technologies. The article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for Jus Ante Bellum (just preparation for war) as a new limb in the Just War Theory model. By reimagining international law’s temporalities, Jus Ante Bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.

Almenar et al. on The Protection of AI-Based Space Systems from a Data-Driven Governance Perspective

Roser Almenar (U Valencia Law) et al. have posted “The Protection of AI-Based Space Systems from a Data-Driven Governance Perspective” (75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024) on SSRN. Here is the abstract:

Space infrastructures have long represented the pinnacle of technological and engineering achievements. This complexity has been further amplified by the advent of the new space race, where private actors are taking the lead, alongside states, in deploying thousands of satellites in outer space. The outer space environment of 2040 will look very different from today. Spacecraft will need to maneuver more frequently to avoid potential collisions and to be more conscious of their surroundings. Indeed, as the frequency of events and the number of space objects rises, decision-making tasks will increasingly challenge human operators, especially as physical and temporal margins diminish. Such complexity is intensifying thanks to the synergy of space technologies and Artificial Intelligence (AI), which is revolutionizing the functioning of space systems.

The forward trajectory clarifies the significance that AI in outer space will retain in the years ahead. The Corpus Juris Spatialis finds itself at a crossroads, faced with the challenge of withstanding the technological advances catalyzed by the impending integration of AI into all facets of space missions. Given the ubiquitous nature of AI, its implementation will invariably pose multifaceted legal challenges across diverse aspects of International Space Law. The acquired autonomy of space assets prompts crucial questions regarding the legal standards applicable to AI in outer space, and how these autonomous space systems should be protected against hostile interference.

The main purpose of this paper, presented by the Space Law and Policy Project Group of the Space Generation Advisory Council (SGAC), is to examine the pivotal legal dimensions stemming from the automation of space-based applications from a ‘data-driven governance’ standpoint. The increase in production and acquisition of space data will only increase the sophistication of AI systems, necessitating that their data assets be reliable, accurate, and consistent to safeguard the long-term success of AI technologies in space missions. The paper aims to address the overarching legal challenges posed by the integration of AI into outer space operations, specifically on cybersecurity, intellectual property, and data governance, which are critical for safeguarding autonomous systems. By examining the various nuances of these domains, it seeks to contribute to a comprehensive understanding of the legal landscape of the current AI-space pairing. Ultimately, the conclusion will offer a set of recommendations to pave the way for a secure, ethical evolution of autonomous space systems in the near future.

Stark on Appropriate Human Judgement: U.S. Policy on Lethal Autonomous Weapons Systems and the Law of Armed Conflict

David Micah Stark (United States Air Force Academy) has posted “Appropriate Human Judgement: U.S. Policy on Lethal Autonomous Weapons Systems and the Law of Armed Conflict” on SSRN. Here is the abstract:

This paper examines the role of artificial intelligence (AI) and command responsibility on the modern battlefield. As nations become more reliant on AI and other autonomous systems, the employment of Lethal Autonomous Weapons Systems (LAWS) becomes a new concern for international law, particularly concerning issues of distinction and proportionality. This paper analyzes the role of command responsibility in the U.S. Armed Forces as a case study for the implementation of LAWS on the battlefield. Focusing on DODI 3000.09 and its requirement for “appropriate levels of human judgement,” this paper examines what constitutes human judgement and the Law of Armed Conflict obligations that standard imposes upon commanders. Examining the current definitions, we find that the language requires an unrealistically high approval authority level to implement LAWS policy effectively. This paper therefore proposes that approval authority for LAWS be delegated to lower levels of command and to doctrine-setting teams, depending on the type of LAWS, while simultaneously increasing the requisite training of those teams to include a more substantial education in the Law of Armed Conflict, enabling them to make educated decisions. By creating a stratified system of approval based on multiple factors, the U.S. and the DoD can more efficiently harness this emerging technology while maintaining their commitment to the laws of war. This article does not delve into issues of the morality of employing LAWS.

Eichensehr & Keats Citron on Resilience in a Digital Age

Kristen Eichensehr (Virginia Law) and Danielle Keats Citron (same) have posted “Resilience in a Digital Age” (University of Chicago Legal Forum, forthcoming 2024) on SSRN. Here is the abstract:

A resilience agenda is an essential part of protecting national security in a digital age. Digital technologies impact nearly all aspects of everyday life, from communications and medical care to electricity and government services. Societal reliance on digital tools should be paired with efforts to secure societal resilience. A resilience agenda involves preparing for, adapting to, withstanding, and recovering from disruptions in ways that advance societal interests, goals, and values. Emphasizing resilience offers several benefits: 1) it is threat agnostic or at least relatively threat neutral; 2) its inward focus emphasizes actions under the control of a targeted nation, rather than attempting to change behaviors of external adversaries; and 3) because resilience can address multiple threats simultaneously, it may be less subject to politicization. A resilience strategy is well-suited to address both disruptions to computer systems—whether from cyberattacks or natural disasters—and disruptions to the information environment from disinformation campaigns that sow discord. A resilience agenda is realistic, not defeatist, and fundamentally optimistic in its focus on how society can withstand and move forward from adverse events.

This Essay identifies tactics to bolster resilience against digitally enabled threats across three temporal phases: anticipating and preparing for disruptions, adapting to and withstanding disruptions, and recovering from disruptions. The tactics of a resilience strategy across these phases are dynamic and interconnected. Resilience tactics in the preparation phase could include creating redundancies (including low-tech or no-tech redundancies) or “pre-bunking” disinformation campaigns. Actions in the preparation phase help with adapting to and withstanding disruptions when they are ongoing. Forewarning people about cyberattacks can ensure they do not panic when crucial services cease to function. More persistent and recurrent threats like disinformation campaigns may require structural adaptations, like privacy law reform, to curb the exploitation of personal data to amplify democracy-damaging disinformation. Recovering from disruptions draws on steps taken earlier. Resilience tactics in the recovery phase could include reverting to manual controls, turning to pre-positioned hardware stockpiles that enable continuity of operations after cyberattacks, and supporting and protecting journalists and researchers subject to intimidating online abuse. These are just possibilities—a resilience strategy is ours to imagine and pursue, and doing so is a crucial step to strengthen national security in a digital age.

Swire et al. on Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures

Peter Swire (Georgia Institute of Technology – Scheller College of Business; Georgia Tech School of Cybersecurity and Privacy; Cross-Border Data Forum) et al. have posted “Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures” on SSRN. Here is the abstract:

This paper continues the research program begun in “The Effects of Data Localization on Cybersecurity – Organizational Effects” (“Effects”). This paper supplements Effects by organizing the risks to cybersecurity by the techniques, tactics, and procedures (“TTPs”) of threat actors and defenders. To categorize the TTPs, we rely on two authoritative approaches, the widely known MITRE ATT&CK Framework and 2019 guidelines on “The State of the Art” for cybersecurity supported by the European Union Agency for Cybersecurity (“ENISA”).

Under both approaches, localization laws disrupt the defenders’ ability to determine “The Who and the What” of an attack. Details about “who” is attacking often require access to personal data. Similarly, as an attacker moves through a defender’s system, there are often account names or other personal data involved in tracking “what” the attacker does in the system. Threat hunting and detection of privilege escalation are two essential defensive measures that are likely to be especially hard hit by limits on data transfer.

Similarly, localization laws can result in “Risks From Knowing Less Than the Attacker.” An essential part of good cyber defense is for the defenders to test the system through “red teaming,” including penetration (“pen”) testing. With localization, attackers can hop across borders to find holes in system defenses; defenders, however, are prohibited from moving information gathered in one locality to another. Localization thus limits defenders’ ability to test effectively for flaws in their own systems.

Part II of the paper examines the tension between the European Union’s regulatory requirements for cybersecurity and data protection. Part III examines the MITRE ATT&CK Framework and the ENISA guidelines, and how they identify relevant TTPs of a cybersecurity defense system.

Part IV supplements Part III by providing a quantitative model illustrating the effects of data localization under plausible assumptions. In the model, halving the number of IP addresses available to a defender would more than double the likely time until a new attack is detected.
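The abstract reports the model’s headline result without its specification. One way such a superlinear relationship can arise is if detection requires correlating multiple sightings of an attacker: with fewer visible addresses, sightings are rarer, and rare events coincide within a correlation window far less often. Below is a purely hypothetical sketch of that intuition, not the authors’ model; every function name and parameter value is invented for illustration.

```python
import random

def time_to_detection(visible_ips, rate_per_ip=0.001, window=1.0, rng=None):
    """Simulate time until two attacker sightings fall within `window`.

    Hypothetical toy model (not the paper's): sightings arrive as a
    Poisson process whose rate scales with the number of IP addresses
    the defender can observe.
    """
    rng = rng or random.Random()
    rate = rate_per_ip * visible_ips       # total sighting rate
    t, last = 0.0, None
    while True:
        t += rng.expovariate(rate)         # exponential inter-arrival gap
        if last is not None and t - last <= window:
            return t                       # correlated pair -> "detected"
        last = t

def mean_detection_time(visible_ips, trials=5000, seed=0):
    rng = random.Random(seed)
    total = sum(time_to_detection(visible_ips, rng=rng) for _ in range(trials))
    return total / trials

full = mean_detection_time(200)   # full visibility
half = mean_detection_time(100)   # localization halves the visible addresses
print(f"full: {full:.0f}  half: {half:.0f}  ratio: {half / full:.2f}")
# Under these invented parameters the ratio comes out well above 2.
```

Because detection in this sketch needs a coincidence of two sightings, the expected wait grows roughly with the inverse square of the sighting rate, so halving visibility more than doubles it; stricter correlation requirements would amplify the effect further.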

Part V extends the analysis to the cybersecurity approaches now being considered under the proposed European Union Cybersecurity Standard. That standard, written in the name of cybersecurity, would create serious risks for cybersecurity, including by undermining state-of-the-art defensive measures such as threat hunting, privilege-escalation detection, and pen testing.

Part VI offers conclusions. The U.S., Europe, and other nations face incessant and sophisticated cyberattacks. In the face of these threats, imagine that policymakers were considering a law that would degrade threat intelligence, leave systems open to privilege escalation, and bar effective pen testing and other red teaming. Such a proposed law would deserve great skepticism. As documented in this paper’s research, however, data localization laws appear to have such effects. This paper adds to the finding in Effects that “until and unless proponents of localization address these concerns, scholars, policymakers, and practitioners have strong reason to expect significant cybersecurity harms from hard localization requirements.”