Lubin on Technology and the Law of Jus Ante Bellum

Asaf Lubin (Indiana U Maurer Law) has posted “Technology and the Law of Jus Ante Bellum” (26(1) Chicago Journal of International Law (forthcoming, 2025)) on SSRN. Here is the abstract:

The temporal boundaries of international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between Jus Ad Bellum and Jus In Bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and cyber offensive capabilities, only further underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding into their software and user interfaces policy rationales, legal interpretations, and value judgments. Once deployed in battle, these choices have the potential to precondition warfighters and set in motion violations of international humanitarian law (IHL).     

This article highlights glaring inadequacies in how the U.N. Charter, IHL, and International Criminal Law (ICL) currently regulate peacetime military preparations, particularly those involving disruptive technologies. The article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for Jus Ante Bellum (just preparation for war) as a new limb in the Just War Theory model. By reimagining international law’s temporalities, Jus Ante Bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.

Almenar et al. on The Protection of AI-Based Space Systems from a Data-Driven Governance Perspective

Roser Almenar (U Valencia Law) et al. have posted “The Protection of AI-Based Space Systems from a Data-Driven Governance Perspective” (75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024) on SSRN. Here is the abstract:

Space infrastructures have long represented the pinnacle of technological and engineering achievements. This complexity has been further amplified by the advent of the new space race, where private actors are taking the lead, alongside states, in deploying thousands of satellites in outer space. The outer space environment of 2040 will look very different from today. Spacecraft will necessitate more frequent maneuvers to avoid potential collisions, with the need to be more conscious of their surroundings. Indeed, as the frequency of events and the number of space objects rises, decision-making tasks will increasingly challenge human operators, especially as physical and temporal margins diminish. This complexity is compounded by the synergy of space technologies and Artificial Intelligence (AI), which is revolutionizing the functioning of space systems.

The forward trajectory clarifies the significance that AI in outer space will retain in the years ahead. The Corpus Juris Spatialis finds itself at a crossroads, faced with the challenge of withstanding the technological advances catalyzed by the impending integration of AI into all facets of space missions. Given the ubiquitous nature of AI, its implementation will invariably pose multifaceted legal challenges across diverse aspects of International Space Law. The acquired autonomy of space assets prompts crucial questions regarding the legal standards applicable to AI in outer space, and how these autonomous space systems should be protected against hostile interference.

The main purpose of this paper, presented by the Space Law and Policy Project Group of the Space Generation Advisory Council (SGAC), is to examine the pivotal legal dimensions stemming from the automation of space-based applications from a ‘data-driven governance’ standpoint. The increase in production and acquisition of space data will only augment the sophistication of AI systems, therefore necessitating that their data assets be reliable, accurate, and consistent to safeguard the long-term success of AI technologies in space missions. The paper aims to address the overarching legal challenges posed by the integration of AI into outer space operations, specifically on cybersecurity, intellectual property, and data governance, which are critical for safeguarding autonomous systems. By examining the various nuances of these domains, it seeks to contribute to a comprehensive understanding of the legal landscape of the current AI-space pairing. Ultimately, the conclusion will offer a set of recommendations to pave the way for a secure, ethical evolution of autonomous space systems in the near future.

Stark on Appropriate Human Judgement: U.S. Policy on Lethal Autonomous Weapons Systems and The Law of Armed Conflict

David Micah Stark (United States Air Force Academy) has posted “Appropriate Human Judgement: U.S. Policy on Lethal Autonomous Weapons Systems and The Law of Armed Conflict” on SSRN. Here is the abstract:

This paper will examine the role of artificial intelligence (AI) and command responsibility on the modern battlefield. As nations become more reliant on AI and other autonomous systems, the employment of Lethal Autonomous Weapons Systems (LAWS) becomes a new concern for international law, particularly concerning issues of distinction and proportionality. This paper analyzes the role of Commander Responsibility in the U.S. Armed Forces as a case study for the implementation of LAWS on the battlefield. Focusing on DODI 3000.09 and its requirement for “appropriate levels of human judgment,” this paper examines what constitutes human judgment and the obligations under the Law of Armed Conflict that this standard imposes upon commanders. Examining the current definitions, we find that the language requires an unrealistic approval authority level to implement LAWS policy effectively. Therefore, this paper proposes that approval authority for LAWS should be delegated to lower levels of command and to doctrine-setting teams, depending on the type of LAWS, while simultaneously increasing the requisite training of those teams to include a more substantial education in the Law of Armed Conflict, enabling them to make educated decisions. By creating a stratified system of approval based on multiple factors, the U.S. and DoD can more efficiently harness this emerging technology while maintaining their commitment to the laws of war. This article does not delve into issues of the morality of employing LAWS.

Eichensehr & Keats Citron on Resilience in a Digital Age

Kristen Eichensehr (Virginia Law) and Danielle Keats Citron (same) have posted “Resilience in a Digital Age” (University of Chicago Legal Forum (forthcoming 2024)) on SSRN. Here is the abstract:

A resilience agenda is an essential part of protecting national security in a digital age. Digital technologies impact nearly all aspects of everyday life, from communications and medical care to electricity and government services. Societal reliance on digital tools should be paired with efforts to secure societal resilience. A resilience agenda involves preparing for, adapting to, withstanding, and recovering from disruptions in ways that advance societal interests, goals, and values. Emphasizing resilience offers several benefits: 1) It is threat agnostic or at least relatively threat neutral; 2) its inward focus emphasizes actions under the control of a targeted nation, rather than attempting to change behaviors of external adversaries; and 3) because resilience can address multiple threats simultaneously, it may be less subject to politicization. A resilience strategy is well-suited to address both disruptions to computer systems—whether from cyberattacks or natural disasters—and disruptions to the information environment from disinformation campaigns that sow discord. A resilience agenda is realistic, not defeatist, and fundamentally optimistic in its focus on how society can withstand and move forward from adverse events.

This Essay identifies tactics to bolster resilience against digitally enabled threats across three temporal phases: anticipating and preparing for disruptions, adapting to and withstanding disruptions, and recovering from disruptions. The tactics of a resilience strategy across these phases are dynamic and interconnected. Resilience tactics in the preparation phase could include creating redundancies (including low-tech or no-tech redundancies) or “pre-bunking” disinformation campaigns. Actions in the preparation phase help with adapting to and withstanding disruptions when they are ongoing. Forewarning people about cyberattacks can ensure they do not panic when crucial services cease to function. More persistent and recurrent threats like disinformation campaigns may require structural adaptations, like privacy law reform, to curb the exploitation of personal data to amplify democracy-damaging disinformation. Recovering from disruptions draws on steps taken earlier. Resilience tactics in the recovery phase could include reverting to manual controls, turning to pre-positioned hardware stockpiles that enable continuity of operations after cyberattacks, and supporting and protecting journalists and researchers subject to intimidating online abuse. These are just possibilities—a resilience strategy is ours to imagine and pursue, and doing so is a crucial step to strengthen national security in a digital age.

Swire et al. on Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures

Peter Swire (Georgia Institute of Technology – Scheller College of Business; Georgia Tech School of Cybersecurity and Privacy; Cross-Border Data Forum) and others have posted “Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures” on SSRN. Here is the abstract:

This paper continues the research program begun in “The Effects of Data Localization on Cybersecurity – Organizational Effects” (“Effects”). This paper supplements Effects by organizing the risks to cybersecurity by the techniques, tactics, and procedures (“TTPs”) of threat actors and defenders. To categorize the TTPs, we rely on two authoritative approaches, the widely known MITRE ATT&CK Framework and 2019 guidelines on “The State of the Art” for cybersecurity supported by the European Union Agency for Cybersecurity (“ENISA”).

Based on these two approaches, localization laws disrupt the defenders’ ability to determine “The Who and the What” of an attack. Details about “who” is attacking often require access to personal data. Similarly, as an attacker moves through a defender’s system, tracking “what” the attacker does in the system often requires access to account names or other personal data. Threat hunting and privilege escalation are two essential defensive measures that are likely to be especially hard hit by limits on data transfer.

Similarly, localization laws can result in “Risks From Knowing Less Than the Attacker.” An essential part of good cyber defense is for the defenders to test the system through “red teaming,” including penetration (“pen”) testing. With localization, attackers can hop across borders to find holes in system defenses; defenders, however, are prohibited from using information gathered in one locality to jump to another locality. Localization thus limits defenders from effectively testing flaws in their systems.

Part II of the paper examines the tension between the European Union’s regulatory requirements for cybersecurity and data protection. Part III examines the MITRE ATT&CK Framework and ENISA guidelines, and how they identify relevant TTPs of a cybersecurity defense system.

Part IV supplements Part III by providing a quantitative model illustrating the effects of data localization under plausible assumptions. In the model, halving the number of IP addresses available to a defender would more than double the likely time until a new attack is detected.

Part V extends the analysis to the cybersecurity approaches now being considered under the proposed European Union Cybersecurity Standard. That standard, written in the name of cybersecurity, would create serious risks for cybersecurity, including by undermining state-of-the-art defensive measures such as threat hunting, privilege escalation, and pen testing.

Part VI offers conclusions. The U.S., Europe, and other nations face incessant and sophisticated cyber-attacks. In the face of these threats, imagine that policymakers were considering a law that would degrade threat intelligence, leave systems open to privilege escalation, and bar effective pen testing and other red teaming. Such a proposed law would deserve great skepticism. As documented in this paper’s research, however, data localization laws appear to have such effects. This paper adds to the finding in Effects, that “until and unless proponents of localization address these concerns, scholars, policymakers, and practitioners have strong reason to expect significant cybersecurity harms from hard localization requirements.”

Crootof on AI and the Actual IHL Accountability Gap

Rebecca Crootof (U Richmond Law; Yale ISP) has posted “AI and the Actual IHL Accountability Gap” (in The Ethics of Automated Warfare and Artificial Intelligence, Centre for International Governance Innovation, 2022) on SSRN. Here is the abstract:

Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.

But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.

Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.

Hollis & Raustilia on The Global Governance of the Internet

Duncan B. Hollis (Temple University Law) and Kal Raustiala (UCLA Law) have posted “The Global Governance of the Internet” (in Duncan Snidal & Michael N. Barnett (eds.), The Oxford Handbook of International Institutions (2023)) on SSRN. Here is the abstract:

This essay surveys Internet governance as an international institution. We focus on three key aspects of information and communication technologies. First, we highlight how, unlike natural commons such as sea or space, digital governance involves a socio-technical system with a man-made architecture reflecting particular and contingent technological choices. Second, we explore how private actors historically played a significant role in making such choices, leading to the rise of existing “multistakeholder” governance frameworks. Third, we examine how these multistakeholder structures favored by the U.S. and its technology companies have come under increasing pressure from multilateral competitors, particularly those championed by China under the banner of “internet sovereignty,” as well as more modest efforts by the European Union to employ an approach akin to “embedded liberalism” for digital governance. The future of the Internet turns on how what we term these Californian, Chinese, and Carolingian visions of Internet governance compete, evolve, and interact. Thus, this essay characterizes Internet governance as a heterogeneous, dynamic, multi-layered set of principles, regimes and institutions—a regime complex—that not only governs cyberspace today, but has adapted and transformed along pathways that may serve as signposts for international institutions that regulate other global governance challenges.

Lubin on The Law and Politics of Ransomware

Asaf Lubin (Indiana U Maurer School of Law; Berkman Klein; Yale ISP; Federmann Cybersecurity Center, Hebrew U Law) has posted “The Law and Politics of Ransomware” (Vanderbilt Journal of Transnational Law, Vol. 55, 2022) on SSRN. Here is the abstract:

What do Lady Gaga, the Royal Zoological Society of Scotland, the city of Valdez in Alaska, and the court system of the Brazilian state of Rio Grande do Sul all have in common? They have all been victims of ransomware attacks, which are growing both in number and severity. In 2016, hackers perpetrated roughly 4,000 ransomware attacks a day worldwide, a figure which was already alarming. By 2020, however, “attacks leveled out at 20,000 to 30,000 per day in the US alone.” That is a ransomware attack every 11 seconds, each of which cost victims on average 19 days of network downtime and a payout of over $230,000. In 2021, global costs associated with ransomware recovery exceeded $20 billion.

This Article offers an account of the regulatory challenges associated with ransomware prevention. Situated within the broader literature on underenforcement, the Article explores the core causes for the limited criminalization, prosecution, and international cooperation that have exacerbated this wicked cybersecurity problem. In particular, the Article examines the resource allocation, forensic, managerial, jurisdictional, and informational challenges that have plagued the fight against digital extortions in the global commons.

To address these challenges, the Article makes the case for the international criminalization of ransomware. Relying on existing international regimes––namely, the 1979 Hostage Taking Convention, the 2000 Convention Against Transnational Organized Crime, and the customary prohibition against the harboring of terrorists––the Article makes the claim that most ransomware attacks are already criminalized under existing international law. In fact, the Article draws on historical analysis to portray the criminalization of ransomware as a “fourth generation” in the outlawry of Hostis Humani Generis (enemies of mankind).

The Article demonstrates the various opportunities that could arise from treating ransomware gangs as international criminals subject to universal jurisdiction. The Article focuses on three immediate consequences that could arise from such international criminalization: (1) Expanding policies for naming and shaming harboring states; (2) Authorizing extraterritorial cyber enforcement and prosecution; and (3) Advancing strategies for strengthening cybersecurity at home.

Grotto & Dempsey on Vulnerability Disclosure and Management for AI/ML Systems

AJ Grotto (Stanford University – Freeman Spogli Institute for International Studies) and James Dempsey (University of California, Berkeley – School of Law; Stanford Freeman Spogli) have posted “Vulnerability Disclosure and Management for AI/ML Systems: A Working Paper with Policy Recommendations” on SSRN. Here is the abstract:

Artificial intelligence systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attacks that involve evasion, data poisoning, model replication, and exploitation of traditional software flaws to deceive, manipulate, compromise, and render them ineffective. Yet too many organizations adopting AI/ML systems are oblivious to their vulnerabilities. Applying the cybersecurity policies of vulnerability disclosure and management to AI/ML can heighten appreciation of the technologies’ vulnerabilities in real-world contexts and inform strategies to manage cybersecurity risk associated with AI/ML systems. Federal policies and programs to improve cybersecurity should expressly address the unique vulnerabilities of AI-based systems, and policies and structures under development for AI governance should expressly include a cybersecurity component.

Taddeo & Blanchard on Ethical Principles for Artificial Intelligence in National Defence

Mariarosaria Taddeo (Oxford Internet Institute) and Alexander Blanchard (The Alan Turing Institute) have posted “Ethical Principles for Artificial Intelligence in National Defence” (Philosophy & Technology) on SSRN. Here is the abstract:

Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles (justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; and reliable AI systems) and related recommendations to foster ethically sound uses of AI for national defence purposes.