Crootof on AI and the Actual IHL Accountability Gap

Rebecca Crootof (U Richmond Law; Yale ISP) has posted “AI and the Actual IHL Accountability Gap” (in The Ethics of Automated Warfare and Artificial Intelligence, Centre for International Governance Innovation, 2022) on SSRN. Here is the abstract:

Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.

But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.

Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.

Hollis & Raustiala on The Global Governance of the Internet

Duncan B. Hollis (Temple University Law) and Kal Raustiala (UCLA Law) have posted “The Global Governance of the Internet” (in Duncan Snidal & Michael N. Barnett (eds.), The Oxford Handbook of International Institutions (2023)) on SSRN. Here is the abstract:

This essay surveys Internet governance as an international institution. We focus on three key aspects of information and communication technologies. First, we highlight how, unlike natural commons such as the sea or outer space, digital governance involves a socio-technical system with a man-made architecture reflecting particular and contingent technological choices. Second, we explore how private actors historically played a significant role in making such choices, leading to the rise of existing “multistakeholder” governance frameworks. Third, we examine how these multistakeholder structures favored by the U.S. and its technology companies have come under increasing pressure from multilateral competitors, particularly those championed by China under the banner of “internet sovereignty,” as well as more modest efforts by the European Union to employ an approach akin to “embedded liberalism” for digital governance. The future of the Internet turns on how what we term these Californian, Chinese, and Carolingian visions of Internet governance compete, evolve, and interact. Thus, this essay characterizes Internet governance as a heterogeneous, dynamic, multi-layered set of principles, regimes and institutions—a regime complex—that not only governs cyberspace today, but has adapted and transformed along pathways that may serve as signposts for international institutions that regulate other global governance challenges.

Lubin on The Law and Politics of Ransomware

Asaf Lubin (Indiana U Maurer School of Law; Berkman Klein; Yale ISP; Federmann Cybersecurity Center, Hebrew U Law) has posted “The Law and Politics of Ransomware” (Vanderbilt Journal of Transnational Law, Vol. 55, 2022) on SSRN. Here is the abstract:

What do Lady Gaga, the Royal Zoological Society of Scotland, the city of Valdez in Alaska, and the court system of the Brazilian state of Rio Grande do Sul all have in common? They have all been victims of ransomware attacks, which are growing both in number and severity. In 2016, hackers perpetrated roughly 4,000 ransomware attacks a day worldwide, a figure that was already alarming. By 2020, however, “attacks leveled out at 20,000 to 30,000 per day in the US alone.” That is a ransomware attack every 11 seconds, each of which cost victims on average 19 days of network downtime and a payout of over $230,000. In 2021, global costs associated with ransomware recovery exceeded $20 billion.

This Article offers an account of the regulatory challenges associated with ransomware prevention. Situated within the broader literature on underenforcement, the Article explores the core causes for the limited criminalization, prosecution, and international cooperation that have exacerbated this wicked cybersecurity problem. In particular, the Article examines the resource allocation, forensic, managerial, jurisdictional, and informational challenges that have plagued the fight against digital extortions in the global commons.

To address these challenges, the Article makes the case for the international criminalization of ransomware. Relying on existing international regimes––namely, the 1979 Hostage Taking Convention, the 2000 Convention Against Transnational Organized Crime, and the customary prohibition against the harboring of terrorists––the Article makes the claim that most ransomware attacks are already criminalized under existing international law. In fact, the Article draws on historical analysis to portray the criminalization of ransomware as a “fourth generation” in the outlawry of Hostis Humani Generis (enemies of mankind).

The Article demonstrates the various opportunities that could arise from treating ransomware gangs as international criminals subject to universal jurisdiction. The Article focuses on three immediate consequences that could arise from such international criminalization: (1) Expanding policies for naming and shaming harboring states; (2) Authorizing extraterritorial cyber enforcement and prosecution; and (3) Advancing strategies for strengthening cybersecurity at home.

Grotto & Dempsey on Vulnerability Disclosure and Management for AI/ML Systems

AJ Grotto (Stanford University – Freeman Spogli Institute for International Studies) and James Dempsey (University of California, Berkeley – School of Law; Stanford Freeman Spogli) have posted “Vulnerability Disclosure and Management for AI/ML Systems: A Working Paper with Policy Recommendations” on SSRN. Here is the abstract:

Artificial intelligence systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attacks that involve evasion, data poisoning, model replication, and exploitation of traditional software flaws to deceive, manipulate, compromise, and render them ineffective. Yet too many organizations adopting AI/ML systems are oblivious to their vulnerabilities. Applying the cybersecurity policies of vulnerability disclosure and management to AI/ML can heighten appreciation of the technologies’ vulnerabilities in real-world contexts and inform strategies to manage cybersecurity risk associated with AI/ML systems. Federal policies and programs to improve cybersecurity should expressly address the unique vulnerabilities of AI-based systems, and policies and structures under development for AI governance should expressly include a cybersecurity component.

Taddeo & Blanchard on Ethical Principles for Artificial Intelligence in National Defence

Mariarosaria Taddeo (Oxford Internet Institute) and Alexander Blanchard (The Alan Turing Institute) have posted “Ethical Principles for Artificial Intelligence in National Defence” (Philosophy & Technology) on SSRN. Here is the abstract:

Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles — justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; and reliable AI systems — and related recommendations to foster ethically sound uses of AI for national defence purposes.

Verstraete & Zarsky on Cybersecurity Spillovers

Mark Verstraete (UCLA School of Law) & Tal Zarsky (University of Haifa – Faculty of Law) have posted “Cybersecurity Spillovers” (Brigham Young University Law Review, Forthcoming) on SSRN. Here is the abstract:

This Article identifies and analyzes a previously unrecognized source of positive externalities within cybersecurity, which we term “cybersecurity spillovers.” Most commentators have focused on negative externalities and market failures, leading to a pervasive pessimism about the possibility of adequate cybersecurity protections. In response, this Article demonstrates that unique dynamics of cloud computing—most notably, indivisibility—may force cloud service firms to generate spillovers. These spillovers are additional security protections provided to common cloud users: clients who may not have been willing or able to acquire these security services otherwise. Furthermore, this additional source of security offsets some of the most pernicious effects of the negative externalities and market failures that commonly plague the cybersecurity ecosystem.

Alongside its descriptive analysis of cybersecurity spillovers, this Article alerts policymakers to potential analytical tools that can be used to identify the most beneficial spillovers. Moreover, it offers recommendations for specific interventions that will promote spillovers and improve the state of cybersecurity generally. In particular, this Article explains that policymakers could promote indivisibility and strengthen spillovers by tailoring liability rules. Such enhanced liability might incentivize premium cloud service clients to demand robust protections across the entire platform. In addition, the Article addresses the relationship between market concentration and spillovers. It provides recommendations for preserving spillovers even without concentration in the market for cloud storage. And finally, the Article suggests how the government’s cloud services procurement and tender processes can be utilized to amplify the beneficial effects of spillovers.