Klonowska & Woodcock on Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI

Klaudia Klonowska (T.M.C. Asser Institute) and Taylor Kate Woodcock (T.M.C. Asser Institute) have posted “Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI” (Forthcoming in Boutin B., Woodcock T. K. & Soltanzadeh S. (eds.), Decision at the Edge: Interdisciplinary Dilemmas in Military Artificial Intelligence, Asser Press (2025)) on SSRN. Here is the abstract:

The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision making in the conduct of war? As AI becomes increasingly embedded in targeting processes, legal and ethical debates often ask who performs better: humans or machines? In this Chapter, we unpack and critique the prevalence of comparisons between humans and AI systems, including in analyses of the fulfilment of legal obligations under International Humanitarian Law (IHL). We challenge this binary framing by highlighting misleading assumptions that neglect how the use of AI results in complex human-machine interactions that transform targeting practices. We unpack what is meant by ‘better performance’, demonstrating how prevailing metrics for speed and accuracy can create misleading expectations around the use of AI given the realities of warfare. We conclude that holistic but granular attention must be paid to the landscape of human-machine interactions to understand how the use of AI impacts compliance with IHL targeting obligations.

De Silva De Alwis on “Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems

Rangita De Silva De Alwis (U Pennsylvania Carey Law) has posted “‘Because We Take Our Values to War’: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems” (Chicago Journal of International Law, forthcoming) on SSRN. Here is the abstract:

Paragraph 2 of the UN General Assembly Resolution 78/241 requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

In implementation of this directive, on 1 February 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to the resolution and inviting their formal input. This paper analyzes, for the first time, the positions of Member States on AI-driven LAWS. Using a qualitative coding matrix, the paper examines Member States’ positions in relation to human-centric approaches to AI-driven LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should not only be whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but whether they can be free of data, algorithmic, and programmer bias. Although much has been written about algorithmic bias, an “algorithmic divide” can create an AI-driven weapons asymmetry between nation states depending on who has access to AI.

The article raises the question of whether Yale Law’s Oona Hathaway’s recent arguments on individual and state responsibility for patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future disputes, machines do and will continue to make life-and-death decisions without human involvement. Who, then, will be responsible for the “mistakes” in war?

In 2017 testimony to the US Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated: “… because we take our values to war … I do not think it is reasonable for us to put robots in charge of whether or not we take a human life.” The laws of war are rapidly approaching a critical crossroads in war’s relationship with technology.

Abiri on Mutually Assured Deregulation

Gilad Abiri (Peking U Transnational Law) has posted “Mutually Assured Deregulation” (Stanford Technology Law Review) on SSRN. Here is the abstract:

We have convinced ourselves that the way to make AI safe is to make it unsafe. Since 2022, many policymakers worldwide have embraced the “Regulation Sacrifice”—the belief that dismantling safety oversight will somehow deliver security through AI dominance. The reasoning follows a perilous pattern: fearing that China or the USA will dominate the AI landscape, we rush to eliminate any safeguard that might slow our progress. This Essay reveals the fatal flaw in such thinking. Though AI development certainly poses national security challenges, the solution demands stronger regulatory frameworks, not weaker ones. A race without guardrails doesn’t build competitive strength—it breeds shared danger.

The Regulation Sacrifice makes three promises. Each one is false. First, it promises durable technological leads. But as a form of dual-use software, AI capabilities spread like wildfire. Performance gaps between U.S. and Chinese systems collapsed from 9% to 2% in thirteen months. When advantages evaporate in months, sacrificing permanent safety for temporary speed makes no sense.

Second, it promises that deregulation accelerates innovation. The opposite is quite often true. Companies report that well-designed governance frameworks streamline their development. Investment flows toward regulated markets, not away from them. Clear rules reduce uncertainty. Uncertain liability creates paralysis. We have seen this movie before—environmental standards didn’t kill the auto industry; they created Tesla and BYD.

Third, the promise of enhanced national security through deregulation is perhaps the most dangerous fallacy, as it actually undermines security across all timeframes. In the near term, it hands our adversaries perfect tools for information warfare. In the medium term, it puts bioweapon capabilities in everyone’s hands. In the long term, it guarantees we’ll deploy AGI systems we cannot control, racing to be the first to push a button we can’t unpush.

The Regulation Sacrifice persists because it serves powerful interests, not because it serves security. Tech companies prefer freedom to accountability. Politicians prefer simple stories to complex truths. Together they are trying to convince us that recklessness is patriotism. But here is the punchline: these ideas create a system of mutually assured deregulation, where each nation’s sprint for advantage guarantees collective vulnerability. The only way to win this game is not to play.

Murray on Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence

Peter Murray (Oak Brook College Law) has posted “Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence” (Transactions on Engineering and Computing Sciences, volume 13, issue 02, 2025, DOI: 10.14738/tecs.1302.18401) on SSRN. Here is the abstract:

Crimebots are fueling the cybercrime pandemic by exploiting artificial intelligence (AI) to facilitate crimes such as fraud, misrepresentation, extortion, blackmail, identity theft, and security breaches. These AI-driven criminal activities pose a significant threat to individuals, businesses, online transactions, and even the integrity of the legal system. Crimebots enable unjust exonerations and wrongful convictions by fabricating evidence, creating deepfake alibis, and generating misleading crime reconstructions. In response, lawbots have emerged as a counterforce, designed to uphold justice. Legal professionals use lawbots to collect and analyze evidence, streamline legal processes, and enhance the administration of justice. To mitigate the risks posed by both crimebots and lawbots, many jurisdictions have established ethical guidelines promoting the responsible use of AI by lawyers and clients. Approximately 1.34% of lawyers have been involved in AI-related legal disputes, often revolving around issues such as fees, conflicts of interest, negligence, ethical violations, evidence tampering, and discrimination. Additional concerns include fraud, confidentiality breaches, harassment, and the misuse of AI for criminal purposes. For lawbots to succeed in the ongoing battle against crimebots, strict adherence to complex AI regulations is essential. Ensuring compliance with these guidelines minimizes malpractice risks, prevents professional sanctions, preserves client trust, and upholds the ethical and legal professional standards of excellence.

Nugent on Generative Cybersecurity

Nicholas Nugent (U Tennessee) has posted “Generative Cybersecurity” on SSRN. Here is the abstract:

Cybersecurity is experiencing a sea change, and AI is to blame. Bots, which now outnumber human users, prowl networks day and night, using deep learning to discover vulnerabilities and threatening to make all software essentially transparent. The number of skilled human hackers alive in the world no longer poses a meaningful constraint on the amount of damage that can be done, as even the least experienced “script kiddie” can outsource his dark arts to hundreds of self-executing AI agents, each independently working to worm its way into a target’s system. And the age of real-time deepfakes is now upon us, as scammers personally converse with the victims of their social engineering schemes while powerful hardware dynamically swaps their faces and voices with those of impersonated relatives or coworkers.

At the same time, traditional legal doctrines are showing their age. Firms have few legal options to stop bots from continually probing their systems for vulnerabilities, as courts long ago hollowed out the tort of cyber-trespass. The federal Computer Fraud and Abuse Act punishes hackers who use AI to break into protected computers just as surely as it punishes traditional hacking. But the 1986 statute is showing its age, its language poorly suited to situations in which adversaries trick lawful AI systems into voluntarily spilling their secrets without ever crossing the access barrier—the problem of “adversarial AI.” And wire fraud, theft, and right-of-publicity laws map awkwardly, if they map at all, onto certain elements of deepfake scams.

Existing liability frameworks compound the problem, making it difficult to hold AI companies accountable when bad actors use their tools to harm others. Negligence doctrines typically insulate vendors from secondary liability where products admit of substantial lawful uses or where intervening criminality breaks the chain of proximate causation. And firms that deploy defensive AI systems to fight fire with fire may likewise find themselves without a backstop if those systems fail, or unexpectedly wreak havoc on others, given tort law’s reluctance to apply product liability rules to software.

Despite a growing literature on legal issues related to artificial intelligence and a separate body of cybersecurity scholarship, the legal academy has not yet treated AI-driven cybersecurity as a distinct, system-level field of inquiry. Where scholars or policymakers acknowledge that a particular AI use case challenges a traditional rule, they tend to offer ad hoc fixes (or none at all). As a result, cybersecurity law risks falling behind in a rapidly evolving threat environment, leaving firms and individuals without adequate remedies.

This Article tackles the problem head-on, offering the first system-level treatment of the “AI problem” facing cybersecurity and, by extension, cybersecurity law. It provides a comprehensive taxonomy of the ways AI intersects with cybersecurity. That taxonomy organizes the field around three primary roles: using AI as a tool for malicious cyber-activity (“AI as Threat”), attacking AI systems (“AI as Target”), and leveraging AI’s defensive capabilities (“AI as Shield”). It builds out detailed subcategories grounded in specific technologies, operations, and injuries, and draws on the computer science literature and real-world incidents to show that each distinct threat is real rather than theoretical.

Not limited to technical description, the Article systematically identifies the existing laws and doctrines that apply to each distinct use case and exposes the structural gaps AI has created. It then advances an integrated reform agenda designed to realign cybersecurity law to a landscape defined by autonomous, learning systems. The Article proposes five core shifts: rethinking the doctrine of electronic trespass, decentering intrusion as a necessary element in hacking offenses, protecting individual likeness per se, establishing artificial duties of care, and recalibrating negligence doctrine for agentic systems. Taken together, these reforms would move cybersecurity law beyond its human- and intrusion-era origins and toward a design suited to the new reality of machine-mediated threats and security.

Lavi & Jabotinsky on Seeing is Believing? Deepfakes in Financial Markets

Michal Lavi (The Hadar Jabotinsky Center Interdisciplinary Research Financial Markets) and Hadar Yoana Jabotinsky (The Hadar Jabotinsky Center Interdisciplinary Research Financial Markets) have posted “Seeing is Believing? Deepfakes in Financial Markets” (44 Cardozo Arts & Ent. L.J. 55 (2026)) on SSRN. Here is the abstract:

“We let a genie out of the bottle when we developed nuclear weapons… AI is somewhat similar – it’s part way out of the bottle.”

– Warren Buffett, at his annual shareholder meeting.

An AI-powered tool recently mimicked Warren Buffett’s image and voice so convincingly that even his own family could have been deceived. This striking example highlights the transformative potential of voice cloning and deepfakes. This innovative technology leverages artificial intelligence (AI) to create hyper-realistic audio and video content. By blurring the boundaries between authenticity and synthetic creation, deepfakes make it possible to fabricate moments that never occurred. Recent advancements in AI and user-friendly software have made deepfakes more accessible, enabling even individuals with minimal technical skills to produce compelling fakes at little to no cost and further fueling their proliferation.

While deepfakes can be used positively and offer promising applications, such as restoring voices, animating art, or enhancing online shopping, they also have a dark side. Deepfakes have been weaponized to spread misinformation, create fake pornography, and disseminate fake news. Although research often focuses on deepfakes in social media, targeted scams using deepfakes are a growing concern. These scams often involve fabricated evidence, identity theft, or highly convincing impersonations executed with alarming precision, aimed at facilitating financial fraud.

Deepfakes pose significant threats to personal security, national security, financial stability, and democracy. Addressing their harmful effects is urgent. This Article asks how policymakers should regulate the use of this technology and confront and mitigate its harmful effects in the context of financial markets. Rejecting a one-size-fits-all regulatory framework, it advocates for tailored strategies. For social media deepfakes, the focus should be on balancing free speech with improved content moderation. For targeted scams, new security standards and verification mechanisms are imperative.

Contributing to the legal scholarship, this Article provides a comprehensive overview of the deepfake phenomenon, detailing its motivations, harms, and societal impacts. It emphasizes the overlooked yet pressing issue of deepfake-driven financial scams, analyzing the unique challenges these targeted distortions of reality pose. The Article critiques existing legislative efforts, arguing they are ill-suited to address narrow, targeted scams. Finally, it proposes tailored, context-specific solutions to mitigate the dangers posed by this technology. The Article concludes by underscoring that as the line between real and fake continues to blur, our legal, organizational and ethical frameworks must evolve to safeguard truth.

Brcic on The Memory Wars: AI Memory, Network Effects, and the Geopolitics of Cognitive Sovereignty

Mario Brcic (U Zagreb Electrical Engineering and Computing) has posted “The Memory Wars: AI Memory, Network Effects, and the Geopolitics of Cognitive Sovereignty” on SSRN. Here is the abstract:

The advent of continuously learning Artificial Intelligence (AI) assistants marks a paradigm shift from episodic interactions to persistent, memory-driven relationships. This paper introduces the concept of “Cognitive Sovereignty”, the ability of individuals, groups, and nations to maintain autonomous thought and preserve identity in the age of powerful AI systems, especially those that hold their deep personal memory. It argues that the primary risk of these technologies transcends traditional data privacy to become an issue of cognitive and geopolitical control. We propose “Network Effect 2.0,” a model where value scales with the depth of personalized memory, creating powerful cognitive moats and unprecedented user lock-in. We analyze the psychological risks of such systems, including cognitive offloading and identity dependency, by drawing on the “extended mind” thesis. These individual-level risks scale to geopolitical threats, such as a new form of digital colonialism and subtle shifting of public discourse. To counter these threats, we propose a policy framework centered on memory portability, transparency, sovereign cognitive infrastructure, and strategic alliances. This work reframes the discourse on AI assistants in an era of increasingly intimate machines, pointing to challenges to individual and national sovereignty.

Wei et al. on Recommendations and Reporting Checklist for Rigorous & Transparent Human Baselines in Model Evaluations

Kevin Wei (RAND Corporation) et al. have posted “Recommendations and Reporting Checklist for Rigorous & Transparent Human Baselines in Model Evaluations” (A version of this paper has been accepted to ICML 2025 as a position paper (spotlight), with the title: “Position: Human Baselines in Model Evaluations Need Rigor and Transparency (With Recommendations & Reporting Checklist).”) on SSRN. Here is the abstract:

In this position paper, we argue that human baselines in foundation model evaluations must be more rigorous and more transparent to enable meaningful comparisons of human vs. AI performance, and we provide recommendations and a reporting checklist towards this end. Human performance baselines are vital for the machine learning community, downstream users, and policymakers to interpret AI evaluations. Models are often claimed to achieve “super-human” performance, but existing baselining methods are neither sufficiently rigorous nor sufficiently well-documented to robustly measure and assess performance differences. Based on a meta-review of the measurement theory and AI evaluation literatures, we derive a framework with recommendations for designing, executing, and reporting human baselines. We synthesize our recommendations into a checklist that we use to systematically review 115 human baselines (studies) in foundation model evaluations and thus identify shortcomings in existing baselining methods; our checklist can also assist researchers in conducting human baselines and reporting results. We hope our work can advance more rigorous AI evaluation practices that can better serve both the research community and policymakers. Data is available at: https://github.com/kevinlwei/human-baselines

Lahmann et al. on The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence

Henning Lahmann (Leiden U Centre Law and Digital Technologies) et al. have posted “The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence” (Final version accepted and forthcoming in Ethics & Information Technology) on SSRN. Here is the abstract:

The article analyses proposed AI-supported systems to detect, monitor, and counter ‘cognitive warfare’ and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of ‘cognitive warfare’ as used in contemporary public security discourse, it describes the emergence of AI as a novel tool expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilise AI to devise countermeasures, ranging from AI-based early warning systems to state-run, internet-wide content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, freedom of expression, freedom of information, and self-determination. The proposed AI systems insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causal links between ‘cognitive warfare’ campaigns and undesired outcomes. As a result, using AI to counter ‘cognitive warfare’ risks harming the very rights and values such measures purportedly seek to protect. Policymakers should focus less on seemingly quick technological fixes. Instead, they should invest in long-term strategies against information disorder in digital communication ecosystems that are solidly grounded in the preservation of fundamental rights.

Kemper & Kolain on K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction

Carolin Kemper (German Research Institute Public Administration) and Michael Kolain (German Research Institute Public Administration (FÖV Speyer)) have posted “K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction” (UCLA Journal of Law and Technology Vol. 30 (2025), 1-95, https://uclajolt.com/k9-police-robots-vol-30-no-1/) on SSRN. Here is the abstract:

The robotized police force has arrived: Boston Dynamics’ “Spot” patrols cities like Honolulu, investigates drug labs in the Netherlands, explores a burned building in danger of collapsing in Germany, and has already assisted the police in responding to a home invasion in New York City. Quadruped robots might soon be on sentry duty at US borders. The Department of Homeland Security has procured Ghost Robotics’ Vision 60—a model that can be equipped with different payloads, including a weapons system. Canine police robots may patrol public spaces, explore dangerous environments, and might even use force if equipped with guns or pepper spray. This new gadget is not unlike previous tools deployed by the police, especially surveillance equipment and other forms of mechanized assistance. Even though they slightly resemble the old-fashioned police dog, their functionalities and affordances are structurally different from K9 units: canine robots capture data on their environment wherever they roam, and they communicate with citizens, e.g. by replaying orders or by establishing a two-way audio link. They can be fully controlled remotely over long distances, or they can automate their patrols by following preconfigured routes. The law does not currently address or suitably contain the risks associated with potentially armed canine police robots.

As a starting point, we analyze the use of canine robots by the police for surveillance, with special regard to existing data protection regulation for law enforcement in the European Union (EU). Additionally, we identify overarching regulatory challenges posed by their deployment. In what we call “citizen-robot-state interaction,” we combine the findings of human-robot interaction research with the legal and ethical requirements for the legitimate use of robots by state authorities, especially the police. We argue that the requirements of legitimate exercise of state authority hinge on how police use robots to mediate their interaction with citizens. Law enforcement agencies should not simply procure existing robot models used as military or industrial equipment. Before canine police robots rightfully roam our public and private spaces, police departments and lawmakers should carefully and comprehensively assess their purpose, which citizens’ rights they impinge on, and whether full accountability and liability are guaranteed. In our analysis, we use the existing canine robot models “Spot” and “Vision 60” as a starting point to identify potential deployment scenarios and analyze them as “citizen-robot-state interactions.” Our paper ultimately aims to lay a normative groundwork for future debates on the legitimate use of robots as a tool of modern policing. We conclude that, currently, canine robots are only suitable for particularly dangerous missions to keep police officers out of harm’s way.