Ríos on Can What an AI Produces be Understood and Unraveled

Mauro D. Ríos (The Internet Society) has posted “Can What an AI Produces be Understood and Unraveled” on SSRN. Here is the abstract:

Over the past few decades, AI has radically transformed industries as diverse as medicine and finance, providing solutions with high levels of efficiency and accuracy that were previously unattainable (Revolutionizing healthcare, 2023).

However, the sophistication of these models, which include deep neural networks with millions of parameters and sophisticated mechanisms for producing results, has led to the perception that they operate as a “black box” whose internal logic is inaccessible to human understanding (Hyperight, 2024).

Far from being an intrinsic feature of AI, this opacity is the result of both the volume and heterogeneity of training data and the lack of adequate methodologies to record and unravel each phase of the internal calculation (Stop Explaining Black Box Models, 2022).

To overcome these myths and reveal the “why” and “how” of AI decisions, various interpretability and auditing techniques have been developed. In addition, relevance propagation methodologies, such as Layer‐Wise Relevance Propagation (LRP), make it possible to track, layer by layer, the influence of each “digital neuron” on an AI’s final decision (Montavon et al., 2019).
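
To make the abstract’s reference to LRP concrete, here is a minimal, illustrative sketch of the LRP-epsilon rule on a tiny, hypothetical two-layer ReLU network (the network, weights, and epsilon value are assumptions for illustration, not taken from the paper): the relevance assigned to the output score is redistributed backwards, layer by layer, in proportion to each unit’s contribution to the next layer’s pre-activations.

```python
import numpy as np

# Illustrative sketch only: a hypothetical toy network, not the paper's method.
rng = np.random.default_rng(0)

# Toy fully connected ReLU network: 4 inputs -> 3 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def forward(x):
    """Forward pass, keeping each layer's activations for the backward LRP pass."""
    a0 = x
    a1 = np.maximum(0.0, a0 @ W1 + b1)   # hidden ReLU activations
    a2 = a1 @ W2 + b2                    # output scores (logits)
    return a0, a1, a2

def lrp_epsilon(a_lower, W, R_upper, eps=1e-6):
    """Redistribute relevance R_upper onto the lower layer (LRP-epsilon rule)."""
    z = a_lower @ W                             # upper-layer pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabiliser avoids division by zero
    s = R_upper / z                             # relevance per unit of pre-activation
    return a_lower * (s @ W.T)                  # each lower unit's share of the relevance

x = np.array([1.0, 0.5, -0.3, 2.0])
a0, a1, a2 = forward(x)

# Start from the score of the predicted class and propagate it back layer by layer.
R2 = np.zeros_like(a2)
top = int(np.argmax(a2))
R2[top] = a2[top]
R1 = lrp_epsilon(a1, W2, R2)   # relevance of hidden units
R0 = lrp_epsilon(a0, W1, R1)   # relevance of input features

print("input relevances:", R0)
print("relevance conserved (approx.):", R0.sum(), "vs", R2.sum())
```

Under this rule the input relevances approximately sum to the propagated output score, so each feature’s relevance can be read as its share of the final decision.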

While these tools offer an unprecedented level of visibility, their practical application involves addressing challenges of scale and computational cost. Exhaustive logging of execution traces and parameters during training demands distributed computing infrastructures and storage systems designed for metadata versioning (Unfooling Perturbation-Based Post Hoc Explainers, 2022).

A comprehensive understanding of AI processes requires not only the use of advanced interpretability techniques, but also the establishment of governance frameworks and structured documentation. Reports from organizations such as the Centre for International Governance Innovation (CIGI) underline the need for accountability policies that require detailed records of each phase of the AI model lifecycle, from data selection to production (Explainable AI Policy, 2023). Without these mechanisms, the aspiration to full interpretability will remain limited by practical and organizational barriers: not because we cannot know why an AI does what it does, but because we have failed to implement the appropriate mechanisms to know, thereby compromising transparency and trust in critical AI applications.

Knowing what an AI does and why is therefore within our reach, but it requires instruments, time, and resources; in each case we must decide whether these are justified, or else be selective about when to pursue such understanding.

Levantino on Assessing the Risks of Emotion Recognition Technology in Domestic Security Settings: What Safeguards against the Rise of ‘Emotional Dominance’?

Francesco Paolo Levantino (Scuola Superiore Sant’Anna di Pisa) has posted “Assessing the Risks of Emotion Recognition Technology in Domestic Security Settings: What Safeguards against the Rise of ‘Emotional Dominance’?” on SSRN. Here is the abstract:

In light of the growing interest in biometric technologies among public authorities, civil society, and international organisations, this chapter focusses on some risks associated with the use of Emotion Recognition Technology (ERT) by Law Enforcement Agencies (LEAs). In fact, despite significant attention being directed towards Facial Recognition Technology (FRT) and its uses for analogous purposes, ERT has received comparatively limited scrutiny. The chapter argues that this imbalance is reflected in the European Union’s AI Act, which inadequately addresses the potential risks associated with ERT – including its capacity to generate forms of ‘emotional dominance’. By contextualising ERT within the broader category of biometric systems, the discussion highlights the distinctive characteristics of ERT, while detailing its similarities and differences with FRT and biometric categorisation systems. The analysis shows that, beyond the intuitive perception of ERT as more intrusive due to its focus on extracting emotional states from bodily cues, its contested scientific foundations raise substantial concerns regarding its deployment in the security sector. Before some concluding remarks, the chapter draws historical parallels between the military origins of some biometric technologies and their adaptation for law enforcement, illustrating how the integration of military technologies into LEAs’ practices can affect their relationship with fundamental rights and freedoms in democratic societies. Also, it specifies how the AI Act classifies the use of ERT in law enforcement and highlights the core issues it identifies in this respect. The chapter emphasises the need for more robust regulatory frameworks to protect against the interferences posed by ERT in the context of law enforcement, asserting that existing protections under international and European human rights law should serve as a litmus test for the deployment of modern technologies by LEAs.

Hine et al. on The Impact of Modern Big Tech Antitrust on Digital Sovereignty

Emmie Hine (Yale U Digital Ethics Center) et al. have posted “The Impact of Modern Big Tech Antitrust on Digital Sovereignty” on SSRN. Here is the abstract:

This article examines the history of antitrust cases against Big Tech companies in the United States. It highlights a shift in enforcers’ attitudes away from the Chicago and post-Chicago schools of antitrust thought, which are informed by economic analysis, towards New Brandeisian thinking, which emphasizes structural concerns and broader consumer welfare; however, that thinking has yet to catch on in courtrooms. By contrasting the US’s antitrust strategy with those of the European Union and China, we argue that antitrust enforcement may hinder economic and technological competitiveness in the short term, but may have long-term benefits. Regarding global digital sovereignty, increased US enforcement likely would not impact the country’s global competitiveness, as it still presents a more favorable regulatory environment than the EU, and targeted economic measures prevent Chinese companies from being competitive in the US. New legislation may help address the complexities of modern digital markets so that the US can maintain its competitive edge in technology while enhancing consumer welfare.

Chaiehloudj on Musk v. OpenAI: Antitrust and the Boundaries of Strategic Litigation in the AI Sector

Walid Chaiehloudj (U Côte d’Azur) has posted “Musk v. OpenAI: Antitrust and the Boundaries of Strategic Litigation in the AI Sector” (European Competition and Regulatory Law Review (CoRe), forthcoming) on SSRN. Here is the abstract:

This paper analyzes the recent decision in Musk v. Altman (N.D. Cal., March 2025), in which the United States District Court denied a preliminary injunction sought by Elon Musk and his company xAI against OpenAI and Microsoft. The plaintiffs alleged that OpenAI and Microsoft had entered into an unlawful group boycott by pressuring investors not to fund competing AI companies, in violation of Section 1 of the Sherman Act. The court rejected the claim on both procedural and substantive grounds, notably finding that Musk lacked standing and that the evidence presented, consisting mainly of media articles, was insufficient to establish a plausible antitrust violation or irreparable harm.

Beyond its procedural lessons, Musk v. Altman illustrates the intensifying global battle for dominance in AI markets and the legal complexities accompanying it. The court’s decision ultimately favors a model of competition based on innovation rather than speculative or strategic litigation.

Bonadio & Felisberto on Copyrightability of AI Outputs: The US Copyright Office’s Perspective European Intellectual Property Review

Enrico Bonadio (City U London) and Honor Felisberto (U Lausanne Law) have posted “Copyrightability of AI Outputs: The US Copyright Office’s Perspective” (European Intellectual Property Review) on SSRN. Here is the abstract:

In August 2023, the United States Copyright Office (USCO) published a Notice of Inquiry (NOI) and request for comments on the intersection between Artificial Intelligence (AI) and copyright. The USCO had earlier announced it would issue a Report in several Parts analysing the comments received. On July 31, 2024, the first Part of the Report, on the topic of digital replicas, was published. The second Part, available since January 29, 2025, addresses the copyrightability of outputs generated by AI systems. This short note offers a summary of the latter, focusing on the USCO’s recommendations.

Takhshid on Virtual Dignitary Torts

Zahra Takhshid (U Denver Sturm College Law) has posted “Virtual Dignitary Torts” (The Journal of Tort Law forthcoming in Volume 18 Issue 1, 2025) on SSRN. Here is the abstract:

The emergence of the metaverse and spatial computing, which has enabled immersive digital interactions, raises complex legal questions. This work examines the feasibility of addressing dignitary torts, such as battery and intentional infliction of emotional distress, committed via avatars. The particular challenge for tort law is the nonphysical nature of self-representations in these virtual spaces. Drawing from the historical evolution of several dignitary torts, such as the law of battery and emotional harm, this article argues that the key to recognizing such harms is appreciating how these torts have expanded protection from the physical body to a broader concept of the “self.” In doing so, tort law has demonstrated both its willingness and capacity to recognize new forms of wrongs without sacrificing its core principles. Accordingly, this essay lays the groundwork for recognizing harms in virtual spaces and offers several initial considerations for a dignitary tort liability regime and the extension of the self in extended reality spaces. Bridging the gap between evolving technology and traditional tort law is a must in a world where virtual interactions carry increasingly real consequences.

Raso on Interoperable AI Regulation

Jennifer Raso (McGill U Law) has posted “Interoperable AI Regulation” (Forthcoming in the Canadian Journal of Law and Technology) on SSRN. Here is the abstract:

This article explores “interoperability” as a new goal in AI regulation in Canada and beyond. Drawing on sociotechnical, computer science, and digital government literatures, it traces interoperability’s conceptual genealogy to reveal an underlying politics that prioritizes harmony over discord and consistency over plurality. This politics, the article argues, is in tension with the distinct role of statutory law (as opposed to regulation) in a democratic society. Legislation is not simply a technology through which one achieves the smooth operation of governance. Rather, legislation is better understood as a “boundary object”: an information system through which members of different communities make sense of, and communicate about, complex phenomena. This sense-making includes and even requires disagreement, the management and resolution of which is a vital function of law and, indeed, of any information system.

Lee & Souther on Beyond Bias: AI as a Proxy Advisor

Choonsik Lee (U Rhode Island) and Matthew E. Souther (U South Carolina Darla Moore Business) have posted “Beyond Bias: AI as a Proxy Advisor” on SSRN. Here is the abstract:

After documenting a trend towards increasingly subjective proxy advisor voting guidelines, we evaluate the use of artificial intelligence as an unbiased proxy advisor for shareholder proposals. Using ISS guidelines, our AI model produces voting recommendations that match ISS in 79% of proposals and better predicts shareholder support than ISS recommendations alone. Disagreements between AI and ISS are more likely when firms disclose hiring a third-party governance consultant, suggesting these consultants, often the proxy advisor itself, may influence recommendations. These findings offer insight into proxy advisor conflicts of interest and demonstrate AI’s potential to improve transparency and objectivity in voting decisions.

Lahmann et al. on The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence

Henning Lahmann (Leiden U Centre Law and Digital Technologies) et al. have posted “The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence” (Final version accepted and forthcoming in Ethics & Information Technology) on SSRN. Here is the abstract:

The article analyses proposed AI-supported systems to detect, monitor, and counter ‘cognitive warfare’ and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of ‘cognitive warfare’ as used in contemporary public security discourse, it describes the emergence of AI as a novel tool expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilise AI to devise countermeasures, ranging from AI-based early warning systems to state-run, internet-wide content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, freedom of expression, freedom of information, and self-determination. The proposed AI systems insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causal links between ‘cognitive warfare’ campaigns and undesired outcomes. As a result, using AI to counter ‘cognitive warfare’ risks harming the very rights and values such measures purportedly seek to protect. Policymakers should focus less on seemingly quick technological fixes. Instead, they should invest in long-term strategies against information disorder in digital communication ecosystems that are solidly grounded in the preservation of fundamental rights.

Kemper & Kolain on K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction

Carolin Kemper (German Research Institute Public Administration) and Michael Kolain (German Research Institute Public Administration (FÖV Speyer)) have posted “K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction” (UCLA Journal of Law and Technology Vol. 30 (2025), 1-95, https://uclajolt.com/k9-police-robots-vol-30-no-1/) on SSRN. Here is the abstract:

The robotized police force has arrived: Boston Dynamics’ “Spot” patrols cities like Honolulu, investigates drug labs in the Netherlands, explores a burned building in danger of collapsing in Germany, and has already assisted the police in responding to a home invasion in New York City. Quadruped robots might soon be on sentry duty at US borders. The Department of Homeland Security has procured Ghost Robotics’ Vision 60, a model that can be equipped with different payloads, including a weapons system. Canine police robots may patrol public spaces, explore dangerous environments, and might even use force if equipped with guns or pepper spray. This new gadget is not unlike previous tools deployed by the police, especially surveillance equipment or mechanized help by other machines. Even though they slightly resemble the old-fashioned police dog, their functionalities and affordances are structurally different from K9 units: canine robots capture data on their environment wherever they roam, and they communicate with citizens, e.g. by replaying orders or by establishing a two-way audio link. They can be fully controlled by remote control over a long distance, or they can automate their patrol by following a preconfigured route. The law currently fails to suitably address and contain the risks associated with potentially armed canine police robots.

As a starting point, we analyze the use of canine robots by the police for surveillance, with special regard to existing data protection regulation for law enforcement in the European Union (EU). Additionally, we identify overarching regulatory challenges posed by their deployment. In what we call “citizen-robot-state interaction,” we combine the findings of human-robot interaction research with the legal and ethical requirements for a legitimate use of robots by state authorities, especially the police. We argue that the requirements of legitimate exercise of state authority hinge on how police use robots to mediate their interaction with citizens. Law enforcement agencies should not simply procure existing robot models used as military or industrial equipment. Before canine police robots rightfully roam our public and private spaces, police departments and lawmakers should carefully and comprehensively assess their purpose, which citizens’ rights they impinge on, and whether full accountability and liability are guaranteed. In our analysis, we use the existing canine robot models “Spot” and “Vision 60” as a starting point to identify potential deployment scenarios and analyze them as “citizen-robot-state interactions.” Our paper ultimately aims to lay a normative groundwork for future debates on the legitimate use of robots as a tool of modern policing. We conclude that, currently, canine robots are only suitable for particularly dangerous missions to keep police officers out of harm’s way.