Raso on Interoperable AI Regulation

Jennifer Raso (McGill U Law) has posted “Interoperable AI Regulation” (Forthcoming in the Canadian Journal of Law and Technology) on SSRN. Here is the abstract:

This article explores “interoperability” as a new goal in AI regulation in Canada and beyond. Drawing on sociotechnical, computer science, and digital government literatures, it traces interoperability’s conceptual genealogy to reveal an underlying politics that prioritizes harmony over discord and consistency over plurality. This politics, the article argues, is in tension with the distinct role of statutory law (as opposed to regulation) in a democratic society. Legislation is not simply a technology through which one achieves the smooth operation of governance. Rather, legislation is better understood as a “boundary object”: an information system through which members of different communities make sense of, and communicate about, complex phenomena. This sense-making includes and even requires disagreement, the management and resolution of which is a vital function of law and, indeed, of any information system.

Data Governance

Data governance refers to the legal, institutional, and organizational frameworks that determine how data is collected, accessed, shared, and used. In the context of artificial intelligence, data governance shapes which datasets are available for training, validation, deployment, and oversight, and under what conditions.

Legal approaches to data governance include privacy and data protection law, intellectual property and trade secret regimes, contractual restrictions, and sector-specific access rules. Institutional and market-based mechanisms, such as standards, licensing practices, and platform rules, also play a significant role in structuring data access.

In AI law and policy, data governance is often treated as a primary lever for influencing system behavior indirectly, by conditioning the inputs on which models depend rather than regulating model outputs directly.

AI Governance

AI governance refers to the set of legal, institutional, and organizational mechanisms used to shape the development, deployment, and use of artificial intelligence systems. These mechanisms include formal regulation, private law rules, standards-setting, market design, and internal governance practices within firms and public institutions.

In legal scholarship, AI governance is often contrasted with purely technical approaches to safety or alignment. Rather than focusing on how systems behave internally, governance frameworks emphasize how incentives, constraints, and accountability structures influence behavior externally.

AI governance is not limited to public regulation. It also encompasses private ordering through contracts, liability regimes, industry standards, and market institutions that condition access to data, compute, and deployment environments.

Lee & Souther on Beyond Bias: AI as a Proxy Advisor

Choonsik Lee (U Rhode Island) and Matthew E. Souther (U South Carolina Darla Moore Business) have posted “Beyond Bias: AI as a Proxy Advisor” on SSRN. Here is the abstract:

After documenting a trend towards increasingly subjective proxy advisor voting guidelines, we evaluate the use of artificial intelligence as an unbiased proxy advisor for shareholder proposals. Using ISS guidelines, our AI model produces voting recommendations that match ISS in 79% of proposals and better predicts shareholder support than ISS recommendations alone. Disagreements between AI and ISS are more likely when firms disclose hiring a third-party governance consultant, suggesting these consultants (often the proxy advisor itself) may influence recommendations. These findings offer insight into proxy advisor conflicts of interest and demonstrate AI’s potential to improve transparency and objectivity in voting decisions.

Lahmann et al. on The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence

Henning Lahmann (Leiden U Centre Law and Digital Technologies) et al. have posted “The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence” (Final version accepted and forthcoming in Ethics & Information Technology) on SSRN. Here is the abstract:

The article analyses proposed AI-supported systems to detect, monitor, and counter ‘cognitive warfare’ and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of ‘cognitive warfare’ as used in contemporary public security discourse, it describes the emergence of AI as a novel tool expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilise AI to devise countermeasures, ranging from AI-based early warning systems to state-run, internet-wide content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, freedom of expression, freedom of information, and self-determination. The proposed AI systems insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causal links between ‘cognitive warfare’ campaigns and undesired outcomes. As a result, using AI to counter ‘cognitive warfare’ risks harming the very rights and values such measures purportedly seek to protect. Policymakers should focus less on seemingly quick technological fixes. Instead, they should invest in long-term strategies against information disorder in digital communication ecosystems that are solidly grounded in the preservation of fundamental rights.

Kemper & Kolain on K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction

Carolin Kemper (German Research Institute Public Administration) and Michael Kolain (German Research Institute Public Administration (FÖV Speyer)) have posted “K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction” (UCLA Journal of Law and Technology Vol. 30 (2025), 1-95, https://uclajolt.com/k9-police-robots-vol-30-no-1/) on SSRN. Here is the abstract:

The advent of a robotized police force has come: Boston Dynamics’ “Spot” patrols cities like Honolulu, investigates drug labs in the Netherlands, explores a burned building in danger of collapsing in Germany, and has already assisted the police in responding to a home invasion in New York City. Quadruped robots might soon be on sentry duty at US borders. The Department of Homeland Security has procured Ghost Robotics’ Vision 60—a model that can be equipped with different payloads, including a weapons system. Canine police robots may patrol public spaces, explore dangerous environments, and might even use force if equipped with guns or pepper spray. This new gadget is not unlike previous tools deployed by the police, especially surveillance equipment and other forms of mechanized assistance. Even though they slightly resemble the old-fashioned police dog, their functionalities and affordances are structurally different from K9 units: canine robots capture data on their environment wherever they roam, and they communicate with citizens, e.g. by replaying orders or by establishing a two-way audio link. They can be controlled fully by remote control over long distances—or they automate their patrol by following a preconfigured route. The law currently does not suitably address or contain the risks associated with potentially armed canine police robots.

As a starting point, we analyze the use of canine robots by the police for surveillance, with special regard to existing data protection regulation for law enforcement in the European Union (EU). Additionally, we identify overarching regulatory challenges posed by their deployment. In what we call “citizen-robot-state interaction,” we combine the findings of human-robot interaction with the legal and ethical requirements for a legitimate use of robots by state authorities, especially the police. We argue that the requirements of legitimate exercise of state authority hinge on how police use robots to mediate their interaction with citizens. Law enforcement agencies should not simply procure existing robot models used as military or industrial equipment. Before canine police robots rightfully roam our public and private spaces, police departments and lawmakers should carefully and comprehensively assess their purpose, which citizens’ rights they impinge on, and whether full accountability and liability are guaranteed. In our analysis, we use the existing canine robot models “Spot” and “Vision 60” as a starting point to identify potential deployment scenarios and analyze those as “citizen-robot-state interactions.” Our paper ultimately aims to lay a normative groundwork for future debates on the legitimate use of robots as a tool of modern policing. We conclude that, currently, canine robots are only suitable for particularly dangerous missions to keep police officers out of harm’s way.

Haim & Yogev on What Do People Want from Algorithms? Public Perceptions of Algorithms in Government

Amit Haim (Tel Aviv U Buchmann Law) and Dvir Yogev (UC Berkeley Law) have posted “What Do People Want from Algorithms? Public Perceptions of Algorithms in Government” on SSRN. Here is the abstract:

Objectives: This study examines how specific attributes of Algorithmic Decision-Making Tools (ADTs), related to algorithm design and institutional governance, affect the public’s perceptions of implementing ADTs in government programs.

Hypotheses: We hypothesized that acceptability varies systematically by policy domain. Regarding algorithm design, we predicted that higher accuracy, transparency, and government in-house development would enhance acceptability. Institutional features were also expected to shape perceptions: explanations, stakeholder engagement, oversight mechanisms, and human involvement were anticipated to improve public perceptions.

Method: This study employed a conjoint experimental design with 1,213 U.S. adults. Participants evaluated five policy proposals, each featuring a proposal to implement an ADT. Each proposal included randomly generated attributes across nine dimensions. Participants decided on the ADT’s acceptability, fairness, and efficiency for each proposal. The analysis focused on the average marginal conditional effects (AMCE) of ADT attributes.

Results: A combination of attributes related to process individualization significantly enhanced the perceived acceptability of using algorithms by government. Participants preferred ADTs that elevate the agency of the stakeholder (decision explanations, hearing options, notice, and human involvement in the decision-making process). The policy domain mattered most for fairness and acceptability, while accuracy mattered most for efficiency perceptions.

Conclusions: Explaining decisions made using an algorithm, giving appropriate notice, offering a hearing option, and maintaining the supervision of a human agent are key components for securing public support when algorithmic systems are implemented.

Fitas et al. on Leveraging AI in Education: Benefits, Responsibilities, and Trends

Ricardo Fitas (Technical U Darmstadt) et al. have posted “Leveraging AI in Education: Benefits, Responsibilities, and Trends” on SSRN. Here is the abstract:

This chapter presents a review of the role of Artificial Intelligence (AI) in enhancing education outcomes for both students and teachers. This review includes the most recent papers discussing the impact of AI tools, including ChatGPT and other technologies, in the educational landscape. It explores the benefits of AI integration, such as personalized learning and increased efficiency, highlighting how these technologies tailor learning experiences to individual student needs and streamline administrative processes to enhance educational delivery. Adaptive learning systems and intelligent tutoring systems are also reviewed. Nevertheless, important responsibilities and ethical considerations intrinsic to the deployment of AI technologies must accompany such an integration. Therefore, a critical analysis of AI’s ethical considerations and potential misuse in education is also carried out in the present chapter. By presenting real-world case studies of successful AI integration, the chapter offers evidence of AI’s potential to positively transform educational outcomes while cautioning against adoption without addressing these ethical considerations. Furthermore, this chapter’s novelty relates to exploring emerging trends and predictions in the fields of AI and education. This study shows that, based on the success cases, it is possible to benefit from the positive impacts of AI while implementing protections against detrimental outcomes for users. The chapter is significantly relevant, as it provides stakeholders, users, and policymakers with a deeper understanding of the role of AI in contemporary education as a technology that aligns with educational values and the needs of society.

Coleman on Human Confrontation

Ronald J. Coleman (Georgetown U Law Center) has posted “Human Confrontation” (Wake Forest Law Review, Vol. 61, Forthcoming) on SSRN. Here is the abstract:

The U.S. Constitution’s Confrontation Clause ensures the criminally accused a right “to be confronted with the witnesses against” them. Justice Sotomayor recently referred to this clause as “[o]ne of the bedrock constitutional protections afforded to criminal defendants[.]” However, this right faces a new and existential threat. Rapid developments in law enforcement technology are reshaping the evidence available for use against criminal defendants. When an AI or algorithmic system places an alleged perpetrator at the scene of the crime or an automated forensic process produces a DNA report used to convict an alleged perpetrator, should this type of automated evidence invoke a right to confront? If so, how should confrontation be operationalized and on what theoretical basis?

Determining the Confrontation Clause’s application to automated statements is both critically important and highly under-theorized. Existing work treating this issue has largely discussed the scope of the threat to confrontation, called for more scholarship in this area, suggested that technology might not make the types of statements that would implicate a confrontation right, or found that direct confrontation of the technology itself could be sufficient.

This Article takes a different approach and posits that human confrontation is required. The prosecution must produce a human on behalf of relevant machine statements or such statements are inadmissible. Drawing upon the dignity, technology, policing, and confrontation literatures, it offers several contributions. First, it uses automated forensics to show that certain technology-generated statements should implicate confrontation. Second, it claims that for dignitary reasons only cross-examination of live human witnesses can meet the Confrontation Clause. Third, it reframes automation’s challenge to confrontation as a “humans in the loop” problem. Finally, it proposes a “proximate witness approach” that permits a human to testify on behalf of a machine, identifies an open set of principles to guide courts as to who can be a sufficient proximate witness, notes possible supplemental approaches, and discusses certain broader implications of requiring human confrontation. Human confrontation could check the power of the prosecution, aid system legitimacy, and ultimately act as a form of technology regulation.

Tang on Creative Labor and Platform Capitalism

Xiyin Tang (UCLA Law) has posted “Creative Labor and Platform Capitalism” (Forthcoming, UCLA Law Review, Volume 73 (2026)) on SSRN. Here is the abstract:

The conventional account of creativity and cultural production is one of passion, free expression, and self-fulfillment, a process whereby individuals can assert their autonomy and individuality in the world. This conventional account of creativity underlies prominent theories of First Amendment and intellectual property law, including the influential “semiotic democracy” literature, which posits that new digital technologies, by providing everyday individuals the tools to create and disseminate content, result in a better and more representative democracy. In this view, digital content creation is largely (1) done by amateurs; (2) done for free; and (3) conducive of greater freedom.

This Article argues that the conventional story of creativity, honed in the early days of the Internet, fails to account for significant shifts in how creative work is extracted, monetized, and exploited in the new platform economy. Increasingly, digital creation is done neither by amateurs, nor is it done for free. Instead, and as this Article discusses, fundamental shifts in the business models of the largest Internet platforms, led by YouTube, paved a path for the class of largely professionalized creators who increasingly rely on digital platforms to make a living today. In the new digital economy, monetization—in which users of digital platforms sell their content, and themselves, for a portion of the platform’s advertising revenues—not free sharing, reigns. And far from promoting freedom, such increased reliance on large platforms brings creators closer to gig workers—the Uber drivers, DoorDash delivery workers, and millions of other part-time laborers who increasingly find themselves at the mercy of the opaque algorithms of the new platform capitalism.

This reframing—of creation not as self-realization but as work that is both precarious and exploited, most notably as surplus data value—demands that any framework for regulating informational capitalism’s exploitation of labor is incomplete without considering how creative work is extracted and datafied in the digital platform economy.