Haim & Yogev on What Do People Want from Algorithms? Public Perceptions of Algorithms in Government

Amit Haim (Tel Aviv U Buchmann Law) and Dvir Yogev (UC Berkeley Law) have posted “What Do People Want from Algorithms? Public Perceptions of Algorithms in Government” on SSRN. Here is the abstract:

Objectives: This study examines how specific attributes of Algorithmic Decision-Making Tools (ADTs), related to algorithm design and institutional governance, affect the public’s perceptions of implementing ADTs in government programs.

Hypotheses: We hypothesized that acceptability varies systematically by policy domain. Regarding algorithm design, we predicted that higher accuracy, transparency, and government in-house development would enhance acceptability. Institutional features were also expected to shape perceptions: explanations, stakeholder engagement, oversight mechanisms, and human involvement were anticipated to increase public acceptance.

Method: This study employed a conjoint experimental design with 1,213 U.S. adults. Participants evaluated five policy proposals, each involving the implementation of an ADT. Each proposal included randomly generated attributes across nine dimensions. Participants rated each proposal’s ADT on acceptability, fairness, and efficiency. The analysis focused on the average marginal component effects (AMCEs) of ADT attributes.

Results: A combination of attributes related to process individualization significantly enhanced the perceived acceptability of government use of algorithms. Participants preferred ADTs that elevate the agency of the stakeholder (decision explanations, hearing options, notice, and human involvement in the decision-making process). The policy domain mattered most for fairness and acceptability, while accuracy mattered most for efficiency perceptions.

Conclusions: Explaining decisions made using an algorithm, giving appropriate notice, offering a hearing option, and maintaining human supervision are key components of public support when algorithmic systems are implemented.
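
The average marginal component effects mentioned in the Method section are, in fully randomized conjoint designs, commonly estimated by regressing the outcome on dummy-coded attribute levels with respondent-clustered standard errors. The following is only a minimal sketch of that generic approach, not the authors’ analysis code; the file name, column names, and attribute labels are hypothetical.

    # Minimal sketch of AMCE estimation for a fully randomized conjoint design.
    # File name, column names, and attribute labels are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("conjoint_responses.csv")  # one row per rated proposal

    # With full randomization, OLS coefficients on dummy-coded attribute levels
    # approximate each level's AMCE relative to its baseline level.
    model = smf.ols(
        "acceptability ~ C(accuracy) + C(transparency) + C(developer) + C(human_involvement)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

    print(model.params)  # attribute-level coefficients are the estimated AMCEs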

Fitas et al. on Leveraging AI in Education: Benefits, Responsibilities, and Trends

Ricardo Fitas (Technical U Darmstadt) et al. have posted “Leveraging AI in Education: Benefits, Responsibilities, and Trends” on SSRN. Here is the abstract:

This chapter presents a review of the role of Artificial Intelligence (AI) in enhancing educational outcomes for both students and teachers. The review covers the most recent papers discussing the impact of AI tools, including ChatGPT and other technologies, on the educational landscape. It explores the benefits of AI integration, such as personalized learning and increased efficiency, highlighting how these technologies tailor learning experiences to individual student needs and streamline administrative processes to enhance educational delivery. Adaptive learning systems and intelligent tutoring systems are also reviewed. Nevertheless, such integration must account for the important responsibilities and ethical considerations intrinsic to the deployment of AI technologies. The chapter therefore also offers a critical analysis of AI’s ethical considerations and potential for misuse in education. By presenting real-world case studies of successful AI integration, the chapter offers evidence of AI’s potential to positively transform educational outcomes while cautioning against adoption that does not address these ethical considerations. The chapter’s novelty lies in its exploration of emerging trends and predictions at the intersection of AI and education. Based on these success cases, the chapter shows that it is possible to benefit from AI’s positive impacts while protecting users against detrimental outcomes. It is particularly relevant in that it provides stakeholders, users, and policymakers with a deeper understanding of AI’s role in contemporary education as a technology that aligns with educational values and the needs of society.

Coleman on Human Confrontation

Ronald J. Coleman (Georgetown U Law Center) has posted “Human Confrontation” (Wake Forest Law Review, Vol. 61, Forthcoming) on SSRN. Here is the abstract:

The U.S. Constitution’s Confrontation Clause ensures the criminally accused a right “to be confronted with the witnesses against” them. Justice Sotomayor recently referred to this clause as “[o]ne of the bedrock constitutional protections afforded to criminal defendants[.]” However, this right faces a new and existential threat. Rapid developments in law enforcement technology are reshaping the evidence available for use against criminal defendants. When an AI or algorithmic system places an alleged perpetrator at the scene of the crime or an automated forensic process produces a DNA report used to convict an alleged perpetrator, should this type of automated evidence invoke a right to confront? If so, how should confrontation be operationalized and on what theoretical basis?

Determining the Confrontation Clause’s application to automated statements is both critically important and highly under-theorized. Existing work treating this issue has largely discussed the scope of the threat to confrontation, called for more scholarship in this area, suggested that technology might not make the types of statements that would implicate a confrontation right, or found that direct confrontation of the technology itself could be sufficient.

This Article takes a different approach and posits that human confrontation is required. The prosecution must produce a human on behalf of relevant machine statements or such statements are inadmissible. Drawing upon the dignity, technology, policing, and confrontation literatures, it offers several contributions. First, it uses automated forensics to show that certain technology-generated statements should implicate confrontation. Second, it claims that for dignitary reasons only cross-examination of live human witnesses can meet the Confrontation Clause. Third, it reframes automation’s challenge to confrontation as a “humans in the loop” problem. Finally, it proposes a “proximate witness approach” that permits a human to testify on behalf of a machine, identifies an open set of principles to guide courts as to who can be a sufficient proximate witness, notes possible supplemental approaches, and discusses certain broader implications of requiring human confrontation. Human confrontation could check the power of the prosecution, aid system legitimacy, and ultimately act as a form of technology regulation.

Tang on Creative Labor and Platform Capitalism

Xiyin Tang (UCLA Law) has posted “Creative Labor and Platform Capitalism” (Forthcoming, UCLA Law Review, Volume 73 (2026)) on SSRN. Here is the abstract:

The conventional account of creativity and cultural production is one of passion, free expression, and self-fulfillment, a process whereby individuals can assert their autonomy and individuality in the world. This conventional account of creativity underlies prominent theories of First Amendment and intellectual property law, including the influential “semiotic democracy” literature, which posits that new digital technologies, by providing everyday individuals the tools to create and disseminate content, result in a better and more representative democracy. In this view, digital content creation is largely (1) done by amateurs; (2) done for free; and (3) conducive to greater freedom.

This Article argues that the conventional story of creativity, honed in the early days of the Internet, fails to account for significant shifts in how creative work is extracted, monetized, and exploited in the new platform economy. Increasingly, digital creation is done neither by amateurs, nor is it done for free. Instead, and as this Article discusses, fundamental shifts in the business models of the largest Internet platforms, led by YouTube, paved a path for the class of largely professionalized creators who increasingly rely on digital platforms to make a living today. In the new digital economy, monetization—in which users of digital platforms sell their content, and themselves, for a portion of the platform’s advertising revenues—not free sharing, reigns. And far from promoting freedom, such increased reliance on large platforms brings creators closer to gig workers—the Uber drivers, DoorDash delivery workers, and millions of other part-time laborers who increasingly find themselves at the mercy of the opaque algorithms of the new platform capitalism.

This reframing—of creation not as self-realization but as work that is both precarious and exploited, most notably as surplus data value—demands that any framework for regulating informational capitalism’s exploitation of labor account for how creative work is extracted and datafied in the digital platform economy.

Nobel et al. on Unbundling AI Openness

Parth Nobel (Stanford U) et al. have posted “Unbundling AI Openness” (2026 Wisconsin Law Review (forthcoming)) on SSRN. Here is the abstract:

The debate over AI openness—whether to make components of an artificial intelligence system available for public inspection and modification—forces policymakers to balance innovation, democratized access, safety, and national security. By inviting startups and researchers into the fold, openness enables independent oversight and inclusive collaboration. But technology giants can also use it to entrench their own power, while adversaries can use it to shortcut years and billions of dollars in building systems, like China’s DeepSeek-R1, that rival our own. How we govern AI openness today will shape the future of AI and America’s role in it.

Policymakers and scholars grasp the stakes of AI openness, but the debate is trapped in a flawed premise: that AI is either “open” or “closed.” This dangerous oversimplification—inherited from the world of open source software—belies the complex calculus at the heart of AI openness. Unlike traditional software, AI is a composite technology built on a stack of discrete components—from compute to labor—controlled by different stakeholders with competing interests. Each component’s openness is neither a binary choice nor inherently desirable. Effective governance demands a nuanced understanding of how the relative openness of each component serves some goals while undermining others. Only then can we determine the trade-offs we are willing to make and how we hope to achieve them.

This Article aims to equip policymakers with the analytical toolkit to do just that. First, it introduces a novel taxonomy of “differential openness,” unbundling AI into its constituent components and illustrating how each one has its own spectrum of openness. Second, it uses this taxonomy to systematically analyze how each component’s relative openness necessitates intricate trade-offs both within and between policy goals. Third, it operationalizes these insights, providing policymakers with a playbook for how law can be precisely calibrated to achieve optimal configurations of component openness.

AI openness is neither all or nothing nor inherently good or evil—it is a tool that must be wielded with precision if it has any hope of serving the public interest.
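
One way to picture the Article’s “differential openness” taxonomy is as a per-component spectrum rather than a single open/closed flag. The sketch below is a hypothetical illustration of that framing, not the Article’s own scheme; the component names and openness levels are illustrative assumptions.

    # Hypothetical illustration: openness as a per-component spectrum.
    # Component names and levels are illustrative, not the Article's taxonomy.
    from dataclasses import dataclass
    from enum import IntEnum

    class Openness(IntEnum):
        CLOSED = 0  # no public access
        GATED = 1   # access under license, API, or vetting conditions
        OPEN = 2    # released for inspection and modification

    @dataclass
    class SystemOpenness:
        training_data: Openness
        model_weights: Openness
        training_code: Openness
        compute: Openness
        evaluations: Openness

    # Example configuration: released weights, gated code, closed data and compute.
    example = SystemOpenness(
        training_data=Openness.CLOSED,
        model_weights=Openness.OPEN,
        training_code=Openness.GATED,
        compute=Openness.CLOSED,
        evaluations=Openness.OPEN,
    )
    print(example)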

Duhl on Embedding AI in the Law School Classroom

Gregory M. Duhl (Mitchell Hamline School of Law) has posted “All In: Embedding AI in the Law School Classroom” on SSRN. Here is the abstract:

What is the irreducibly human element in legal education when AI can pass the bar exam, generate effective lectures, and provide personalized learning and academic support? This Article confronts that question head-on by documenting the planning and design of a comprehensive transformation of a required doctrinal law school course—first-year Contracts—with AI fully embedded throughout the course design. Instead of adding AI exercises to conventional pedagogy or creating a stand-alone AI course, this approach reimagines legal education for the AI era by integrating AI as a learning enhancer rather than a threat to be managed. The transformation serves Mitchell Hamline School of Law’s access-driven mission: AI helps create equity for diverse learners, prepares practice-ready professionals for legal practice transformed by AI, and shifts the institutional narrative from policing technology use to leveraging it pedagogically.

This Article details the roadmap I have followed for AI integration in a course that I am teaching in Spring 2026. It documents the beginning of my experience with throwing out the traditional legal education playbook and rethinking how I approach teaching using AI pedagogy within a profession in flux. Part I establishes the pedagogical rationale grounded in learning science and institutional mission. Part II describes the implementation strategy, including partnerships with instructional designers, faculty innovators, and legal technology companies. Part III details a course-wide series of specific exercises that develop AI literacy alongside doctrinal and skill mastery. Part IV addresses legitimate objections about bar preparation, analytical skills, academic integrity, and scalability beyond transactional courses. The Article concludes with a commitment to transparent empirical research through a pilot study launching in Spring 2026, acknowledging both the promise and the uncertainty of this pedagogical innovation. For legal educators grappling with AI’s rapid transformation of both education and practice, this Article offers a mission-driven, evidence-informed, yet still preliminary template for intentional change—and an invitation to experiment, adapt, and share results.

Karathanasis on The FRIA in EU AI Act: Governance, Rights, and Global Jurisdiction

Theodoros Karathanasis (MIAI AI Regulation Chair) has posted “The FRIA in EU AI Act: Governance, Rights, and Global Jurisdiction” on SSRN. Here is the abstract:

The Fundamental Rights Impact Assessment (FRIA) under the EU AI Act presents a critical yet problematic mechanism for mitigating AI’s harms to fundamental rights. The primary challenge is the complex operationalization of the FRIA, particularly given deployers’ nascent understanding of diverse fundamental rights beyond data protection, the absence of a comprehensive quantitative assessment methodology, and initial concerns regarding its administrative burden. The working paper conducts an in-depth analysis of the FRIA’s multifaceted components and its relationship with existing assessments such as the Data Protection Impact Assessment (DPIA), highlighting its broader scope covering non-personal data scenarios and its focus on “interferences” rather than solely “damages.” The most significant result is the EU’s expansive jurisdictional assertion, rooted in “territorializing” extraterritorial obligations and the “effects doctrine,” wherein the FRIA functions as a direct manifestation of internal due diligence to protect human rights globally. This signifies a normative shift away from traditional physical territory toward a framework that aims to diffuse EU rights-based standards universally, despite challenges such as sovereignty conflicts and AI’s black-box nature.

Devlin on There is No ‘Ethics of AI’: An Argument for User Responsibility

Emer Devlin (affiliation not provided to SSRN) has posted “There is No ‘Ethics of AI’: An Argument for User Responsibility” on SSRN. Here is the abstract:

As AI seeps into nearly every digital application we use, it remains fraught with controversy. This paper argues that many fears and ethical debates are misplaced – directed toward technologies that, lacking consciousness, are amoral. Through media and language analysis, it develops the ‘connotation problem’ to describe the tendency of AI models to repurpose language with preexisting connotations, thereby falsely attributing human-like qualities to the models. It will be shown that common criticisms of AI are, rather, criticisms of the user, and that ethical concerns should redirect moral responsibility to those who develop and utilize the technology. This paper concludes that educated integration must be pursued over the widely proposed prohibition.

Luria & Grybos on Policy Considerations for Socially Interactive AI Agents: A Systematic Literature Review

Michal Luria (Center for Democracy and Technology) and Emilie Grybos (U Pennsylvania Annenberg Communication) have posted “Policy Considerations for Socially Interactive AI Agents: A Systematic Literature Review” on SSRN. Here is the abstract:

Although interfaces that use human communication, such as language and nonverbal behavior, to interact with users have been around for decades, the introduction of ChatGPT and other large language models (LLMs) to the public began a new era of LLM-based artificial intelligence (AI). As a result, the implementation of LLM chatbots and agents quickly expanded across applications and domains. In this paper, we conducted a systematic literature review on AI agents that socially interact with users, to identify research-based policy considerations in the academic discourse. This review provides a critical baseline for understanding policy concerns and priorities, and highlights challenges that remain unresolved. By analyzing 110 peer-reviewed publications that discuss various aspects of agent-related policy, we highlight the most prominent issues and how each uniquely applies to socially interactive agent technology: Data and Privacy, Safety and Security, Human Rights, Agent Rights, Liability and Responsibility, and Transparency and Explainability. We identify key gaps in existing research given current media and policy discussions, and conclude with recommendations for researchers and policymakers alike.

Blaszczyk on Posthuman Copyright: AI, Copyright, and Legitimacy

Matt Blaszczyk (U Michigan Law) has posted “Posthuman Copyright: AI, Copyright, and Legitimacy” on SSRN. Here is the abstract:

Copyright’s human authorship requirement is an institutional attempt to assert legal, moral, and sociological legitimacy at a time of crisis. The U.S. Copyright Office, the courts, and the so-called copyright humanists portray the requirement as a beacon of copyright’s faith, meant to protect authors in the AI era. The minimal threshold for human authorship, however, forces us to question whether it is merely rhetoric, which the law has always employed regardless of its justification. This Article bridges the gap between doctrinal, theoretical, socio-legal, and constitutionalist scholarship, arguing that human authorship is an ideology to which the law is only nominally faithful. The Article analyzes the U.S. Copyright Office’s pronouncements, the D.C. Circuit ruling in Thaler v. Perlmutter, and the pending case of Allen v. Perlmutter, arguing that the Office’s approach, despite its rhetoric, is not meant to meaningfully stop the AI revolution. Whether interpreted broadly or narrowly, the human authorship requirement is unlikely to protect the interests of human authors in the AI era. Incorporating insights from copyright history and theoretical debates about romantic authorship, this Article argues that copyright has failed to protect those interests for over a century, instead favoring the interests of powerful corporations. If and when copyright becomes a regime for robots, the question is whether that expansion will also primarily benefit corporations. Arguably, copyright has never cared much for human authors—and it is time to question whether we should keep pretending otherwise.