Price on Clinicians in the Loop of Medical AI

W. Nicholson Price II (U Michigan Law) has posted “Clinicians in the Loop of Medical AI” (75 Emory L.J. 1265 (2025)) on SSRN. Here is the abstract:

As medical AI begins to mature as a health-care tool, the task of governance grows increasingly important. Ensuring that medical AI works, works where it’s used, and works for the patient in the moment is a challenging, multifaceted task. Some of this governance can be centralized—in review by FDA or by national accreditation labs, for instance. Some must be local, performed by the hospital or health system about to use the product in its own, unique environment. But a large amount of governance is left to the individual provider in the room, the human in the loop who presumably knows the patient and the health system environment, and who can ensure that the AI system is being used in a safe and effective manner. This is a hefty burden, and a growing body of empirical research shows that physicians and other providers are poorly prepared to carry it. How should policymakers and industry leaders develop standards for performance that account for the variability of humans in the loop and the variation among situations they will face? The notion that final responsibility belongs to the physician poorly reflects the reality of modern medical technology and practice. Policymakers will need to come to grips with this new reality if they aim to ensure the safe, effective use of AI accessible to patients across the entire spectrum of the health-care system.

Anthuvan et al. on Human-AI Collaboration in Academic Writing: A Narrative Review and the Scholarly HI-AI Loop Framework for Ethical Knowledge Production

Thamburaj Anthuvan (S.B. Patil Institute Management) et al. have posted “Human-AI Collaboration in Academic Writing: A Narrative Review and the Scholarly HI-AI Loop Framework for Ethical Knowledge Production” on SSRN. Here is the abstract:

This narrative literature review explores the evolving intersection of human and machine collaboration in academic writing, with a focus on literature summarization as a critical site of transformation. Synthesizing findings from 38 peer-reviewed studies published between 2020 and 2025, it examines the emergence of hybrid workflows where machine-generated drafts are refined, contextualized, and ethically validated by human scholars. The review identifies four core themes (tool capabilities, editorial oversight, ethical disclosure, and institutional readiness) that shape current practices and highlight unresolved tensions around authorship, transparency, and scholarly responsibility. Building on this synthesis, the paper introduces the Scholarly HI-AI Loop, a seven-stage framework that reimagines literature review as a co-productive and ethically accountable process. Unlike tool-centric audits, this framework offers a normative roadmap for integrating automation without compromising academic integrity. It positions human scholars not as passive reviewers, but as epistemic anchors who shape meaning, ensure accuracy, and safeguard ethical standards. The review offers actionable guidance for researchers, editors, institutions, and developers seeking to navigate this transition responsibly. By grounding its insights in both empirical patterns and conceptual analysis, the paper contributes to a growing conversation on how academic knowledge production can adapt without eroding its foundational values in the age of machine assistance.

Rubenstein on Federalism & Algorithms

David S. Rubenstein (Washburn U Law) has posted “Federalism & Algorithms” (Arizona Law Review, Vol. 67, Issue 4 (forthcoming Winter 2025)) on SSRN. Here is the abstract:

Artificial intelligence (AI) has catapulted to the forefront of political agendas at all levels of government. Across every major market and facet of society, policymakers face difficult tradeoffs between individual rights and collective welfare, innovation and regulation, economic growth and social equity. Federal and state institutions are resolving these tensions differently. The resulting policy patchwork may or may not be desirable, but the immediate point is that AI federalism is happening fast. To meet the moment, this Article provides the inaugural study of, and a research agenda for, “AI federalism.” First, the Article provides the origin story of AI federalism, mapping the political and doctrinal territory. Second, the Article bridges disciplines and audiences who care deeply about AI’s place in society yet fail to appreciate how federalism can help or hurt the cause. Third, this Article makes a positive case for embracing AI federalism. While centralized AI policy at the national level has surface appeal, getting there requires a shared commitment about what to optimize for. As a nation, we are nowhere close. Federalism does not provide the answers. Rather, it provides a platform for dialogue and dissent, regulatory innovation and iteration, intergovernmental cooperation and contestation. One is hard-pressed to find this array of structural affordances elsewhere in the law, and we likely need all of them to address AI’s sprawling economic and social disruptions.

Massarotto on Algorithmic Remedies for Google’s Data Monopoly

Giovanna Massarotto (U Pennsylvania) has posted “Algorithmic Remedies for Google’s Data Monopoly” on SSRN. Here is the abstract:

Algorithms and data are the building blocks of the digital economy. From Google’s search engine to Meta’s Instagram and OpenAI’s ChatGPT, all “Big Tech” firms rely on algorithms to collect and process vast amounts of data that power their services and AI models. While algorithms themselves can be efficient and impartial tools, Google’s strategic use of them, combined with exclusionary practices, has landed the company in federal court for monopolizing critical digital markets. On September 2, 2025, a judge required Google to grant rivals access to its data to address the company’s monopolization of data-reliant digital markets. Another judge is expected to impose remedies on Google in a separate antitrust proceeding, which could encompass data-sharing measures, including data facilities. This remedy would de facto regulate data-driven markets and influence the future of the emerging AI industry.

However, such data-sharing obligations in antitrust law create a classic resource allocation problem: who gets access, and how can courts ensure that access is fair and non-discriminatory? This article demonstrates that this legal challenge mirrors a problem computer science solved decades ago: ensuring that multiple parties can use a shared resource without conflict. Drawing on those algorithmic solutions, it then proposes a framework with systems that operate like a digital ‘take-a-number’ machine or a formal voting process to manage data distribution efficiently and fairly.

This article makes three important contributions to the existing scholarship in this field. First, it explains how data-sharing remedies can be designed and implemented, whether to address specific anticompetitive conduct or as part of broader regulatory frameworks. Second, it develops a comprehensive framework with three algorithmic approaches for resource allocation, translating computer science solutions into legal mechanisms. Third, this framework is applied to Google’s ongoing monopolization cases, guiding data-sharing remedies and promoting competition in AI and other data-driven markets.
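The ‘take-a-number’ mechanism the abstract invokes is well known in computer science as a ticket lock (and, in distributed form, as Lamport’s bakery algorithm). As a rough illustration of that underlying idea, not of Massarotto’s actual framework, a minimal Python sketch of a ticket-based allocator might look like this; all names are hypothetical:

```python
# Illustrative sketch only: a ticket-based allocator granting access to a
# shared data resource in strict arrival order (the 'take-a-number' idea).
import threading

class TicketAllocator:
    def __init__(self):
        self._lock = threading.Lock()
        self._next_ticket = 0            # next number the dispenser hands out
        self._now_serving = 0            # ticket currently allowed access
        self._turn = threading.Condition(self._lock)

    def take_number(self) -> int:
        """Dispense the next ticket: first come, first served."""
        with self._lock:
            ticket = self._next_ticket
            self._next_ticket += 1
            return ticket

    def access(self, ticket: int, use_resource) -> None:
        """Block until it is this ticket's turn, then use the resource."""
        with self._turn:
            while ticket != self._now_serving:
                self._turn.wait()
            try:
                use_resource()
            finally:
                self._now_serving += 1   # hand the resource to the next ticket
                self._turn.notify_all()

# Usage: t = alloc.take_number(); alloc.access(t, lambda: print(f"party {t} reads"))
```

The property that matters for the legal analogy is that access order is fixed mechanically by arrival, so no requester can be favored or starved: roughly the fairness and non-discrimination guarantee a court would need to verify.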

Kolt on Superintelligence and Law

Noam Kolt (Hebrew University of Jerusalem) has posted “Superintelligence and Law” (Harvard Journal of Law & Technology (forthcoming)) on SSRN. Here is the abstract:

The prospect of artificial superintelligence—AI agents that can generally outperform humans in cognitive tasks and economically valuable activities—will transform the legal order as we know it. Operating autonomously or under only limited human oversight, AI agents will assume a growing range of roles in the legal system. First, in making consequential decisions and taking real-world actions, AI agents will become de facto subjects of law. Second, to cooperate and compete with other actors (human or non-human), AI agents will harness conventional legal instruments and institutions such as contracts and courts, becoming consumers of law. Third, to the extent AI agents perform the functions of writing, interpreting, and administering law, they will become producers and enforcers of law. These developments, whenever they ultimately occur, will call into question fundamental assumptions in legal theory and doctrine, especially those that ground the legitimacy of legal institutions in their human origins. Attempts to align AI agents with extant human law will also face new challenges, as AI agents will be not only a primary target of law but also a core user of and contributor to law. To contend with the advent of superintelligence, lawmakers—new and old—will need to be clear-eyed, recognizing both the opportunity to shape legal institutions as society braces for superintelligence and the reality that, in the longer run, this may be a joint human-AI endeavor.

Laux on From Ethification to Juridification: Human Oversight and the Potential Crowding Out of Ethicists by Lawyers in AI Governance

Johann Laux (U Oxford, Oxford Internet Institute) has posted “From Ethification to Juridification: Human Oversight and the Potential Crowding Out of Ethicists by Lawyers in AI Governance” on SSRN. Here is the abstract:

Artificial Intelligence (AI) systems can pose harms to humans and societies. While it is widely acknowledged that human oversight of AI plays an important role in mitigating the technology’s risks, research on the organisational embedding of human oversight is only emerging. Drawing on socio-legal theory, AI ethics, and business ethics, this article seeks to make three contributions. First, it conceptualises human oversight of AI as a novel task for human labour in AI governance, induced by legal regulation and distinct from market-driven roles such as AI Ethicists. Second, the article presents human oversight as an instance of a “juridification” of AI governance, one in which lawyers may crowd out AI Ethicists, along with their ethical expertise and motivation, from key roles in AI governance. The normative implications of juridification could be significant, as there is some but not complete overlap between the normative interests protected by ethics and law. Third, the article examines how organisations may manage the ethical decision-making that persists within legally mandated oversight, comparing compliance- and integrity-based approaches. While the former gives organisations more top-down control and is thus more likely to be adopted, the latter may better preserve workers’ ethical motivations and offers potential for theoretical integration with the concept of ‘trustworthy AI’. The article concludes by calling for further empirical research into juridification’s impact on human labour in AI governance and the ensuing normative consequences.

Innocenti on Redefining Good Beekeeping: AI-Driven Sociotechnical Change in Agriculture

Marco Innocenti (U Milan) has posted “Redefining Good Beekeeping: AI-Driven Sociotechnical Change in Agriculture” on SSRN. Here is the abstract:

This chapter explores how emerging technologies available to farmers, and beekeepers in particular, reshape the meaning of their professional practices. It argues that decision-support systems, especially those operating with high autonomy, risk diminishing beekeepers’ decision-making autonomy when users are excluded from those systems’ processes. Exercising moral autonomy involves defining the goods that one’s profession should pursue, such as prioritising sustainability over profitability, valuing biodiversity, or caring for individual animals. The chapter challenges the restriction of this autonomy to the relationship between a farmer and her farm, a view that is increasingly limiting in the context of AI-driven tools. Instead, it advances the perspective that these tools create opportunities to redefine the norms and practices of the sociotechnical systems into which they are integrated, embedding farmers within new networks of social relations. Particular attention is devoted to the case of beekeepers’ involvement in the implementation of new technologies for biodiversity monitoring, aimed at ensuring compliance with ESRS E4 standards under the CSRD.

Hacker et al. on The Regulation of Fine-Tuning: Federated Compliance for Modified General-Purpose AI Models

Philipp Hacker (European U Viadrina Frankfurt (Oder), European New Digital Studies) and Matthias Holweg (U Oxford, Said Business) have posted “The Regulation of Fine-Tuning: Federated Compliance for Modified General-Purpose AI Models” on SSRN. Here is the abstract:

This paper addresses the regulatory and liability implications of modifying general-purpose AI (GPAI) models under the EU AI Act and related legal frameworks. We make five principal contributions to this debate. First, the analysis maps the spectrum of technical modifications to GPAI models and proposes a detailed taxonomy of these interventions and their associated compliance burdens. Second, the discussion clarifies when exactly a modifying entity qualifies as a GPAI provider under the AI Act, a qualification that significantly alters the compliance mandate. Third, we develop a novel hybrid legal test that distinguishes substantial from insubstantial modifications by combining a compute-based threshold with consequence scanning to assess the introduction or amplification of risk. Fourth, the paper examines liability under the revised Product Liability Directive (PLD) and tort law, arguing that entities substantially modifying GPAI models become “manufacturers” under the PLD and may face liability for defects. The paper aligns the concept of “substantial modification” across both regimes for legal coherence and argues for a one-to-one mapping between “new provider” (AI Act) and “new manufacturer” (PLD). Fifth, the paper offers concrete governance strategies for policymakers and managers, proposing a federated compliance structure based on joint testing of base and modified models, implementation of Failure Mode and Effects Analysis and consequence scanning, a new database for GPAI models and modifications, robust documentation, and adherence to voluntary codes of practice. The framework also proposes simplified compliance options for SMEs while maintaining their liability obligations. Overall, the paper aims to map out a proportionate and risk-sensitive regulatory framework for modified GPAI models that integrates technical, legal, and wider societal considerations.
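To make the first prong of that hybrid test concrete, here is a minimal sketch of how a compute-based screen might be operationalized. The one-third default and all names are illustrative placeholders, not figures or code taken from the paper or the AI Act:

```python
# Illustrative compute-based screen: flag a modification as potentially
# "substantial" when its training compute exceeds a set fraction of the
# base model's. The 1/3 default is an assumed placeholder, not the paper's.
from dataclasses import dataclass

@dataclass
class Modification:
    base_training_flops: float   # compute used to train the base GPAI model
    modification_flops: float    # compute spent on the fine-tune or merge

def is_substantial(mod: Modification, threshold: float = 1 / 3) -> bool:
    """First (quantitative) prong only; consequence scanning, the second
    prong assessing introduced or amplified risk, must follow regardless."""
    return mod.modification_flops > threshold * mod.base_training_flops

# Example: a fine-tune using 2% of the base compute does not trip the screen.
print(is_substantial(Modification(base_training_flops=1e25,
                                  modification_flops=2e23)))  # False
```

On this design, the compute prong serves only as a cheap filter; because a low-compute fine-tune can still introduce serious risk, a result of False routes the modification to consequence scanning rather than out of review.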

Cook et al. on Social Group Bias in AI Finance

Thomas R. Cook (Federal Reserve Bank Kansas City) and Sophia Kazinnik (Stanford U) have posted “Social Group Bias in AI Finance” on SSRN. Here is the abstract:

Financial institutions increasingly rely on large language models (LLMs) for high-stakes decision-making. However, these models risk perpetuating harmful biases if deployed without careful oversight. This paper investigates racial bias in LLMs through the lens of credit decision-making tasks, operating on the premise that biases identified here are indicative of broader concerns across financial applications. We introduce a reproducible, counterfactual testing framework that evaluates how models respond to simulated mortgage applicants identical in all attributes except race. Our results reveal significant race-based discrepancies, exceeding historically observed bias levels. Leveraging layer-wise analysis, we track the propagation of sensitive attributes through internal model representations. Building on this, we deploy a control-vector intervention that reduces racial disparities by up to 70% (33% on average) without impairing overall model performance. Our approach provides a transparent and practical toolkit for identifying and mitigating bias in financial LLM deployments.
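For readers who want a feel for the mechanics, the following sketch demonstrates the two ideas in the abstract, counterfactual pairing and a control-vector intervention, on synthetic embeddings rather than real LLM hidden states. Every name, dimension, and number is an assumption for illustration, not the authors’ code:

```python
# Synthetic demonstration of counterfactual pairing plus control-vector
# steering. Real use would extract hidden states from an LLM layer; here the
# "hidden states" and the bias construction are simulated.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # hidden-state dimension (assumed)

# Matched applicant pairs: identical profiles, only a race-signaling token
# differs. Group B representations carry an added synthetic bias direction.
bias_dir = rng.normal(size=d)
bias_dir /= np.linalg.norm(bias_dir)
base = rng.normal(size=(200, d))         # 200 matched applicants
group_a = base                           # race token A
group_b = base + 0.5 * bias_dir          # same applicants, race token B

# Control vector: mean difference between paired representations.
control = (group_b - group_a).mean(axis=0)
control /= np.linalg.norm(control)

def steer(h: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove each hidden state's component along control vector v."""
    return h - np.outer(h @ v, v)

def disparity(a: np.ndarray, b: np.ndarray) -> float:
    """Mean gap between groups along the (here known) bias direction."""
    return float((b @ bias_dir).mean() - (a @ bias_dir).mean())

print(f"disparity before: {disparity(group_a, group_b):.3f}")
print(f"disparity after:  "
      f"{disparity(steer(group_a, control), steer(group_b, control)):.3f}")
```

In a real deployment the control vector is derived from, and applied to, a chosen layer’s activations at inference time; the paper’s reported reduction of up to 70% suggests partial removal rather than the total cancellation this toy setup produces.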

Ciriello et al. on Compassionate AI Design, Governance, and Use

Raffaele Ciriello (U Sydney) and Angelina Chen (U Sydney) have posted “Compassionate AI Design, Governance, and Use” on SSRN. Here is the abstract:

The rapid rise of generative AI reshapes society, transforming jobs, relationships, and core beliefs about human essence. AI’s ability to simulate empathy, once considered uniquely human, offers promise in industries from marketing to healthcare but also risks exploiting emotional vulnerabilities, fostering dependency, and compromising privacy. These risks are particularly acute with AI companion chatbots, which mimic emotional speech but may erode genuine human connections. Rooted in Schopenhauer’s compassionate imperative, we present a novel framework for compassionate AI design, governance, and use, emphasizing equitable distribution of AI’s benefits and burdens based on stakeholder vulnerability. We advocate for responsible AI development that prioritizes empathy, dignity, and human flourishing.