Tschider on Humans Outside the Loop

Charlotte Tschider (Loyola U (Chicago) Law) has posted “Humans Outside the Loop” (Yale J. L. & Tech., Forthcoming) on SSRN. Here is the abstract:

Artificial Intelligence is not all artificial. After all, despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step of AI's creation. From data selection, decisional design, training, testing, and tuning to managing AI's development as it is used in the human world, humans exert agency and control over these choices and practices. AI is now ubiquitous: it is part of every sector and, for most people, their everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to actually remedy any wrongs.

This paper introduces the myriad choices humans make to create safe and effective AI products, then explores key issues in existing liability models. Significant issues in negligence and products liability schemes, including contractual limitations on liability, separate the organizations creating AI products from the actual harm, obscure the origin of issues, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the relative limits of tort law in these types of technologies, challenging long-held divisions and theoretical constructs and frustrating its goals. From the perspectives of both businesses licensing AI and AI users, this paper identifies key impediments to realizing tort goals and proposes an alternative regulatory scheme that reframes liability from the human in the loop to the humans outside the loop.

Swisher on The Right to (Human) Counsel

Keith Swisher (U Arizona Law) has posted “The Right to (Human) Counsel: Real Responsibility for Artificial Intelligence” (74 S.C. L. Rev. 823 (2023)) on SSRN. Here is the abstract:

The bench and bar have created and enforced a comprehensive system of ethical rules and regulation. In many respects, it is a unique and laudable system for regulating and guiding lawyers, and it has taken incremental measures to account for the wave of new technology involved in the practice of law. But it is not ready for the future. It rests on an assumption that humans will practice law. Although humans might tinker at the margins, review work product, or serve some other useful purposes, they likely will not be the ones doing most of the legal work in the future. Instead, AI counsel will be serving the public. For the system of ethical regulation to serve its core functions in the future, it needs to incorporate and regulate AI counsel. This will necessitate, among other things, bringing on new disciplines in the drafting of ethical guidelines and in the disciplinary process, along with a careful review and update of the ethical rules as applied to AI practicing law.

Freeman Engstrom & Haim on Regulating Government AI

David Freeman Engstrom (Stanford Law School) and Amit Haim (same) have posted “Regulating Government AI and the Challenge of Sociotechnical Design” (Annual Review of Law and Social Science, Vol. 19, pp. 277-298, 2023) on SSRN. Here is the abstract:

Artificial intelligence (AI) is transforming how governments work, from distributing public benefits, to identifying enforcement targets, to meting out sanctions. But given AI's twin capacity to cause and cure error, bias, and inequity, there is little consensus about how to regulate its use. This review advances debate by lifting up research at the intersection of computer science, organizational behavior, and law. First, pushing past the usual catalogs of algorithmic harms and benefits, we argue that what makes government AI most concerning is its steady advance into discretion-laden policy spaces where we have long tolerated less-than-full legal accountability. The challenge is how, but also whether, to fortify existing public law paradigms without hamstringing government or stymieing useful innovation. Second, we argue that sound regulation must connect emerging knowledge about internal agency practices in designing and implementing AI systems to longer-standing lessons about the limits of external legal constraints in inducing organizations to adopt desired practices. Meaningful accountability requires a more robust understanding of organizational behavior and law as AI permeates bureaucratic routines.

Guggenberger on Moderating Monopolies

Nikolas Guggenberger (University of Houston Law Center) has posted “Moderating Monopolies” (Berkeley Technology Law Journal, Vol. 38, No. 1, 2023) on SSRN. Here is the abstract:

Industrial organization predetermines content moderation online. At the core of today’s dysfunctions in the digital public sphere is a market power problem. Meta, Google, Apple, and a few other digital platforms control the infrastructure of the digital public sphere. A tiny group of corporations governs online speech, causing systemic problems to public discourse and individual harm to stakeholders. Current approaches to content moderation build on a deeply flawed market structure, addressing symptoms of systemic failures at best and cementing ailments at worst.

Market concentration creates communication monocultures susceptible to systemic failures and raises the stakes for individual content moderation decisions, like takedowns of posts or bans of individuals. These decisions are inherently prone to error, and those errors are magnified by the platforms' scale and market power. Platform monopolies also harm individual stakeholders: persistent monopolies lead to higher prices, lower quality, or less innovation. Because platforms' services include content moderation, degraded services may increase the error rate of takedown decisions and overexpose users to toxic content, misinformation, or harassment. Platform monopolies can also get away with discriminatory and exclusionary conduct more easily because users lack voice and exit opportunities.

Stricter antitrust enforcement is imperative, but contemporary antitrust doctrine alone cannot hope to provide sufficient relief to the digital public sphere. First, a narrowly understood consumer welfare standard overemphasizes easily quantifiable, short-term price effects. Second, the levels of concentration necessary to trigger antitrust scrutiny far exceed those of a market conducive to pluralistic discourse. Third, requiring specific anticompetitive conduct, the focal point of current antitrust doctrine, ignores the structural dysfunction that mighty bottlenecks create in public discourse, irrespective of the origins or even benevolent exercise of their power.

In this Article, I suggest three types of remedies to address the market power problem behind the dysfunctions in the digital public sphere. First, mandating active interoperability between platforms would drastically reduce lock-in effects. Second, scaling back quasi-property exclusivity online would spur follow-on innovation. Third, no-fault liability and broader objectives in antitrust doctrine would establish more effective counterweights to concentrating effects in the digital public sphere. While these pro-competitive measures cannot provide a panacea for all online woes, they would lower the stakes of inevitable content moderation decisions, incentivize investments in better decision-making processes, and contribute to healthier pluralistic discourse.

Frazier on Administrative Law and AI Risk

Kevin Frazier (St. Thomas University – School of Law) has posted “Administrative X-Risk: Pinpointing the Flaws of an AI Regulatory Scheme Reliant on Administrative Action” (Washburn Law Journal, Vol. 63, Forthcoming) on SSRN. Here is the abstract:

Because no existing entity has all the resources and attributes required to exclusively or predominantly regulate AI, advocates should foster regulatory resiliency and innovation by empowering a litany of entities to play a part in mitigating AI risk. In other words, advocates should focus federal resources on seeding regulatory resilience and innovation while placing plans to build a centralized, potentially fragile, and likely inadequate AI Agency on the back burner until more is known about AI, its risks, and the best means of mitigating those risks.

A good first step would be the creation of an “AI Smithsonian” that offers two improvements on an AI Agency: first, it would be structured in a way that avoids administrative x-risks such as the Congressional Review Act; and, second, it would have a responsible regulatory footprint, providing other entities with a source of reliable information and opportunities to collaborate rather than crowding out regulatory resilience by luring away their talent, resources, and authority.

The polycentric regulatory system called for by this essay has emerged in other regulatory contexts, such as automobile safety testing and standards development. Given the complex and shifting nature of AI development, however, waiting for a similar system to “emerge” will introduce an unacceptable amount of delay into this endeavor. That’s why this essay urges a proactive polycentric approach that assigns the U.S. federal government a regulatory role that reflects its institutional capacity, current and forthcoming legal limitations, and dynamic social license to take on certain tasks.

Rodriguez Maffioli on Copyright in Generative AI Training

Daniel Rodriguez Maffioli (Duke University School of Law) has posted “Copyright in Generative AI Training: Balancing Fair Use through Standardization and Transparency” on SSRN. Here is the abstract:

The rapid evolution of Generative Artificial Intelligence (GAI) has brought about transformative changes across industries, often raising challenging questions surrounding data rights, especially within the context of copyrighted content. This paper delves into the nuances of the relationship between GAI and the fair use doctrine, highlighting the complexities that emerge when copyrighted data serves as the backbone for the development of large-scale AI models. By combining Benjamin Sobel’s training data taxonomy with the distinct stages of the Generative AI cycle, a hybrid framework is presented, offering a more granular perspective on the applicability of fair use in GAI contexts. Recognizing the inherent limitations of current legal paradigms, the paper introduces actionable proposals, emphasizing the need for enhanced transparency, data provenance measures, and the implementation of Standardized Data Licensing Agreements (SDLAs). Such measures aim to bridge the gap between AI developers and copyright holders, facilitating smoother negotiations and fostering trust. While the core discussion revolves around the interplay of GAI and fair use, the paper acknowledges broader policy challenges in the AI domain, urging continued exploration. Overall, this work underscores the necessity of adaptive, collaborative, and transparent strategies in harmonizing the objectives of innovation with the imperatives of intellectual property rights in the GAI landscape.