Fagan on How Language Models Will Transform Law

Frank Fagan (South Texas College of Law Houston) has posted “A View of How Language Models Will Transform Law” (Tennessee Law Review, Forthcoming) on SSRN. Here is the abstract:

This Article considers the influence of Large Language Models (LLMs) on legal practice and the legal services industry. In the near term, LLMs will spur new legal work. Lawyers will be called upon to help litigate new questions over property rights in data, language model output, and lawyer-engineered prompts. Lawyers will additionally help judges decide what to do about new forms of torts, including legal malpractice, enabled by the casual and lightly supervised use of large language models. As legal rules governing the use of generative A.I. begin to clarify and settle, and as the technology fully matures, future lawyers faced with routine work will engage language models to save time and costs. Consequently, legal tasks will take less time to complete, and language models will enhance lawyer productivity.

While most commentators have focused exclusively on how LLMs will transform day-to-day law practice, a substantial structural change could be afoot within the legal sector as a whole. Large increases in productivity and attendant cost savings could encourage law firms and corporate legal departments to develop large language models in-house. A ten percent increase in attorney productivity would encourage an average-sized “Big Law” firm to reduce its associate headcount by 300 to 400 lawyers. This represents cost savings of $60 to $120 million—more than enough to pay for the development of a specialized LLM. Consider a senior partner who relies heavily on a proprietary language model to service a client. If the model is owned and controlled by the firm, then clients will be more strongly tied to the firm and may wish to remain there—even if the partner departs. Generative A.I. thus portends a shift in the balance of power between partners and firms. To what extent remains to be seen.

Eventually, LLMs will push lawyers into highly specialized and nuanced roles. After fully mature LLMs arrive, the lawyer will continue to play a central role in legal practice, but only in non-routine legal tasks. These tasks will primarily involve value judgments, such as the development of precedent or its reversal, or the allocation of property and other scarce resources. This new mix of lawyer-machine labor, where machines primarily carry out routine legal tasks, and lawyers handle the non-routine, will give rise to a growing demand for lawyers who can exercise good judgment and empathize with the winners and losers of social change. Overall, the Article suggests a possible future where there are fewer lawyers and greater consolidation of the legal sector.

Sunstein and Reisch on Liking Algorithms

Cass R. Sunstein (Harvard Law) & Lucia A. Reisch (El-Erian Institute; Copenhagen Business School) have posted “On Liking Algorithms” (Environmental and Resource Economics, symposium on Daniel Kahneman) on SSRN. Here is the abstract:

A great deal of work in behavioral science emphasizes that statistical predictions often outperform clinical predictions. Formulas tend to do better than people do, and algorithms tend to outperform human beings, including experts. One reason is that algorithms do not show inconsistency or “noise”; another reason is that they are often free from cognitive biases. These points have broad implications for risk assessment in domains that include health, safety, and the environment. Still, there is evidence that many people distrust algorithms and would prefer a human decisionmaker. We offer a set of preliminary findings about how a tested population chooses between a human being and an algorithm. In a simple choice between the two across diverse settings, people are about equally divided in their preference. We also find that a significant number of people are willing to shift in favor of algorithms when they learn something about them, but also that a significant number of people are unmoved by the relevant information. These findings have implications for the unruly current findings about “algorithm aversion” and “algorithm appreciation.”

Wu on AI Whistleblowers

Henry Wu (Yale Law School) has posted “AI Whistleblowers” on SSRN. Here is the abstract:

Advocates, technologists, and public officials have increasingly warned about the potential risks from the rapid deployment of artificial intelligence (AI). AI has been developed for use in biotech and life sciences, critical infrastructure, healthcare, social services, and lethal autonomous weapons. In these domains, researchers and policymakers have raised questions about transparency, bias, and accountability for potential harms. Much of the attention towards AI governance, however, has focused on public regulatory regimes built around standards, guidance, and mandated audits. Less attention has been paid to private regimes of AI governance, including corporate self-regulation. And little has been said about the possibilities for public-private AI governance, such as through the public regulation of private enforcement as in the financial services context. This essay broadens our understanding of AI governance by focusing on whistleblower protection.

Whistleblower protection is often discussed but rarely explained. Many technology companies have internal whistleblower policies. But activists and former whistleblowers have argued that these internal policies are not enough and have publicly called for regulation that protects whistleblowing. In the context of AI governance, researchers have long called for incentives and protections for whistleblowers. Despite these references to whistleblower protection regimes, little has been theorized about the scope and limits of AI whistleblowing. Can whistleblowing help address the myriad risks potentially posed by AI? How might existing whistleblower regimes inform the regulatory design around AI whistleblowers? What incentives, if any, should there be for AI whistleblowing? And how might we design a whistleblower regime in light of cybersecurity concerns from the potential leak of sensitive information?

This essay argues for enhanced whistleblower protections in the context of AI governance. Building upon recent scholarship identifying the challenges of whistleblowing in the technology sector, I discuss several problems unique to algorithmic whistleblowers. Algorithmic whistleblowers could range from scientists working on machine learning models in the biosciences to defense contractors implementing AI in weapons systems to private employees at AI labs. I canvass an array of AI-related risks and discuss recent governance proposals, explaining the need for whistleblowing as a governance mechanism. I consider how whistleblowing might work to address AI-associated risks in various sectors and sketch a novel regulatory scheme.

Gunder on Rule 11 and Generative AI

Jessica R. Gunder (U Idaho Law) has posted “Rule 11 Is No Match for Generative AI” (Stanford Technology Law Review, Forthcoming) on SSRN. Here is the abstract:

In a series of high-profile ethics debacles, attorneys who used generative AI technology found themselves in hot water after they negligently relied on fictitious cases and false statements of law crafted by the technology. These attorneys accepted the output they received from a generative AI product without verifying or validating it. Their embarrassing ethical breaches made national news and spurred judges to implement standing orders that require attorneys to disclose their use of AI technology.

Scholars were quick to criticize these standing orders, which are rife with problems. But are they needed? Or are the standing orders redundant because Federal Rule of Civil Procedure 11 can address this problem?

Generative AI, and the filing of briefs that contain fictitious cases and false statements of law, is testing the reach of Rule 11, and the rule is coming up short. This article is the first to study and evaluate whether Rule 11 can effectively address litigant use of generative AI output that contains fictitious cases and false statements of law. In this article, I contend that, while the failure to perform adequate research is conduct that can be reached through Rule 11, the rule is not well-suited to the task of regulating this behavior, and Rule 11’s inadequacy is likely spurring the creation of these standing orders. I then analyze the benefits and detriments that flow from these standing orders, setting forth various considerations for judges and jurisdictions to weigh when evaluating whether to impose their own standing orders, revise current standing orders, or promulgate local rules to regulate litigant use of generative AI technology.

G’sell on An Overview of the European Union Framework Governing Generative AI Models and Systems

Florence G’sell (Stanford Cyber Policy Center; University of Lorraine; Sciences Po) has posted “An Overview of the European Union Framework Governing Generative AI Models and Systems” on SSRN. Here is the abstract:

This study is a work in progress examining the legal framework governing generative AI models in the European Union. First, it studies the rules already applicable to generative AI models (GDPR, Copyright Law, Civil Liability, Digital Services Act). Second, it examines the latest version of the AI Act as adopted by the EU Parliament on March 13, 2024. Lastly, it studies the two Directives dealing with civil liability: the new Product Liability Directive adopted on March 12, 2024 and the proposal for an AI Liability Directive.

Stucke & Ezrachi on Antitrust & AI Supply Chains

Maurice E. Stucke (U Tennessee Law) and Ariel Ezrachi (Oxford Law) have posted “Antitrust & AI Supply Chains” on SSRN. Here is the abstract:

Will AI technology disrupt the current Big Tech Barons, foster competition, and ensure future disruptive innovation that improves our well-being? Or might the technology help a few ecosystems become even more powerful?

To explore this issue, our paper outlines the current digital market dynamics that lead to winner-take-most-or-all ecosystems. After examining the emerging AI foundation model supply chain, we consider several potential antitrust risks that may emerge should certain layers of the supply chain become concentrated and firms extend their power across layers. But the anticompetitive harms are not inevitable, as several countervailing factors might lessen or prevent these antitrust risks. We conclude with suggestions for the policy agenda to promote both healthy competition and innovation in the AI supply chain.

Haque, Rose & DeSetto on Patenting Crowdsourced Gen AI

Raina Haque (Wake Forest U Law), Simone A. Rose (same), and Nick DeSetto (same) have posted “The Non-Obvious Razor & Generative AI” (25 N.C. J.L. & TECH. 399 (2024)) on SSRN. Here is the abstract:

This article examines the challenges and prospects of crowd-sourcing generative AI systems (“GenAI”) in patent law as human and machine creativity become seamless. As GenAI technologies like ChatGPT-4 become ubiquitous, AI-generated solutions will be less innovative and will complicate tenets about patentability. An evolution of patent law’s non-obviousness standard provides an elegant solution – borrowing from philosophy, a “razor” – for addressing the impact of advanced AI in the innovation process.

This article’s thesis is distinct from the USPTO’s emphasis on whether or not AI systems can be inventors, because it assumes that human and artificial creativity will become indistinguishable. This article focuses on a reevaluation of utility patent law’s non-obviousness standard in light of the steady societal shift toward broad information and technological empowerment. By exploring GenAI’s role in augmenting creativity and its implications for the standard of “ordinary creativity,” this article suggests factors for a revised patentability examination methodology.

This reevaluation seeks to balance AI’s rapid advances with patent law’s goals to promote progress. There is precedent for the non-obviousness standard to absorb advancements in artificial intelligence that rely on crowd-sourced information. GenAI challenges traditional notions of invention and creativity. The legal construct against which “non-obviousness” is determined – the ordinary creativity of the “person of ordinary skill in the art” (PHOSITA) – should be recalibrated to account for GenAI and to encourage innovation while protecting public access to tools of creativity.

Sharkey on A Products Liability Framework for AI

Catherine M. Sharkey (NYU Law) has posted “A Products Liability Framework for AI” (Columbia Science and Technology Law Review, Vol. 25, No. 2, 2024) on SSRN. Here is the abstract:

A products liability framework, drawing inspiration from the regulation of FDA-approved medical products—which includes federal regulation as well as products liability—holds great promise for tackling many of the challenges artificial intelligence (AI) poses. Notwithstanding the new challenges that sophisticated AI technologies pose, products liability provides a conceptual framework capable of responding to the learning and iterative aspects of these technologies. Moreover, this framework provides a robust model of the feedback loop between tort liability and regulation.

The regulation of medical products provides an instructive point of departure. The FDA has recognized the need to revise its traditional paradigm for medical device regulation to fit adaptive AI/Machine Learning (ML) technologies, which enable continuous improvements and modifications to devices based on information gathered during use. AI/ML technologies should hasten an even more significant regulatory paradigm shift at the FDA away from a model that puts most of its emphasis (and resources) on ex ante premarket approval to one that highlights ongoing postmarket surveillance. As such a model takes form, tort (products) liability should continue to play a significant information-production and deterrence role, especially during the transition period before a new ex post regulatory framework is established.

Cusenza on Litigating Governmental Use of AI

Giulia G. Cusenza (U Udine) has posted “Litigating Governmental Use of AI” on SSRN. Here is the abstract:

In the last decade, US courts have decided cases related to the use of AI by governmental bodies. But while legal disputes have served as trailblazers for relevant policy documents and have been used by scholars to support specific arguments, this litigation has not been the subject of a systematic analysis. This paper fills this gap and provides a quantitative and qualitative study of how courts deal with litigation on the use of AI by governmental bodies. The analysis leads to an overarching conclusion, namely that judicial decisions almost exclusively rely on procedural grounds – specifically those concerning due process infringements – thus suggesting that substantive issues are typically addressed through procedural solutions. In turn, these procedural issues consist of six violations: lack of adequate notice and explanation, lack of contestability, lack of human oversight, lack of notice and comment procedures, lack of assessment procedures, and denial of the right to access information. By revealing this tendency and by identifying the six procedural violations, the analysis ultimately provides a taxonomy of the minimum requirements that any governmental body should comply with to shield its use of algorithmic systems from judicial review.

Salib on Abolition by Algorithm

Peter Salib (U Houston Law) has posted “Abolition by Algorithm” (Michigan Law Review, Forthcoming) on SSRN. Here is the abstract:

In one sense, America’s newest Abolitionist movement—advocating the elimination of policing and prison—has been a success. Following the 2020 Black Lives Matter protests, a small group of self-described radicals convinced a wide swath of ordinary liberals to accept a radical claim: Mere reforms cannot meaningfully reduce prison and policing’s serious harms. Only elimination can. On the other hand, Abolitionists have failed to secure lasting policy change. The difficulty is crime. In 2021, following a nationwide uptick in homicides, liberal support for Abolitionist proposals collapsed. Despite being newly “abolition curious,” left-leaning voters consistently rejected concrete abolitionist policies. Faced with the difficult choice between reducing prison and policing and controlling serious crime, voters consistently chose the latter.

This Article presents a policy approach that could accomplish both goals simultaneously: “Algorithmic Abolitionism.” Under Algorithmic Abolitionism, powerful machine learning algorithms would allocate policing and incarceration. They would abolish both maximally, up to the point at which crime would otherwise begin to rise. Results would be dramatic. Using existing technology, Algorithmic Abolitionist policies could: eliminate at least 42% and as many as 86% of Terry stops; free between 40 and 80% of incarcerated persons; eradicate nearly all traffic stops; and remove police patrols from between 50 and 85% of city blocks. All without causing more crime.

Beyond these practical effects, Algorithmic Abolitionist thinking generates new and important normative insights in the debate over algorithmic discrimination. In short, in an Algorithmic Abolitionist world, traditional frameworks for understanding and measuring such discrimination fall apart. They sometimes rate Algorithmic Abolitionist policies as unfair, even when those policies massively reduce the number of people mistreated because of their race. And they rate other policies as fair, even when those policies would cause far more discriminatory harm. To overcome these problems, this Article introduces a new framework for understanding—and a new quantitative tool for measuring—algorithmic discrimination: “bias-impact.” It then explores the complex array of normative trade-offs that bias-impact analyses reveal. As the Article shows, bias-impact analysis will be vital not just in the criminal enforcement context, but in the wide range of settings—healthcare, finance, employment—where Algorithmic Abolitionist designs are possible.