Lior on A Quantum of Privacy

Anat Lior (Drexel; Yale ISP) has posted “A Quantum of Privacy” (Nevada Law Journal, Vol. 25, 2024) on SSRN. Here is the abstract:

Quantum technologies are poised to become leading technologies in the coming years, with forecasts suggesting their imminent commercial availability. In August 2023, Google unveiled a quantum-resistant security key, and NIST is advancing post-quantum cryptography standards. This Article sheds light on these highly anticipated advancements, particularly in quantum computing, and explores the potential privacy implications that could arise alongside their development and integration into the commercial market. It proposes a comprehensive, long-term approach to addressing these privacy challenges early on, given the technology’s nascent stage and current lack of readiness for widespread commercial use. This timeframe gives regulators ample opportunity to deliberate thoroughly on an appropriate legislative framework, one that is viable, effective, enforceable, and impactful.

The conversation surrounding the potential privacy risks posed by quantum technologies is in its initial stages. This Article seeks to expand and deepen this discourse, with a particular emphasis on quantum computing. It aims to achieve this by offering an overview of various quantum technologies, outlining the potential harms—focusing on privacy concerns—that these technologies might introduce, and highlighting the deficiencies in the current US framework to counteract these risks. Furthermore, it will delineate three phases of a policy framework that the US could presently adopt to address the potential privacy risks associated with quantum technologies.

The proposed framework consists of three interconnected stages that form a continuous feedback loop. This loop facilitates the implementation of the framework and enhances our understanding of these emerging technologies. The first stage focuses on quantum education, ensuring widespread understanding within society and equipping present and future policymakers with the expertise needed to develop meaningful legislation. As the first stage cultivates a diverse cohort of experts in this domain, the second stage concentrates on promoting robust industry standards through standardization, encouraging compatibility and transparency among the few companies operating in this sector. This phase also fosters the emergence of optimal privacy practices and norms in the quantum sphere. Building upon the insights gained from the preceding phases, the third stage advocates for the formulation of nuanced regulations, supplemented by soft regulatory tools such as cloud-based quantum computing and insurance policies. This phase of ‘hard and soft regulation’ does not necessarily entail a comprehensive ‘Quantum Act’ but rather involves refining existing regulations or crafting specific legislation that addresses particular aspects of quantum technologies. Unlike the comprehensive AI bills currently under discussion, an all-encompassing quantum act seems impractical in the intricate and unpredictable landscape of quantum technologies.

Ohm on Understanding LLM Fine-Tuning

Paul Ohm (Georgetown Law) has posted “Focusing on Fine-Tuning: Understanding the Four Pathways for Shaping Generative AI” (25 Colum. Sci. & Tech. L. Rev. 214 (2024)) on SSRN. Here is the abstract:

Those who design and deploy generative-AI models, such as Large Language Models like GPT-4 or image diffusion models like Stable Diffusion, can shape model behavior in four distinct stages: pretraining, fine-tuning, in-context learning, and input-and-output filtering. The four stages differ along many dimensions, including cost, access, and persistence of change. Pretraining is always very expensive, and in-context learning is nearly costless. Pretraining and fine-tuning change the model in a more persistent manner, while in-context learning and filters make less durable alterations. These are but two of many such distinctions reviewed in this Essay.

Legal scholars, policymakers, and judges need to understand the differences between the four stages as they try to shape and direct what these models do. Although legal and policy interventions can (and probably will) occur during all four stages, many will best be directed at the fine-tuning stage. Fine-tuning will often represent the best balance of power, precision, and disruption among the four approaches.
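
Not from the essay itself, but a toy sketch may make the key distinction concrete: fine-tuning alters the model's stored parameters, so its effect persists across requests, while in-context learning and filtering only wrap a single request. Everything below is a plain-Python stand-in invented for illustration, not a real language model or anyone's actual API.

```python
# Toy stand-in for a generative model, invented solely to illustrate why the
# four intervention points differ in persistence. Not a real LLM.

class ToyModel:
    def __init__(self):
        # "Weights" fixed during (expensive, one-off) pretraining.
        self.weights = {"style": "neutral"}

    def generate(self, prompt: str) -> str:
        # Stand-in for next-token prediction: behavior depends on the
        # persistent weights plus whatever this one prompt contains.
        return f"[{self.weights['style']}] response to: {prompt}"

def fine_tune(model: ToyModel, new_style: str) -> None:
    # Persistent change: the weights themselves are altered, and the effect
    # survives across all later prompts.
    model.weights["style"] = new_style

def in_context(prompt: str, instructions: str) -> str:
    # Ephemeral change: only this one request is shaped; the model is untouched.
    return f"{instructions}\n\n{prompt}"

def output_filter(text: str, banned: tuple = ("confidential",)) -> str:
    # Ephemeral change: a wrapper outside the model inspects each output.
    return "[withheld]" if any(word in text for word in banned) else text

model = ToyModel()
print(output_filter(model.generate(in_context("Summarize the brief", "Answer formally."))))
fine_tune(model, "formal")                    # persists for every later call
print(model.generate("Summarize the brief"))  # now formal with no prompt scaffolding
```

On this toy picture, the essay's point that fine-tuning often offers the best balance of power, precision, and disruption corresponds to the fact that it is cheaper than retraining from scratch yet more durable than prompts or filters.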

Madison on Knowledge Commons

Michael J. Madison (U Pittsburgh Law) has posted “Knowledge Commons Past, Present, and Future” (28 Lewis & Clark L. Rev. 303 (2024)) on SSRN. Here is the abstract:

The project now known as Governing Knowledge Commons, or GKC, was launched more than 15 years ago on the intuition that skepticism of intellectual property law and information exclusivity was grounded in anecdote and ideology rather than in empiricism. Structured, systematic, empirical research on mechanisms of knowledge sharing was needed. GKC aimed to help scholars produce it. Over multiple books, case studies, and other work, the scope of GKC has expanded considerably, from innovation to governance; from invention and creativity to data, privacy, and markets; and from social dilemmas focused on things to governance strategies directed to communities and collectives. This short Article describes the origins, functions, successes, limitations, and ambitions of GKC research, aligning it with questions of law as well as with the many roles of information in 21st century society.

Pasquale & Malgieri on Gen AI and Administrative Law

Frank Pasquale (Cornell Law School; Cornell Tech) and Gianclaudio Malgieri (U Leiden Law; Free Uni Brussels) have posted “Generative AI, Explainability, and Score-Based Natural Language Processing in Benefits Administration” (J. Cross-Disciplinary Research in Computational Law (forthcoming 2024)) on SSRN. Here is the abstract:

Administrative agencies have developed computationally assisted processes to speed benefits to persons with particularly urgent and obvious claims. One proposed extension of these programs would score claims based on the words that appear in them (and the relationships between those words), identifying some sets of claims as particularly similar to known meritorious claims, without understanding the meaning of any of these legal texts. This score-based natural language processing (SBNLP) may expand the range of claims categorized as urgent and obvious, but as its complexity advances, its practitioners may not be able to offer a narratively intelligible rationale for how or why it does so. At that point, practitioners may use the new textual affordances of generative AI to try to fill this explanatory gap, offering a rationale for decision that is a plausible imitation of past, human-written explanations of judgments in cases with similar sets of words in their claims.

This article explains why such generative AI should not be used to justify SBNLP decisions in this way. Due process and other core principles of administrative justice require humanly intelligible identification of the grounds for administrative action. Given that ‘next-token prediction’ is distinct from understanding a text, generative AI cannot perform such identification reliably. Moreover, given current opacity and potential bias in leading chatbots – which are based on large language models – as well as deep ethical concerns raised by the databases they are built on, there is a strong case for excluding these automated outputs from administrative decision-making. Nevertheless, SBNLP may legitimately be established parallel or external to justification-based legal proceedings for humanitarian purposes.
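
The scoring step described above can be pictured with a minimal sketch: score a new claim by its lexical similarity to claims already known to be meritorious, with no attempt to model meaning. This is a hypothetical illustration of the general idea only; the bag-of-words representation, the example texts, and the numbers are invented, not drawn from any agency system discussed in the article.

```python
# Hypothetical sketch of word-based claim scoring: a claim is flagged as
# "urgent and obvious" if it is lexically close to known meritorious claims.
# Nothing here understands what the text means.
from collections import Counter
from math import sqrt

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented placeholder texts, not real claims.
known_meritorious = [bag_of_words("terminal diagnosis unable to work urgent benefits needed")]
new_claim = bag_of_words("urgent benefits needed terminal diagnosis cannot work")

score = max(cosine(new_claim, known) for known in known_meritorious)
print(f"similarity score: {score:.2f}")  # a high score flags the claim as urgent and obvious
```

The explanatory gap the article worries about follows directly: a score like this reports only word overlap, so any narrative rationale generated after the fact is an imitation of past explanations rather than the actual ground of the decision.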

Geslevich Packin on Paywalling Humans

Nizan Geslevich Packin (U Haifa Law; CUNY School of Business; CUNY Law) has posted “Paywalling Humans” (Theoretical Inquiries in Law, Forthcoming) on SSRN. Here is the abstract:

This Article addresses the trend of relegating human customer service to a premium service in the wake of advancing automation and AI technologies, underscoring the ethical, social, and legal challenges this shift raises. It emphasizes the need to keep human interaction accessible and affordable for all, particularly for vulnerable populations, amidst this digital shift. The convenience and efficiency of automated systems such as interactive voice response (IVR), chatbots, and virtual agents have transformed customer support, introducing significant cultural and moral challenges, notably the erosion of the personal touch and empathy vital for customer satisfaction and loyalty.

The Article explores the evolution of customer service automation and its impact on workforce dynamics, consumers, and the quality of service. It highlights the hidden costs of diminished human interaction, particularly its adverse effects on disadvantaged, elderly, and disabled groups, and illustrates the trend’s negative consequences through case studies and examples. Further, it discusses the Human-In-The-Loop concept, advocating for an approach that enhances the customer experience with automation without sacrificing human interaction. It also examines the considerations surrounding automated customer service, emphasizing the enforcement roles of agencies like the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) in upholding consumer protection laws, and the need for regulations that ensure fairness, transparency, accessibility, and consent.

In conclusion, the Article calls for technology to augment rather than replace human service, stressing the importance of clear regulations on the affordability of human interaction in customer support. It urges policymakers and businesses to ensure that automation does not marginalize those who need human assistance, advocating for equitable access to services.

Arango on A Legislative Foundation for Foundation Models

Steven Arango (George Washington U Law) has posted “A Legislative Foundation for Foundation Models” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is not some futuristic technology; it exists in everyday products like your Uber app or the Siri on your nightstand. Its development is meteoric, and foundation models are the latest advancement: a type of AI that can not only produce a range of products but also be integrated into other AI models. This AI Swiss Army knife is proving to be an incredible asset for economic development and national security. But, like other world-altering technologies, foundation models have a pernicious side. Their flexibility offers adversaries, both state and non-state actors, the ability to level the global playing field and shift the global order in terms of defensive capabilities. As the conflict in Ukraine has shown, AI is not the future of war; it is the present. Four-hundred-dollar, AI-fueled drones have been used to disable and, at times, destroy million-dollar war machines.

To address this Jekyll and Hyde potential of foundation models, President Joe Biden issued Executive Order (EO) 14110. But this EO is the starting gun, not the end of the race. Indeed, EO 14110 itself acknowledges that more understanding and information are needed about this nascent technology. To properly legislate foundation models, Congress will need to act. Legislation, however, must be measured and thoughtful with a burgeoning technology like foundation models. Otherwise, innovation will be stamped out, giving way to inflexible, misguided laws.

This paper offers a path forward to balance the scale between innovation and legislation. It first outlines President Biden’s EO, with a focus on Section 4.6. It then turns to the underlying technology of foundation models, explaining how labeling foundation models as “open” versus “closed” creates a false dichotomy. Next, the paper compares proposed U.S. legislation to the EU’s recent approach to legislating foundation models. National security implications of AI are then outlined, with real-world examples of AI usage from the current conflict in Ukraine and across the globe. The paper concludes with recommendations on how to legislate foundation models. By balancing legislation with innovation, this paper develops a novel approach to regulating foundation models, the “fastest-growing consumer technology in U.S. history.”

Soh on NLP in the Legal World

Jerrold Soh (Singapore Management University Law) has posted “NLP in the Legal World” on SSRN. Here is the abstract:

This talk situates the rising field of natural legal language processing (NLLP) in the context of legal scholarship and emerging trends in legal AI practice and regulation. It centrally suggests that, as NLP’s domain of competence expands, the field will have to undergo, and is in several ways already undergoing, a fundamental transformation we might refer to as “growing up.” In particular, to succeed in the legal world, NLP technology must contend with three key aspects of adulthood: new attitudes, new consequences, and new responsibilities. Lawyers have gone from complete AI skepticism to actively exploring use cases. Encroaching into fields like medicine, law, and finance means technologists cannot avoid difficult questions about protecting life, liberty, and money. An entirely new AI rulebook is currently being written by regulators and courts around the world. Against this backdrop, the talk examines how NLLP relates to existing inquiries in computational law, AI and Law, and computational/empirical legal studies, and identifies opportunities for inter-field discourse. It concludes by identifying the unique role that NLLP researchers can play in the increasingly controversial (and, seemingly, decreasingly scientific) global debate over the use and regulation of large language models.

Arbel on Time & Contract Interpretation: Lessons from Machine Learning

Yonathan A. Arbel (U Alabama Law) has posted “Time & Contract Interpretation: Lessons from Machine Learning” (in Research Handbook on Law & Time, F. Fagan & S. Levmore eds. 2025) on SSRN. Here is the abstract:

Contract interpretation is the task of estimating what parties, distant in time, meant to say or would have said about a specific contingency. For at least a century, scholars and courts have debated how best to carry out this task.

Conceiving of the interpretative task as one of prediction, I suggest that there are valuable lessons to be drawn from a field devoted to building prediction models: machine learning. From this viewpoint, this chapter makes four contributions to the study of contract interpretation. First, it defends the view of interpretation-as-prediction against the common linguistic view, which perceives interpretation as establishing meaning in the philosophy-of-language sense; as applied to contract interpretation, such arguments often employ motte-and-bailey argumentation. Second, it explains a puzzling aspect of the debate about interpretative methods: both textualists and contextualists insist that their method is more accurate, and they can do so because they conflate two senses of the term, precision and accuracy. Third, it brings the hard problem of the bias-variance tradeoff to the choice of interpretative methods. Finally, and most speculatively, the chapter distinguishes between interpretation and simulation, arguing that the latter is far more important but far less understood in legal theory. With advances in modeling techniques, the idea of simulation demands serious reconsideration.
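
The bias-variance tradeoff the chapter borrows from machine learning can be seen in a few lines of numpy: an overly rigid model misses real structure (bias), while an overly flexible one chases noise in the observed data (variance). How textualism and contextualism map onto that spectrum is the chapter's argument, not something the sketch settles; the sine-curve "true meaning" and all numbers below are invented purely for illustration.

```python
# Toy bias-variance demonstration: fit polynomials of increasing flexibility
# to noisy observations of a known underlying signal.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
true_signal = np.sin(np.pi * x)                         # the signal we wish we could observe
y = true_signal + rng.normal(scale=0.3, size=x.shape)   # the noisy evidence we actually have

for degree in (1, 4, 12):
    coeffs = np.polyfit(x, y, degree)
    fitted = np.polyval(coeffs, x)
    fit_to_evidence = np.mean((fitted - y) ** 2)           # how closely we match the noisy data
    fit_to_signal = np.mean((fitted - true_signal) ** 2)   # how well we recover the signal
    print(f"degree {degree:2d}: evidence error {fit_to_evidence:.3f}, signal error {fit_to_signal:.3f}")
```

Typically the most flexible fit hugs the noisy evidence most closely yet recovers the underlying signal worse than an intermediate fit, while the rigid fit misses both; that U-shape is the tradeoff the chapter brings to the choice among interpretative methods.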

Sunstein on AI, Reducing Internalities and Externalities

Cass R. Sunstein (Harvard Law) has posted “AI, Reducing Internalities and Externalities” on SSRN. Here is the abstract:

Many consumers suffer from inadequate information and behavioral biases, which can produce internalities, understood as costs that people impose on their future selves. In these circumstances, “Choice Engines,” powered by Artificial Intelligence (AI), might produce significant savings in terms of money, health, safety, or time. Consider, for example, choices among motor vehicles or appliances. AI-powered Choice Engines might also take account of externalities, and they might nudge or require consumers to do so as well. Different consumers care about different things, of course, which is a reason to insist on a high degree of freedom of choice, even in the presence of internalities and (to some extent) externalities. But it is important to emphasize that AI might be enlisted by insufficiently informed or self-interested actors, who might exploit inadequate information or behavioral biases, and thus reduce consumer welfare. AI might increase internalities or externalities. It is also important to emphasize that AI might show behavioral biases, perhaps the same ones that human beings are known to show, perhaps others that have not been named yet, or perhaps new ones, not shown by human beings, that cannot be anticipated.
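
For a concrete, entirely invented picture of the motor-vehicle example, a Choice Engine of the kind the abstract imagines might rank options by estimated lifetime cost rather than sticker price, surfacing fuel costs that buyers tend to underweight (an internality) and, optionally, a social cost of emissions (an externality). Every figure in the sketch below is hypothetical.

```python
# Hypothetical "Choice Engine" sketch: rank vehicles by lifetime cost instead
# of sticker price, with an optional social-cost term. All numbers invented.

VEHICLES = {
    # name: (sticker price, annual fuel cost, annual social cost of emissions)
    "Gas SUV":      (32_000, 2_400, 600),
    "Hybrid sedan": (30_000, 1_200, 300),
    "EV hatchback": (34_000,   500, 100),
}

def lifetime_cost(sticker, fuel, emissions, years=10, include_externalities=False):
    cost = sticker + years * fuel          # private cost over the ownership period
    if include_externalities:
        cost += years * emissions          # add the social cost if the user opts in
    return cost

by_sticker = min(VEHICLES, key=lambda n: VEHICLES[n][0])
by_lifetime = min(VEHICLES, key=lambda n: lifetime_cost(*VEHICLES[n]))
by_social = min(VEHICLES, key=lambda n: lifetime_cost(*VEHICLES[n], include_externalities=True))

print("cheapest by sticker price:       ", by_sticker)   # what a myopic buyer compares
print("cheapest over a decade of use:   ", by_lifetime)  # the internality made visible
print("cheapest including social costs: ", by_social)    # the externality made visible
```

Whether an engine like this merely discloses the fuller cost, nudges toward it, or forecloses options entirely is the freedom-of-choice question the abstract flags, and the same architecture could just as easily be tuned by a self-interested designer to exploit the biases it claims to correct.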

Davis on Legal Writing Faculty and Gen AI Scholarship

Kirsten K. Davis (Stetson Law) has posted “A New Parlor is Open: Legal Writing Faculty Must Develop Scholarship on Generative AI and Legal Writing” (Stetson Law Review Forum 2024) on SSRN. Here is the abstract:

Generative artificial intelligence likely represents a paradigm shift in legal communication teaching, learning, and practice. What we know (so far) about generative AI suggests that law school legal writing courses will need to teach generative AI skills to be used as part of a hybrid human-generative AI legal writing process. Accordingly, legal writing faculty will need to understand how generative AI works, its implications for legal writing practices, and how to teach legal writers the knowledge and skills needed to use generative AI ethically and effectively in their work.

As a community of scholars, legal writing faculty should lead the inquiry into the connections between generative AI and legal writing products, processes, and practices. This is an exciting time; there are many unanswered questions to explore about the relationships between human writers and machine writing tools.