Schrepel on Decoding the AI Act: A Critical Guide for Competition Experts

Thibault Schrepel (Vrije Universiteit Amsterdam; Stanford Codex; Sorbonne; Sciences Po) has posted “Decoding the AI Act: A Critical Guide for Competition Experts” on SSRN. Here is the abstract:

The AI Act is poised to become a pillar of modern competition law. The present article seeks to provide competition practitioners with a practical yet critical guide to its key provisions. It concludes with suggestions for making the AI Act more competition-friendly.

Smith on Generative AI in the Attorney-Client Relationship

Michael L. Smith (St. Mary’s U School of Law) has posted “Generative AI in the Attorney-Client Relationship: An Exercise in Critical Revision and Client Management” on SSRN. Here is the abstract:

Discussions of generative AI in legal practice and education often assert that this technology will lead to a sea change in legal writing, research, and revision. While some of the more breathless proclamations deserve skepticism, there’s little doubt that this technology may generate new forms of headaches for those in the legal field – particularly in the hands of clients or opposing counsel who attempt to use this technology to save the time, money, and effort required for complex legal tasks.

To that end, this essay proposes an exercise template for law students which illustrates how generative AI technology may be misused or abused. Presenting students with an AI-generated motion and asking them to reason through a scenario in which a hypothetical client demands that they file the motion tests a number of skills. First, students must critically read and revise the motion – noting shortcomings in AI-generated legal writing and identifying the confident mistakes that permeate the output. Second, and perhaps even more importantly, students must think through how to communicate these mistakes to a stubborn client, requiring them to consider client relationships and motivations and to communicate complex information in a simple, concise, and diplomatic manner. Doing so takes the exercise beyond a practical test of doctrine and legal writing, and engages students with deeper questions of empathizing with client needs, developing their professional identity, and preparing for a world in which generative AI will not only be used, but also abused.

Keats Citron on A More Perfect Privacy

Danielle Keats Citron (U Virginia Law) has posted “A More Perfect Privacy” (Boston U Law Review, Forthcoming) on SSRN. Here is the abstract:

Fifty years ago, federal and state lawmakers called for the regulation of a criminal justice “databank” connecting federal, state, and local agencies. There was bipartisan concern that the system imperiled constitutional commitments and people’s crucial life opportunities, including jobs, education, housing, and licenses. Bipartisan congressional concerns of the 1970s should be cause for re-invigoration, not resignation. Recounting the insights of members of the 93rd and 94th Congresses should embolden us. Their concerns clarify the headwinds that reformers face. Then, as now, powerful interests want us to think that privacy and public safety are incompatible. They want us to view diminished expectations of privacy as acceptable, even valuable. Revisiting this history should remind the public that totalizing surveillance is neither acceptable nor desirable. Privacy can and should be ours.

Van Loo on The Public Stakes of Consumer Law

Rory Van Loo (Boston U Law; Yale ISP) has posted “The Public Stakes of Consumer Law: The Environment, the Economy, Health, Disinformation, and Beyond” (107 Minnesota Law Review 2039 (2023)) on SSRN. Here is the abstract:

Consumer law has a conflicted and narrow identity. It is most immediately a form of business law, governing market transactions between people and companies. Accordingly, the microeconomic analysis of markets is the dominant influence on consumer law. On the other hand, consumer law is often described as, and assumed to be about, protecting the consumer, which implicates small instances of individual injustice. Both of these lenses are valuable but reflect limited awareness of the field’s importance among lawmakers, scholars, and the public. We are all consumers. Exchanges between consumers and corporations contribute to global warming when people buy energy-inefficient household appliances; drive public health epidemics, like obesity, due to harmful food purchases; and widen wealth gaps when low-income or minority households are subjected to predatory sales practices. Yet despite these stakes, consumer law has struggled to gain intellectual or popular appeal, in contrast to the explosion in antitrust interest that has resulted from the growing interest in holding large technology companies accountable. Unlike workers, veterans, and businesses, consumers have neither a department at the federal level nor a committee focused on them in either the House or the Senate. Many law schools do not even offer a consumer law course. This Article reveals the risks of consumer law’s invisibility and calls for an institutional and conceptual reconstruction of the field. Consumer law always mattered, but recent shifts in legal institutions, markets, and technologies have further elevated its importance. To reflect that societal importance, and to return the economic analysis to its roots, a public priority principle should serve as consumer law’s analytic lodestone. Legal institutions can also help by shifting from marginalizing the field to featuring it. At a minimum, it is time to recognize that consumer law has a meaningful role to play in the struggles to strengthen democracy, preserve the environment, foster health, and promote prosperity.

Tobia on Algorithmic Legal Interpretation

Kevin Tobia (Georgetown U Law Center) has posted “Algorithmic Legal Interpretation” (University of Chicago Law Review Online (forthcoming 2024)) on SSRN. Here is the abstract:

Legal interpretation has taken an empirical turn, with scholars and judges debating the use of corpus linguistics, surveys, and experiments in interpretation. Professor Choi’s Measuring Clarity in Legal Text offers a new proposal: interpretation by artificial intelligence. The Article impressively and thoughtfully considers contributions from word embeddings, that is, representations of naturally occurring language in a multi-dimensional vector space, produced by machine-learning algorithms.

The Article expresses some caution and some optimism about its proposal. This Response endorses the caution: Words’ proximity in vector space (measured by cosine similarity) is not conclusive of a legal text’s clarity or ambiguity, and judges should not rely on the outputs of such algorithmic tools to settle interpretation. Nor should judges look to the outputs of ChatGPT or other LLMs as answers to legal interpretation. Nevertheless, the Article’s new empirical approach usefully illuminates central assumptions and tensions in legal interpretive theories. In sum, Measuring Clarity in Legal Text is an important contribution, opening new, timely, and rich debates about artificial intelligence’s contributions to legal interpretation.
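For readers unfamiliar with the cosine-similarity measure the Response mentions, the following is a minimal sketch. The words and three-dimensional vectors are invented for illustration only (real word embeddings have hundreds of dimensions and are learned from large corpora); nothing here is drawn from Professor Choi’s Article.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: values near 1.0 mean the
    vectors point in nearly the same direction; values near 0.0 mean they
    are roughly orthogonal (little similarity)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings: "vehicle" and "car" are given similar
# vectors; "banana" is not.
embeddings = {
    "vehicle": [0.9, 0.8, 0.1],
    "car":     [0.8, 0.9, 0.2],
    "banana":  [0.1, 0.0, 0.9],
}

print(cosine_similarity(embeddings["vehicle"], embeddings["car"]))     # high
print(cosine_similarity(embeddings["vehicle"], embeddings["banana"]))  # low
```

The point of the Response stands out clearly in the sketch: the number measures geometric proximity between learned representations, not whether a legal text is clear or ambiguous as a matter of law.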

Sarid & Ben-Zvi on Machine Learning and the Re-Enchantment of the Administrative State

Eden Sarid (U Essex Law) and Omri Ben-Zvi (Hebrew U Law) have posted “Machine Learning and the Re-Enchantment of the Administrative State” (Modern Law Review) on SSRN. Here is the abstract:

Machine learning algorithms present substantial promise for more effective decision-making by administrative agencies. However, some of these algorithms are inscrutable, that is, they produce predictions that humans cannot understand or explain. This trait is in tension with the emphasis on reason-giving in administrative law. The article explores this tension, advancing two interrelated arguments. First, providing adequate reasons is a significant facet of respecting individuals’ agency. Incorporating inscrutable algorithmic predictions into administrative decision-making compromises this normative ideal. Second, as a long-term concern, the use of inscrutable algorithms by administrative agencies may generate systemic effects by gradually reducing the realm of the humanly explainable in public life, reversing the process Max Weber termed “disenchantment.” As a result, the use of inscrutable machine learning algorithms might trigger a special kind of re-enchantment, making us comprehend less rather than more of shared human experience, and consequently altering the way we understand the administrative state and experience public life.