Obiefuna on The Coming Age of Abundance: An Epic Battle Between a Visionary AI Future and the Past of Human Acquisitional Systems

Peter Obiefuna (Arizen Corporation) has posted “The Coming Age of Abundance: An Epic Battle Between a Visionary AI Future and the Past of Human Acquisitional Systems” on SSRN. Here is the abstract:

This paper explores the paradox of technological abundance in a world still governed by systems of scarcity. While AI and robotics promise a post-labor future where material needs are easily met, historical patterns of resource hoarding, exclusion, and structural inequality suggest that abundance alone will not guarantee justice. Using land distribution and wealth concentration as analogues, the paper argues that systemic and cultural forces must evolve alongside technological progress. Without an ethical re-imagining of access, ownership, and value, the benefits of automation may replicate, and even deepen, the inequities of the past.

Grimmelmann et al. on Generative Misinterpretation

James Grimmelmann (Cornell Law) et al. have posted “Generative Misinterpretation” (Harvard Journal on Legislation, Vol. 63.1 (forthcoming)) on SSRN. Here is the abstract:

In a series of provocative experiments, a loose group of scholars, lawyers, and judges has endorsed generative interpretation: asking large language models (LLMs) like ChatGPT and Claude to resolve interpretive issues from actual cases. With varying degrees of confidence, they argue that LLMs are (or will soon be) able to assist, or even replace, judges in performing interpretive tasks like determining the meaning of a term in a contract or statute. A few go even further and argue for using LLMs to decide entire cases and to generate opinions supporting those decisions.

We respectfully dissent. In this Article, we show that LLMs are not yet fit for use in judicial chambers. Generative interpretation, like all empirical methods, must bridge two gaps to be useful and legitimate. The first is a reliability gap: are its methods consistent and reproducible enough to be trusted in high-stakes, real-world settings? Unfortunately, as we show, LLM proponents’ experimental results are brittle and frequently arbitrary. The second is an epistemic gap: do these methods measure what they purport to? Here, LLM proponents have pointed to (1) LLMs’ training processes on large datasets, (2) empirical measures of LLM outputs, (3) the rhetorical persuasiveness of those outputs, and (4) the assumed predictability of algorithmic methods. We show, however, that all of these justifications rest on unstated and faulty premises about the nature of LLMs and the nature of judging.

The superficial fluency of LLM-generated text conceals fundamental gaps between what these models are currently capable of and what legal interpretation requires to be methodologically and socially legitimate. Put simply, any human or computer can put words on a page, but it takes something more to turn those words into a legitimate act of legal interpretation. LLM proponents do not yet have a plausible story of what that “something more” comprises.

Price II & Freilich on Data as Policy

W. Nicholson Price II (U Michigan Law) and Janet Freilich (Boston U Law) have posted “Data as Policy” (66 Boston College Law Review (forthcoming 2025)) on SSRN. Here is the abstract:

A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool for both positive regulation and deregulation in contexts ranging from healthcare discrimination to automated legal practice to smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility of autocratic abuse.

We build on the broad literature on regulation by introducing data as a regulatory tool. We also join, and diverge from, the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox.

Merane & Stremitzer on Automated Private Enforcement: Evidence from the Google Fonts Case

Jakob Merane (ETH Zurich) and Alexander Stremitzer (ETH Zurich) have posted “Automated Private Enforcement: Evidence from the Google Fonts Case” on SSRN. Here is the abstract:

Plaintiffs often have little incentive to detect and enforce small claims, which reduces defendants’ incentives to comply. With advances in artificial intelligence, can automated private enforcement increase compliance? The Google Fonts Case offers a unique opportunity to explore this question. After a German court ruled that the dynamic embedding of Google Fonts violated the GDPR, an entrepreneurial lawyer in Austria used automated tools to detect violations and threaten website operators with lawsuits. Drawing on a comprehensive sample of 1,517,429 websites across 32 European countries over a two-year period, we use a difference-in-differences approach to show a significant compliance effect in Austria. Within three months, non-compliance dropped by 22.7 percentage points, a nearly 50% reduction. These findings suggest that automated private enforcement can be highly disruptive, pressuring policymakers to recalibrate legal rules.
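
The abstract does not reproduce the estimating equation, but a standard two-way difference-in-differences specification for this design might look as follows (a sketch with illustrative notation, not the authors'):

    y_{it} = \alpha_i + \lambda_t + \delta\,(\mathrm{AT}_i \times \mathrm{Post}_t) + \varepsilon_{it}

where y_{it} indicates non-compliance of website i in period t, AT_i marks Austrian websites, Post_t marks the period after the automated enforcement campaign began, and the coefficient of interest \delta captures the compliance effect (the 22.7 percentage-point drop reported above).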

Delacroix on Transitional Conversational Spaces? LLMs and the Future of Collective Moral Perception

Sylvie Delacroix (King’s College London) has posted “Transitional Conversational Spaces? LLMs and the Future of Collective Moral Perception” on SSRN. Here is the abstract:

As large language models become regular interlocutors, they influence the conversational infrastructure through which communities collectively interpret their world. This mediation notably impacts moral understanding, which mostly develops through shared dialogic practices rather than abstract theorizing. Within these practices, ‘sense-making conversations’, wherein participants navigate the liminal space between felt ethical disquiet and its eventual conceptual articulation, function as a crucial yet under-theorized infrastructure for ethical development.

While routine exchanges treat uncertainty as a deficit to eliminate, these sense-making conversations hinge on maintaining productive uncertainty long enough for new perceptions to emerge through patient attention. The systematic presence of LLMs as conversational partners stands to reshape how future generations engage with such productive uncertainty, potentially transforming the mechanisms through which communities recognize emerging moral challenges. Rather than treating technological participation as merely instrumental to system optimization, we argue for a substantive reconceptualization of the relationship between technological development and democratic practice, exploring how LLMs might constitute novel, transitional conversational spaces that serve democratic ends.

Martínez on Traditional and Computational Canons

Eric Martínez (U Chicago Law) has posted “Traditional and Computational Canons” (Harvard Journal of Law & Technology, Vol. 39 (forthcoming 2026)) on SSRN. Here is the abstract:

As part of the rise of modern textualism, dictionaries and linguistic canons have become a ubiquitous part of legal interpretation. One longstanding question is whether judges successfully use these tools to arrive at the plain meaning of a legal text, or merely as window-dressing for their preferred policy outcome. The practical significance of this question extends across all major doctrinal areas, and with the Supreme Court’s overturning of Chevron deference, its importance is only set to grow, as courts are now instructed to use every tool at their disposal to resolve ambiguity when interpreting a law. This Article is the first to show, contrary to longstanding academic speculation, that courts by and large align with linguistic consensus—as judged by both ordinary and expert readers—when invoking dictionaries and linguistic canons to uncover the plain meaning of a term at issue in a legal dispute.

After documenting the rise of plain meaning, linguistic canons, and dictionaries in a sample of over 2 million published opinions across the federal and state judiciaries, the Article presents the results of an experiment examining how lawyers (n=2,373) and non-lawyers (n=4,533) interpret the words at issue in 180 real-world plain-meaning cases. The experiment revealed that lawyers and laypeople tended to strongly converge on one interpretation over another, even in cases where there appeared to be two equally applicable canons leading to opposite results, and that this interpretation coincided with that of the court in a supermajority of cases. These findings suggest that courts use canons and dictionaries not merely as a smokescreen but as part of a good-faith (and largely successful) attempt to uncover the consensus meaning of a legal text.

With the advent of large language models purportedly equipped with legal and linguistic competence, a second question concerns whether novel computational tools might offer a useful supplement to judges’ use of traditional tools to determine the best reading of a legal text. Prompting state-of-the-art AI models such as GPT-4o and o1 on the aforementioned materials, this Article is the first to show that their predictions of linguistic consensus reliably match, though do not exceed, those of human judges invoking canons and dictionaries in real-world cases, even when controlling for possible data contamination and potential knowledge of prior cases. These findings suggest that some computational tools may offer an efficient, if not more effective, supplement to traditional tools in uncovering plain meaning.
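
The Article's prompts are not given in the abstract, but a minimal sketch of how such an experiment might pose a disputed term to a model could look like the following Python, using the OpenAI client library; the prompt wording, function name, and case fields here are hypothetical rather than the Article's materials:

    # Minimal sketch: ask a model which of two candidate readings of a term
    # ordinary readers would favor. Prompt wording and fields are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def predict_consensus(term: str, provision: str, reading_a: str, reading_b: str) -> str:
        """Return 'A' or 'B' for the model's preferred reading of `term`."""
        prompt = (
            f"In the provision below, which is the more natural reading of '{term}'?\n\n"
            f"Provision: {provision}\n\n"
            f"(A) {reading_a}\n(B) {reading_b}\n\n"
            "Answer with 'A' or 'B' only."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce run-to-run variance for reproducibility
        )
        return resp.choices[0].message.content.strip()

Pinning the temperature at zero is one way to address the reproducibility worries that the Grimmelmann et al. piece above raises about experiments of this kind.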

Noguer i Alonso on AGENTS: A Historical Perspective 1948-2024

Miquel Noguer i Alonso (Artificial Intelligence Finance Institute) has posted “AGENTS: A Historical Perspective 1948-2024” on SSRN. Here is the abstract:

This paper provides an in-depth analysis of three fundamental types of agents in computational systems: Agent-Based Models (ABM), Reinforcement Learning (RL) agents, and Large Language Model (LLM) agents. We explore their theoretical foundations, mathematical formulations, and practical applications while examining their historical development. Through detailed mathematical analysis and case studies, we demonstrate how these agent paradigms can be integrated to create hybrid systems capable of addressing complex real-world challenges. Special attention is given to recent developments in multi-agent systems, emergence phenomena, and the convergence of different agent architectures. This work contributes to the growing body of research on intelligent agents by providing a unified framework for understanding and comparing different agent types, highlighting their strengths and limitations.
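
The paper's mathematical formulations are not reproduced in the abstract, but as a flavor of the RL paradigm it surveys, here is a minimal tabular Q-learning agent in Python (the interface and hyperparameters are illustrative, not drawn from the paper):

    # Minimal tabular Q-learning agent; environment interface and
    # hyperparameters are illustrative, not taken from the paper.
    import random
    from collections import defaultdict

    class QLearningAgent:
        def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
            self.q = defaultdict(float)  # (state, action) -> value estimate
            self.actions = list(actions)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def act(self, state):
            # Epsilon-greedy: explore occasionally, otherwise exploit.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])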

Wilde et al. on Recommendations on the Use of Synthetic Data to Train AI Models

Philippe De Wilde (U Kent) et al. have posted “Recommendations on the Use of Synthetic Data to Train AI Models” (De Wilde, P., Arora, P., Buarque, F., Chin, Y., Thinyane, M., Stinckwich, S., Fournier-Tombs, E. & Marwala, T., Recommendations on the Use of Synthetic Data to Train AI Models (Tokyo: United Nations University, 2024)) on SSRN. Here is the abstract:

Using synthetic or artificially generated data to train Artificial Intelligence (AI) algorithms is a burgeoning practice with significant potential to affect society directly. It can address data scarcity, privacy, and bias issues, but it raises concerns about data quality, security, and ethical implications. While some systems are trained only on synthetic data, in most cases synthetic data are used together with real-world data to train AI models. Our recommendations in this document apply to any system where some synthetic data are used. Synthetic data can augment existing data, enabling more efficient and inclusive practices and policies. However, we cannot assume synthetic data to be automatically better than, or even equivalent to, data from the physical world. There are many risks to using synthetic data, including cybersecurity risks, bias propagation, and increased model error. This document sets out recommendations for the responsible use of synthetic data in AI training.

Toparlak on Between a Subject and an Object: Addressing the Social Valence of Robots

Rüya Tuna Toparlak (U Lucerne) has posted “Between a Subject and an Object: Addressing the Social Valence of Robots” on SSRN. Here is the abstract:

This article concentrates on the social valence of robots as a factor in shaping and facilitating human-robot collaboration. Chapter I begins by establishing the properties of social robots and what constitutes social valence. The paper describes the emerging association built on human-robot collaboration, then turns to the dangers of manipulation and how manipulation can affect liability considerations. The social valence of robots causes them to push the boundaries of the traditional object-subject paradigm; the tensions this causes are examined in Chapter II. Discussion surrounding the legal subjectivity of robots has so far differentiated these technologies mainly by autonomy, function, and sophistication. This paper concentrates instead on the appearance of the robot and how it is experienced by the human: humans do not interact in the same way with different robots, and the paper holds that this should be an important consideration in how we approach regulation. For this purpose, Chapter III examines the draft AI Act of the EU for provisions that might be relevant to or affected by the social valence of robots.

Kaal on The Future of Law – Dynamic Web3 Governance

Wulf A. Kaal (U St. Thomas Law (Minnesota)) has posted “The Future of Law – Dynamic Web3 Governance” on SSRN. Here is the abstract:

This article proposes a novel web3 governance model using Weighted Directed Acyclic Graphs (WDAGs) and validation pools with reputation staking, combined with a federated communications protocol, to address the negative externalities of continuous legal growth. The traditional methods of legal garbage removal, such as sunset provisions and periodic legal reviews, are hindered by inefficiencies, political manipulation, and resource demands. In contrast, the WDAG system enables a dynamic, self-enforcing, and community-driven approach to legal remediation, ensuring the continuous relevance, efficiency, and adaptability of legal frameworks in a decentralized environment. This model utilizes real-time data analysis and community input to organically adjust legal norms, minimizing political resistance and unintended consequences while preserving legal history and promoting transparency.

The integration of the WDAG framework into web3 governance aligns with the needs of a rapidly evolving technological and social landscape. By facilitating a more responsive and equitable legal system, the WDAG model supports economic growth, fosters democratic engagement, and ensures that legal rules and regulations remain aligned with contemporary societal values. This approach represents a significant advancement in legal governance, demonstrating how emerging technologies can create sustainable and adaptive legal frameworks that better reflect community consensus and adapt to technological and societal changes.
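
The abstract does not specify the underlying data structures, but one way to picture the core of such a system is a weighted DAG of norms plus a reputation-weighted validation pool. The Python sketch below is an illustrative guess at the mechanics, not Kaal's formal model:

    # Illustrative sketch of a weighted DAG of legal norms with a
    # reputation-weighted validation pool. Names and mechanics are
    # hypothetical, not the article's formal model.
    class NormDAG:
        def __init__(self):
            self.edges = {}  # norm -> {dependent norm: edge weight}

        def add_norm(self, norm):
            self.edges.setdefault(norm, {})

        def add_dependency(self, norm, dependent, weight):
            self.add_norm(norm)
            self.add_norm(dependent)
            # Reject edges that would create a cycle, preserving acyclicity.
            if self._reaches(dependent, norm):
                raise ValueError("edge would create a cycle")
            self.edges[norm][dependent] = weight

        def _reaches(self, src, dst):
            # Depth-first search: is dst reachable from src?
            stack, seen = [src], set()
            while stack:
                node = stack.pop()
                if node == dst:
                    return True
                if node not in seen:
                    seen.add(node)
                    stack.extend(self.edges.get(node, {}))
            return False

    def validate(votes):
        # votes: iterable of (reputation_stake, approve) pairs; a proposed
        # change passes if stake-weighted approval exceeds half the total.
        votes = list(votes)
        total = sum(stake for stake, _ in votes)
        approve = sum(stake for stake, ok in votes if ok)
        return total > 0 and approve > total / 2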