Fagan on The Un-Modeled World: Law and the Limits of Machine Learning

Frank Fagan (South Texas College of Law; EDHEC Augmented Law Institute) has posted “The Un-Modeled World: Law and the Limits of Machine Learning” (MIT Computational Law Report, Vol. 4 (Forthcoming 2022)) on SSRN. Here is the abstract:

There is today a pervasive concern that humans will not be able to keep up with accelerating technological progress in law and will become objects of sheer manipulation. Those who believe that human objectification is on the horizon offer solutions that require humans to take control, mostly by means of self-awareness and development of will. Among others, these strategies are present in Heidegger, Marcuse, and Habermas, as discussed in the paper. But these solutions are not the only way. Technology itself offers a solution on its own terms. Machines can only learn if they can observe patterns, and those patterns must occur in sufficiently stable environments. Without detectable regularities and contextual invariance, machines remain prone to error. Yet humans innovate and things change. This means that innovation operates as a self-corrective—a built-in feature that limits the ability of technology to fully objectify human life and law error-free. Fears of complete technological ascendance in law and elsewhere are therefore exaggerated, though interesting intermediate states are likely to obtain. Progress will proceed apace in closed legal domains, but models will require continual adaptation and updating in legal domains where human innovation and openness prevail.

Murtazashvili et al. on Blockchain Networks as Knowledge Commons

Ilia Murtazashvili (U Pitt – GSPIA), Jennifer Brick Murtazashvili (same), Martin B. H. Weiss (U Pitt – School of Computing and Information), and Michael J. Madison (U Pitt Law) have posted “Blockchain Networks as Knowledge Commons” (International Journal of the Commons, Vol. 16, p. 108, 2022) on SSRN. Here is the abstract:

Researchers interested in blockchains are increasingly attuned to questions of governance, including how blockchains relate to government, the ways blockchains are governed, and ways blockchains can improve prospects for successful self-governance. Our paper joins this research by exploring the implications of the Governing Knowledge Commons (GKC) framework for the governance of blockchains. Our novel contributions are: making the case that blockchain networks represent knowledge commons governance, in the sense that they rely on collectively-managed technologies to pool and manage distributed information; illustrating the usefulness and novelty of the GKC methodology with an empirical case study of the evolution of Bitcoin; and laying the foundation for a research program using the GKC approach.

Kaminski on Technological ‘Disruption’ of the Law’s Imagined Scene

Margot E. Kaminski (U Colorado Law; Yale ISP; U Colorado – Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Technological ‘Disruption’ of the Law’s Imagined Scene: Some Lessons from Lex Informatica” (Berkeley Technology Law Journal, Vol. 36, 2022) on SSRN. Here is the abstract:

Joel Reidenberg in his 1998 Article Lex Informatica observed that technology can be a distinct regulatory force in its own right and claimed that law would arise in response to human needs. Today, law and technology scholarship continues to ask: does technology ever disrupt the law? This Article articulates one particular kind of “legal disruption”: how technology (or really, the social use of technology) can alter the imagined setting around which policy conversations take place—what Jack Balkin and Reva Siegel call the “imagined regulatory scene.” Sociotechnical change can alter the imagined regulatory scene’s architecture, upsetting a policy balance and undermining a particular regulation or regime’s goals. That is, sociotechnical change sometimes disturbs the imagined paradigmatic scenario not by departing from it entirely but by constraining, enabling, or mediating actors’ behavior that we want the law to constrain or protect. This Article identifies and traces this now common move in recent law and technology literature, drawing on Reidenberg’s influential and prescient work.

Desai & Lemley on Scarcity, Regulation, and the Abundance Society

Deven R. Desai (Georgia Institute of Technology – Scheller College of Business) and Mark A. Lemley (Stanford Law School) have posted “Scarcity, Regulation, and the Abundance Society” on SSRN. Here is the abstract:

New technologies continue to democratize, decentralize, and disrupt production, offering the possibility that scarcity will be a thing of the past for many industries. We call these technologies of abundance. But our economy and our legal institutions are based on scarcity.

Abundance lowers costs. When that happens, the elimination of scarcity changes the economics of how goods and services are produced and distributed. This doesn’t just follow a normal demand curve pattern – consumption increases as price declines. Rather, special things happen when costs approach zero.

Digitization and its effects on the production, organization, and distribution of information provide early examples of changes to markets and industries. Copyright industries went through upheaval and demands for new protections. But they are not alone. New technologies such as 3D printing, CRISPR, artificial intelligence, synthetic biology, and more are democratizing, decentralizing, and disrupting the production of food and alcohol, biotechnologies, and more, and even the production of innovation itself, opening the prospect of an abundance society in which people can print or otherwise obtain the things they want, including living organisms, on demand.

Abundance changes the social as well as economic context of markets. How will markets and legal institutions based on scarcity react when it is gone? Will we try to replicate that scarcity by imposing legal rules, as IP law does? Will the abundance of some things just create new forms of scarcity in others – the raw materials that feed 3D printers, for instance, or the electricity needed to feed AIs and cryptocurrency? Will we come up with new forms of artificial scarcity, as brands and non-fungible tokens (NFTs) do? Or will we reorder our economics and our society to focus on things other than scarcity? If so, what will that look like? And how will abundance affect the distribution of resources in society? Will we reverse the long-standing trend towards greater income inequality? Or will society find new ways to distinguish the haves from the have-nots?

Society already has examples of each type of response. The copyright industries survived the end of scarcity, and indeed thrived, not by turning to the law but by changing business practices, leveraging the scarcity inherent to live performances and using streaming technology to remove the market structures that fed unauthorized copying, and by reorganizing around distribution networks rather than content creators. Newsgathering, reporting, and distribution face challenges flowing from democratized, decentralized, and disrupted production. Luxury brands and NFTs offer examples of artificial scarcity created to reinforce a sort of modern sumptuary code. And we have seen effective, decentralized production based on economics of abundance in examples ranging from open-source software to Wikipedia.

In this introductory essay, we survey the potential futures of a post-scarcity society and offer some thoughts as to more (and less) socially productive ways to respond to the death of scarcity.

Shope on NGO Engagement in the Age of Artificial Intelligence

Mark Shope (National Yang Ming Chiao Tung University; Indiana University Robert H. McKinney School of Law) has posted “NGO Engagement in the Age of Artificial Intelligence” (Buffalo Human Rights Law Review, Vol. 28, pp. 119-158, 2022) on SSRN. Here is the abstract:

From AI and human rights focused NGOs to thematic NGOs whose subjects are impacted by AI, the AI and human rights discourse within NGOs has moved from simply keeping an eye on AI to being an integral part of NGO work. At the same time, the issue of AI and human rights is being addressed by governments in their policymaking and rulemaking to, for example, protect human rights and remain compliant with their responsibilities under international human rights instruments. When governments are reporting to United Nations treaty bodies as required under international human rights instruments, and the reports and communications include topics of artificial intelligence, how and to what extent are NGOs engaging in this dialogue? This article explores how artificial intelligence can impact rights under the nine core human rights instruments and how NGOs should monitor States parties under these instruments, providing suggestions to guide NGO engagement in the reporting process.

Papakonstantinou & de Hert on The Regulation of Digital Technologies in the EU

Vagelis Papakonstantinou (Faculty of Law and Criminology, Vrije Universiteit Brussel) and Paul De Hert (Free University of Brussels (VUB) – LSTS; Tilburg University – TILT) have posted “The Regulation of Digital Technologies in the EU: The Law-Making Phenomena of ‘Act-ification’, ‘GDPR Mimesis’ and ‘EU Law Brutality’” (Technology and Regulation Journal 2022) on SSRN. Here is the abstract:

EU regulatory initiatives on technology-related topics have spiked over the past few years. On the basis of its Priorities Programme 2019-2024, while creating a “Europe fit for the Digital Age”, the EU Commission has been busy releasing new texts aimed at regulating a number of technology topics, including, among others, data uses, online platforms, cybersecurity, and artificial intelligence. This paper identifies three basic phenomena common to all, or most, of the EU’s new technology-relevant regulatory initiatives, namely (a) “act-ification”, (b) “GDPR mimesis”, and (c) “regulatory brutality”. These phenomena reveal new-found confidence on the part of the EU technology legislator, which has by now asserted for itself the right to form policy options and create new rules in the field for all of Europe. These three phenomena serve as indicators or early signs of a new European technology law-making paradigm that by now seems ready to emerge.

Cheong on Avatars in the Metaverse: Potential Legal Issues and Remedies

Ben Chester Cheong (Singapore University) has posted “Avatars in the Metaverse: Potential Legal Issues and Remedies” (International Cybersecurity Law Review 2022) on SSRN. Here is the abstract:

This article discusses some of the issues surrounding an avatar of a real-life person in a metaverse. Given that the anticipated rise of the metaverse is a developing area, the first part of the article discusses what the metaverse would entail, some suggestions on what these avatars would be like, why such avatars’ rights should be protected, and whether consciousness should be a defining characteristic before these rights are granted. The second part of the article analyses incorporation techniques to grant legal personality to avatars in the metaverse, as well as some of the potential harms that avatars could cause there, potentially leading to an extension of the real world. The third part of the article deals with imposing liability on a real-life person by lifting the ‘veil’ of the avatar to identify the real person behind the avatar through four foreseeable scenarios, i.e. fraud, defamation, identity theft and crime. The article briefly explores other potential legal issues in the metaverse. The article makes two final recommendations: the possibility of statutory remedies, and judicial interpretation to rectify torts committed by avatars.

Schrepel on Law + Technology

Thibault Schrepel (University Paris 1 Panthéon-Sorbonne; VU University Amsterdam; Stanford University’s Codex Center; Sciences Po) has posted “Law + Technology” (Stanford University CodeX Research Paper Series 2022) on SSRN. Here is the abstract:

The classical “law & technology” approach focuses on harms created by technology. This approach seems to be common sense; after all, why be interested—from a legal standpoint—in situations where technology does not cause damage? On close inspection, another approach dubbed “law + technology” can better increase the common good.

The “+” approach builds on complexity science to consider both the issues and positive contributions technology brings to society. The goal is to address the negative ramifications of technology while leveraging its positive regulatory power. Achieving this double objective requires policymakers and regulators to consider a range of intervention methods and choose the ones that are most suitable.

Mayson on Personalized Law

Sandra G. Mayson (University of Pennsylvania Carey Law School) has posted “But What Is Personalized Law?” (University of Chicago Law Review Online, 2021) on SSRN. Here is the abstract:

In Personalized Law: Different Rules for Different People, Omri Ben-Shahar and Ariel Porat undertake to ground a burgeoning field of legal thought. The book imagines and thoughtfully assesses an array of personalized legal rules, including individualized speed limits, fines calibrated to income, and medical disclosure requirements responsive to individual health profiles. Throughout, though, the conceptual parameters of “personalized law” remain elusive. It is clear that personalized law involves more data, more machine learning, and more direct communication to individuals. But it is not clear how deep these changes go. Are they incremental—just today’s law with better tech—or do they represent a change in the very nature of legal rules, as the authors claim?

This Essay aims to help clarify the concept of “personalized law” by starting from the nature of legal rules. Drawing on the scholarship of Frederick Schauer, it argues that all rules must generalize in some way, then offers a taxonomy of different forms of generalization. With the framework in place, it becomes clear that personalized law entails two shifts: (1) from rules that prescribe specific conduct toward rules that prescribe a social outcome, like a risk or efficiency target, and (2) toward greater ex ante specification of what rules require of individuals. Each shift has different practical and normative implications. Lastly, the Essay raises two potential costs of such shifts that Personalized Law does not address at length: the likely disparate racial impact of social-outcome rules driven by big data, and the possible loss of communal experience in shared-rule compliance.

Chesterman on The Robot Century

Simon Chesterman (National University of Singapore – Faculty of Law) has posted “The Robot Century” on SSRN. Here is the abstract:

The word ‘robot’ entered the modern lexicon a hundred years ago with the première at Prague’s National Theatre of Karel Čapek’s play R.U.R. in 1921. Set on an island ‘somewhere on our planet’, Rossum’s Universal Robots recounts the creation of roboti. Not so much mechanical creatures as stripped down versions of humans, they were biological entities created to be strong and intelligent, but without souls. Though dated in many ways — the limited humour derives from six men on the island vying for the hand of the only woman — the play was prescient in its vision of a world in which automatons are entrusted with serving ever more of humanity’s needs and, eventually, fighting its wars. Reviews of the New York production called it a ‘brilliant satire on our mechanized civilization; the grimmest yet subtlest arraignment of this strange, mad thing we call the industrial society of today.’ A century later, debates over the place of robots in society still echo themes in the play: how to take advantage of the benefits of technology without unacceptable risk; what entitlements are owed to entities that at least mimic and perhaps embody human qualities; what place is left for humanity if and when we are surpassed by our creations.