Tasioulas on The Rule of Algorithm and the Rule of Law

John Tasioulas (Oxford) has posted “The Rule of Algorithm and the Rule of Law” (Vienna Lectures on Legal Philosophy, 2023) on SSRN. Here is the abstract:

Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This article argues that answers to this question have been excessively focussed on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.

Kumar & Choudhury on Cognitive Moral Development in AI Robots

Shailendra Kumar (Sikkim University) and Sanghamitra Choudhury (University of Oxford) have posted “Cognitive Moral Development in AI Robots” on SSRN. Here is the abstract:

The widespread usage of artificial intelligence (AI) is prompting a number of ethical issues, including those involving concerns for fairness, surveillance, transparency, neutrality, and human rights. This manuscript explores the possibility and means of cognitive moral development in AI bots, and while doing so, it floats a new concept for the characterization and development of artificially intelligent and ethical robotic machines. It proposes a classification of the order of evolution of ethics in AI bots, making use of Lawrence Kohlberg’s study of cognitive moral development in humans. The manuscript further suggests that by providing appropriate inputs to AI robots in accordance with the proposed concept, humans may assist in the development of an ideal robotic creature that is morally responsible.

Katyal on Five Principles of Policy Reform for the Technological Age

Sonia Katyal (U California, Berkeley – School of Law) has posted “Lex Reformatica: Five Principles of Policy Reform for the Technological Age” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Almost twenty-five years ago, beloved former colleague Joel Reidenberg penned an article arguing that law and government regulation were not the only sources of authority and rulemaking in the Information Society. Rather, he argued that technology itself, particularly system design choices like network design and system configurations, can also impose similar regulatory norms on communities. These rules and systems, he argued, comprised a Lex Informatica—a term that Reidenberg coined in historical reference to “Lex Mercatoria,” a system of international, merchant-driven norms in the Middle Ages that emerged independent of localized sovereign control.

Today, however, we confront a different phenomenon, one that requires us to draw upon the wisdom of Reidenberg’s landmark work in considering the repercussions of the previous era. As much as Lex Informatica provided us with a descriptive lens to analyze the birth of the internet, we are now confronted with the aftereffects of decades of muted, if not absent, regulation. When technological social norms are allowed to develop outside of clear legal restraints, who wins? Who loses? In this new era, we face a new set of challenges—challenges that force us to confront a critical need for infrastructural reform that focuses on the interplay between public and private forms of regulation (and self-regulation), its costs, and its benefits.

Instead of demonstrating the richness, complexity, and promise of yesterday’s internet age, today’s events show us what precisely can happen in an age of information libertarianism, underscoring the need for a new approach to information regulation. The articles in this Issue are taken from two separate symposiums—one on Lex Informatica and another on race and technology law. At present, a conversation between them could not be any more necessary. Taken together, these papers showcase what I refer to as the Lex Reformatica of today’s digital age. This collection of papers demonstrates the need for scholars, lawyers, and legislators to return to Reidenberg’s foundational work and to update its trajectory towards a new era that focuses on the design of a new approach to reform.

Gervais on How Courts Can Define Humanness in the Age of Artificial Intelligence

Daniel J. Gervais (Vanderbilt University – Law School) has posted “Human as a Matter of Law: How Courts Can Define Humanness in the Age of Artificial Intelligence” on SSRN. Here is the abstract:

This Essay considers the ability of AI machines to perform intellectual functions long associated with human higher mental faculties as a form of sapience, a notion that more fruitfully describes their abilities than either intelligence or sentience. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics, and neuroscience, the essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction, and making it correctly will likely become gradually more important as humans become more like machines (cyborgs, cobots) and machines become more like humans (neural networks, robots with biological material). The essay draws a line that separates human and machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.

G’sell on AI Judges

Florence G’sell (Sciences Po Law School) has posted “AI Judges” (Larry A. DiMatteo, Cristina Poncibò & Michel Cannarsa (eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, Cambridge University Press, 2022) on SSRN. Here is the abstract:

The prospect of a “robot judge” gives rise to many fantasies and concerns. Some argue that only humans are endowed with the modes of thought, intuition, and empathy that would be necessary to analyze or judge a case. As early as 1976, Joseph Weizenbaum, creator of Eliza, one of the very first conversational agents, strongly asserted that important decisions should not be left to machines, which are sorely lacking in human qualities such as compassion and wisdom. On the other hand, it could be argued today that the courts would be wrong to deprive themselves of the possibilities opened up by artificial intelligence tools, whose capabilities are expected to improve greatly in the future. In reality, the question of the use of AI in the judicial system should probably be asked in a nuanced way, without considering the dystopian and highly unlikely scenario of the “robot judge” portrayed by Trevor Noah in a famous episode of The Daily Show. Rather, the question is how courts can benefit from increasingly sophisticated machines. To what extent can these tools help them render justice? What is their contribution in terms of decision support? Can we seriously consider delegating to a machine the entire power to make a judicial decision?

This chapter proceeds as follows. Section 23.2 is devoted to the use of AI tools by the courts. It is divided into three subsections. Section 23.2.1 deals with the use of risk assessment tools, which are widespread in the United States but highly regulated in Europe, particularly in France. Section 23.2.2 presents the possibilities opened up by machine learning algorithms trained on databases of judicial decisions, which are able to anticipate court decisions or recommend solutions to judges. Section 23.2.3 considers the very unlikely eventuality of full automation of judicial decision-making.

Fagan on The Un-Modeled World: Law and the Limits of Machine Learning

Frank Fagan (South Texas College of Law; EDHEC Augmented Law Institute) has posted “The Un-Modeled World: Law and the Limits of Machine Learning” (MIT Computational Law Report, Vol. 4 (Forthcoming 2022)) on SSRN. Here is the abstract:

There is today a pervasive concern that humans will not be able to keep up with accelerating technological progress in law and will become objects of sheer manipulation. Those who believe that human objectification is on the horizon offer solutions that require humans to take control, mostly by means of self-awareness and development of will. Among others, these strategies are present in Heidegger, Marcuse, and Habermas, as discussed here. But these solutions are not the only way. Technology itself offers a solution on its own terms. Machines can only learn if they can observe patterns, and those patterns must occur in sufficiently stable environments. Without detectable regularities and contextual invariance, machines remain prone to error. Yet humans innovate and things change. This means that innovation operates as a self-corrective—a built-in feature that limits the ability of technology to fully objectify human life and law without error. Fears of complete technological ascendance in law and elsewhere are therefore exaggerated, though interesting intermediate states are likely to obtain. Progress will proceed apace in closed legal domains, but models will require continual adaptation and updating in legal domains where human innovation and openness prevail.

Murtazashvili et al. on Blockchain Networks as Knowledge Commons

Ilia Murtazashvili (U Pitt – GSPIA), Jennifer Brick Murtazashvili (same), Martin B. H. Weiss (U Pitt – School of Computing and Information), and Michael J. Madison (U Pitt Law) have posted “Blockchain Networks as Knowledge Commons” (International Journal of the Commons, Vol. 16, p. 108, 2022) on SSRN. Here is the abstract:

Researchers interested in blockchains are increasingly attuned to questions of governance, including how blockchains relate to government, the ways blockchains are governed, and the ways blockchains can improve prospects for successful self-governance. Our paper joins this research by exploring the implications of the Governing Knowledge Commons (GKC) framework for analyzing the governance of blockchains. Our novel contributions are making the case that blockchain networks represent knowledge commons governance, in the sense that they rely on collectively-managed technologies to pool and manage distributed information; illustrating the usefulness and novelty of the GKC methodology with an empirical case study of the evolution of Bitcoin; and laying the foundation for a research program using the GKC approach.

Kaminski on Technological ‘Disruption’ of the Law’s Imagined Scene

Margot E. Kaminski (U Colorado Law; Yale ISP; U Colorado – Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Technological ‘Disruption’ of the Law’s Imagined Scene: Some Lessons from Lex Informatica” (Berkeley Technology Law Journal, Vol. 36, 2022) on SSRN. Here is the abstract:

Joel Reidenberg, in his 1998 article Lex Informatica, observed that technology can be a distinct regulatory force in its own right and claimed that law would arise in response to human needs. Today, law and technology scholarship continues to ask: does technology ever disrupt the law? This Article articulates one particular kind of “legal disruption”: how technology (or really, the social use of technology) can alter the imagined setting around which policy conversations take place—what Jack Balkin and Reva Siegel call the “imagined regulatory scene.” Sociotechnical change can alter the imagined regulatory scene’s architecture, upsetting a policy balance and undermining a particular regulation or regime’s goals. That is, sociotechnical change sometimes disturbs the imagined paradigmatic scenario not by departing from it entirely but by constraining, enabling, or mediating actors’ behavior that we want the law to constrain or protect. This Article identifies and traces this now common move in recent law and technology literature, drawing on Reidenberg’s influential and prescient work.

Desai & Lemley on Scarcity, Regulation, and the Abundance Society

Deven R. Desai (Georgia Institute of Technology – Scheller College of Business) and Mark A. Lemley (Stanford Law School) have posted “Scarcity, Regulation, and the Abundance Society” on SSRN. Here is the abstract:

New technologies continue to democratize, decentralize, and disrupt production, offering the possibility that scarcity will be a thing of the past for many industries. We call these technologies of abundance. But our economy and our legal institutions are based on scarcity.

Abundance lowers costs. When that happens, the elimination of scarcity changes the economics of how goods and services are produced and distributed. This doesn’t just follow a normal demand curve pattern – consumption increases as price declines. Rather, special things happen when costs approach zero.

Digitization and its effects on the production, organization, and distribution of information provide early examples of changes to markets and industries. Copyright industries went through upheaval and demands for new protections. But they are not alone. New technologies such as 3D printing, CRISPR, artificial intelligence, and synthetic biology are democratizing, decentralizing, and disrupting production in food and alcohol, biotechnology, and beyond, and even the production of innovation itself, opening the prospect of an abundance society in which people can print or otherwise obtain the things they want, including living organisms, on demand.

Abundance changes the social as well as economic context of markets. How will markets and legal institutions based on scarcity react when it is gone? Will we try to replicate that scarcity by imposing legal rules, as IP law does? Will the abundance of some things just create new forms of scarcity in others – the raw materials that feed 3D printers, for instance, or the electricity needed to feed AIs and cryptocurrency? Will we come up with new forms of artificial scarcity, as brands and non-fungible tokens (NFTs) do? Or will we reorder our economics and our society to focus on things other than scarcity? If so, what will that look like? And how will abundance affect the distribution of resources in society? Will we reverse the long-standing trend towards greater income inequality? Or will society find new ways to distinguish the haves from the have-nots?

Society already has examples of each type of response. The copyright industries survived the end of scarcity, and indeed thrived, not by turning to the law but by changing business practices, leveraging the scarcity inherent to live performances and using streaming technology to remove the market structures that fed unauthorized copying, and by reorganizing around distribution networks rather than content creators. Newsgathering, reporting, and distribution face challenges flowing from democratized, decentralized, and disrupted production. Luxury brands and NFTs offer examples of artificial scarcity created to reinforce a sort of modern sumptuary code. And we have seen effective, decentralized production based on economics of abundance in examples ranging from open-source software to Wikipedia.

In this introductory essay, we survey the potential futures of a post-scarcity society and offer some thoughts as to more (and less) socially productive ways to respond to the death of scarcity.

Shope on NGO Engagement in the Age of Artificial Intelligence

Mark Shope (National Yang Ming Chiao Tung University; Indiana University Robert H. McKinney School of Law) has posted “NGO Engagement in the Age of Artificial Intelligence” (Buffalo Human Rights Law Review, Vol. 28, pp. 119-158, 2022) on SSRN. Here is the abstract:

From AI- and human rights-focused NGOs to thematic NGOs whose subjects are impacted by AI, the AI and human rights discourse within NGOs has moved from simply keeping an eye on AI to being an integral part of NGO work. At the same time, the issue of AI and human rights is being addressed by governments in their policymaking and rulemaking to, for example, protect human rights and remain compliant with their responsibilities under international human rights instruments. When governments report to United Nations treaty bodies as required under international human rights instruments, and the reports and communications include topics of artificial intelligence, how and to what extent are NGOs engaging in this dialogue? This article explores how artificial intelligence can impact rights under the nine core human rights instruments and how NGOs should monitor States parties under these instruments, providing suggestions to guide NGO engagement in the reporting process.