Nemitz on Culture, Democracy, the Rule of Law and the New Vienna School of Critical Practice of AI

Paul Nemitz (European Commission) has posted “Culture, Democracy, the Rule of Law and the New Vienna School of Critical Practice of AI” on SSRN. Here is the abstract:

From a European perspective, Austria is the hotbed of a new democratic, critical practice of AI. In contrast to the Frankfurt School and its critical theory, this new Vienna School is not primarily concerned with theory, but with the practice of shaping the new digital world, with its power imbalances and new risks for fundamental rights of people, democracy and the rule of law. The practice to be shaped is the practice of technology and business models of AI and the digital.

Singh on GenAI and Religion: Creation, Agency, and Meaning

Dr Preet Deep Singh (Invest India) has posted “GenAI and Religion: Creation, Agency, and Meaning” on SSRN. Here is the abstract:

This paper explores the parallels between Generative Artificial Intelligence (GenAI) and religious systems in three domains: creation, agency, and meaning-making. Both offer frameworks for human engagement but differ in intent, autonomy, and moral accountability. Despite these differences, GenAI and religion share roles as creators, influencers, and meaning facilitators. We address and counter rebuttals to these parallels, highlighting GenAI’s co-constructed outputs and its impact on modern meaning-making. The paper concludes with the societal implications of these parallels in shaping future thought and action.

Jurcys et al. on Who Owns My AI Twin? Rethinking Ownership and Rights in a New World of Hybrid Identities

Paul Jurcys (Prifina) et al. have posted “Who Owns My AI Twin? Rethinking Ownership and Rights in a New World of Hybrid Identities” on SSRN. Here is the abstract:

This paper explores the legal and ethical complexities surrounding the ownership and rights of AI-powered digital twins—digital replicas of individuals that encapsulate their knowledge, behavior, and identity. As AI twins become increasingly integral to our digital lives, the authors argue for recognizing individuals as the moral and legal owners of these entities, given that they are constructed from personal data. The paper critiques current legal frameworks, which often prioritize technological infrastructure over personal data, and advocates for a new social contract that centers on human-centric data models. This approach aims to empower individuals, ensuring they have greater control over their digital identities while addressing the broader implications of AI-driven personal representation.

Tobia on Algorithmic Legal Interpretation

Kevin Tobia (Georgetown U Law Center) has posted “Algorithmic Legal Interpretation” (University of Chicago Law Review Online (forthcoming 2024)) on SSRN. Here is the abstract:

Legal interpretation has taken an empirical turn, with scholars and judges debating the use of corpus linguistics, surveys, and experiments in interpretation. Professor Choi’s Measuring Clarity in Legal Text offers a new proposal: interpretation by artificial intelligence. The Article impressively and thoughtfully considers contributions from word embeddings, representations of naturally occurring language in a multi-dimensional vector space, driven by machine learning algorithms.

The Article expresses some caution and some optimism about its proposal. This Response endorses the caution: Words’ proximity in vector space (measured by cosine similarity) is not conclusive of a legal text’s clarity or ambiguity, and judges should not rely on such outputs of algorithmic tools to settle interpretation. Nor should judges look to the outputs of ChatGPT or other LLMs as answers to legal interpretation. Nevertheless, the Article’s new empirical approach usefully illuminates central assumptions and tensions in legal interpretive theories. In sum, Measuring Clarity in Legal Text is an important contribution, opening new, timely, and rich debates about artificial intelligence’s contributions to legal interpretation.
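As a purely illustrative aside (not drawn from Choi's Article or Tobia's Response), the "proximity in vector space measured by cosine similarity" mentioned above can be sketched in a few lines of Python; the toy three-dimensional vectors and word choices here are invented for illustration, whereas real embedding models use hundreds of dimensions learned from corpora.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors:
    # dot(u, v) / (|u| * |v|); ranges from -1 (opposed) to 1 (identical direction).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy "embeddings" for three words (real models are far larger).
vehicle = [0.9, 0.1, 0.0]
car     = [0.8, 0.2, 0.1]
statute = [0.1, 0.9, 0.3]

# Words used in similar contexts point in similar directions:
print(cosine_similarity(vehicle, car))      # high (near 1)
print(cosine_similarity(vehicle, statute))  # much lower
```

The point of the Response survives the sketch: two words scoring high on such a measure tells a court something about distributional usage, not whether a legal text is clear or ambiguous.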

Tasioulas on The Rule of Algorithm and the Rule of Law

John Tasioulas (Oxford) has posted “The Rule of Algorithm and the Rule of Law” (Vienna Lectures on Legal Philosophy (2023)) on SSRN. Here is the abstract:

Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This article argues that answers to this question have been excessively focussed on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.

Kumar & Choudhury on Cognitive Moral Development in AI Robots

Shailendra Kumar (Sikkim University) and Sanghamitra Choudhury (University of Oxford) have posted “Cognitive Moral Development in AI Robots” on SSRN. Here is the abstract:

The widespread usage of artificial intelligence (AI) is prompting a number of ethical issues, including those involving concerns for fairness, surveillance, transparency, neutrality, and human rights. This manuscript explores the possibility and means of cognitive moral development in AI bots, and while doing so, it floats a new concept for the characterization and development of artificially intelligent and ethical robotic machines. It proposes the classification of the order of evolution of ethics in the AI bots, by making use of Lawrence Kohlberg’s study related to cognitive moral development in humans. The manuscript further suggests that by providing appropriate inputs to AI robots in accordance with the proposed concept, humans may assist in the development of an ideal robotic creature that is morally responsible.

Katyal on Five Principles of Policy Reform for the Technological Age

Sonia Katyal (U California, Berkeley – School of Law) has posted “Lex Reformatica: Five Principles of Policy Reform for the Technological Age” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Almost twenty-five years ago, beloved former colleague Joel Reidenberg penned an article that argued that law and government regulation were not the only sources of authority and rulemaking in the Information Society. Rather, he argued that technology itself, particularly system design choices like network design and system configurations, can also impose similar regulatory norms on communities. These rules and systems, he argued, comprised a Lex Informatica—a term that Reidenberg coined in historical reference to “Lex Mercatoria,” a system of international, merchant-driven norms in the Middle Ages that emerged independent of localized sovereign control.

Today, however, we confront a different phenomenon, one that requires us to draw upon the wisdom of Reidenberg’s landmark work in considering the repercussions of the previous era. As much as Lex Informatica provided us with a descriptive lens to analyze the birth of the internet, we are now confronted with the aftereffects of decades of muted, if not absent, regulation. When technological social norms are allowed to develop outside of clear legal restraints, who wins? Who loses? In this new era, we face a new set of challenges—challenges that force us to confront a critical need for infrastructural reform that focuses on the interplay between public and private forms of regulation (and self-regulation), its costs, and its benefits.

Instead of demonstrating the richness, complexity, and promise of yesterday’s internet age, today’s events show us what precisely can happen in an age of information libertarianism, underscoring the need for a new approach to information regulation. The articles in this Issue are taken from two separate symposiums—one on Lex Informatica and another on race and technology law. At present, a conversation between them could not be any more necessary. Taken together, these papers showcase what I refer to as the Lex Reformatica of today’s digital age. This collection of papers demonstrates the need for scholars, lawyers, and legislators to return to Reidenberg’s foundational work and to update its trajectory towards a new era that focuses on the design of a new approach to reform.

Gervais on How Courts Can Define Humanness in the Age of Artificial Intelligence

Daniel J. Gervais (Vanderbilt University – Law School) has posted “Human as a Matter of Law: How Courts Can Define Humanness in the Age of Artificial Intelligence” on SSRN. Here is the abstract:

This Essay considers the ability of AI machines to perform intellectual functions long associated with human higher mental faculties as a form of sapience, a notion that more fruitfully describes their abilities than either intelligence or sentience. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics and neuroscience, the essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction, and making it correctly will likely become gradually more important as humans become more like machines (cyborgs, cobots) and machines more like humans (neural networks, robots with biological material). The essay draws a line that separates human and machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.

G’sell on AI Judges

Florence G’sell (Sciences Po Law School) has posted “AI Judges” (Larry A. DiMatteo, Cristina Poncibò & Michel Cannarsa (eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, Cambridge University Press, 2022) on SSRN. Here is the abstract:

The prospect of a “robot judge” prompts many fantasies and concerns. Some argue that only humans are endowed with the modes of thought, intuition and empathy that would be necessary to analyze or judge a case. As early as 1976, Joseph Weizenbaum, creator of Eliza, one of the very first conversational agents, strongly asserted that important decisions should not be left to machines, which are sorely lacking in human qualities such as compassion and wisdom. On the other hand, it could be argued today that the courts would be wrong to deprive themselves of the possibilities opened up by artificial intelligence tools, whose capabilities are expected to improve greatly in the future. In reality, the question of the use of AI in the judicial system should probably be asked in a nuanced way, without considering the dystopian and highly unlikely scenario of the “robot judge” portrayed by Trevor Noah in a famous episode of The Daily Show. Rather, the question is how courts can benefit from increasingly sophisticated machines. To what extent can these tools help them render justice? What is their contribution in terms of decision support? Can we seriously consider delegating to a machine the entire power to make a judicial decision?

This chapter proceeds as follows. Section 23.2 is devoted to the use of AI tools by the courts. It is divided into three subsections. Section 23.2.1 deals with the use of risk assessment tools, which are widespread in the United States but highly regulated in Europe, particularly in France. Section 23.2.2 presents the possibilities opened up by machine learning algorithms trained on databases of judicial decisions, which are able to anticipate court decisions or recommend solutions to judges. Section 23.2.3 considers the very unlikely eventuality of full automation of judicial decision making.

Fagan on The Un-Modeled World: Law and the Limits of Machine Learning

Frank Fagan (South Texas College of Law; EDHEC Augmented Law Institute) has posted “The Un-Modeled World: Law and the Limits of Machine Learning” (MIT Computational Law Report, Vol. 4 (Forthcoming 2022)) on SSRN. Here is the abstract:

There is today a pervasive concern that humans will not be able to keep up with accelerating technological progress in law and will become objects of sheer manipulation. Those who believe that human objectification is on the horizon offer solutions that require humans to take control, mostly by means of self-awareness and development of will. Among others, these strategies are present in Heidegger, Marcuse, and Habermas, as presently discussed. But these solutions are not the only way. Technology itself offers a solution on its own terms. Machines can only learn if they can observe patterns, and those patterns must occur in sufficiently stable environments. Without detectable regularities and contextual invariance, machines remain prone to error. Yet humans innovate and things change. This means that innovation operates as a self-corrective—a built-in feature that limits the ability of technology to objectify human life and law error-free. Fears of complete technological ascendance in law and elsewhere are therefore exaggerated, though interesting intermediate states are likely to obtain. Progress will proceed apace in closed legal domains, but models will require continual adaptation and updating in legal domains where human innovation and openness prevail.