Wagner on Liability Rules for the Digital Age – Aiming for the Brussels Effect

Gerhard Wagner (Humboldt University School of Law; University of Chicago Law School) has posted “Liability Rules for the Digital Age – Aiming for the Brussels Effect” on SSRN. Here is the abstract:

With legislative proposals for two directives published in September 2022, the EU Commission aims to adapt the existing liability system to the challenges posed by digitalization. One of the proposals is limited to liability for artificially intelligent systems, but the other contains nothing less than a full revision of the 1985 Product Liability Directive, which lies at the heart of European tort law. Whereas the current Product Liability Directive largely followed the model of U.S. law, the revised version breaks new ground. It does not limit itself to expanding the concept of product to include intangible digital goods such as software and data as well as related services, important enough in itself, but also targets the new intermediaries of e-commerce as liable parties. With all of that, the proposal for a new product liability directive is a great leap forward and has the potential to grow into a worldwide benchmark in the field. In comparison, the proposal for a directive on AI liability is much harder to assess. It remains questionable whether a second directive is actually needed at this stage of technological development.

Tasioulas on The Rule of Algorithm and the Rule of Law

John Tasioulas (Oxford) has posted “The Rule of Algorithm and the Rule of Law” (Vienna Lectures on Legal Philosophy (2023)) on SSRN. Here is the abstract:

Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This article argues that answers to this question have been excessively focussed on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.

Soh on Legal Dispositionism and Artificially-Intelligent Attributions

Jerrold Soh (Singapore Management University – Yong Pung How School of Law) has posted “Legal Dispositionism and Artificially-Intelligent Attributions” (Legal Studies, forthcoming) on SSRN. Here is the abstract:

It is often said that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be faulted should the system’s actions cause harm. Since the system cannot be held liable on its own account either, existing laws expose victims to accountability gaps and require reform. Drawing on attribution theory, however, this article argues that the ‘autonomy’ that law tends to ascribe to AI is premised less on fact than on science fiction. Specifically, the folk dispositionism that demonstrably underpins the legal discourse on AI liability, personality, publications, and inventions leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the article contends that AI systems are better conceptualised as situational characters whose actions remain constrained by their programming, and that properly viewing AI as such illuminates how existing legal doctrines could sensibly be applied to AI. In this light, the article advances a framework for re-conceptualising AI.