Wagner on Liability Rules for the Digital Age – Aiming for the Brussels Effect

Gerhard Wagner (Humboldt University School of Law; University of Chicago Law School) has posted “Liability Rules for the Digital Age – Aiming for the Brussels Effect” on SSRN. Here is the abstract:

With legislative proposals for two directives published in September 2022, the EU Commission aims to adapt the existing liability system to the challenges posed by digitalization. One of the proposals remains related and limited to liability for artificially intelligent systems, but the other contains nothing less than a full revision of the 1985 Product Liability Directive, which lies at the heart of European tort law. Whereas the current Product Liability Directive largely followed the model of U.S. law, the revised version breaks new ground. It does not limit itself to the expansion of the concept of product to include intangible digital goods such as software and data as well as related services, important enough in itself, but also targets the new intermediaries of e-commerce as liable parties. With all of that, the proposal for a new product liability directive is a great leap forward and has the potential to grow into a worldwide benchmark in the field. In comparison, the proposal for a directive on AI liability is much harder to assess. It remains questionable whether a second directive is actually needed at this stage of technological development.

Soh on Legal Dispositionism and Artificially-Intelligent Attributions

Jerrold Soh (Singapore Management University – Yong Pung How School of Law) has posted “Legal Dispositionism and Artificially-Intelligent Attributions” (Legal Studies, forthcoming) on SSRN. Here is the abstract:

It is often said that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be faulted should the system’s actions cause harm. Since the system cannot be held liable on its own account either, existing laws expose victims to accountability gaps and require reform. Drawing on attribution theory, however, this article argues that the ‘autonomy’ that law tends to ascribe to AI is premised less on fact than science fiction. Specifically, the folk dispositionism that demonstrably underpins the legal discourse on AI liability, personality, publications, and inventions leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the article contends that AI systems are better conceptualised as situational characters whose actions remain constrained by their programming, and that properly viewing AI as such illuminates how existing legal doctrines could be sensibly applied to AI. In this light, the article advances a framework for re-conceptualising AI.

Recommended.

Hacker on The European AI Liability Directives

Philipp Hacker (European University Viadrina Frankfurt (Oder) – European New School of Digital Studies) has posted “The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future” on SSRN. Here is the abstract:

The optimal liability framework for AI systems remains an unsolved problem across the globe. With ChatGPT and other large models taking the technology to the next level, solutions are urgently needed. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive (AILD) and a revision of the Product Liability Directive (PLD). They constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a “Brussels effect” in AI regulation, with significant consequences for the US and other countries.

Against this background, this paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article makes suggestions for amendments to the proposed AI liability framework. They are collected in a concise Annex at the end of the paper. I argue, inter alia, that the dichotomy between the fault-based AILD Proposal and the supposedly strict liability PLD Proposal is fictional and should be abandoned; that an EU framework for AI liability should comprise one fully harmonizing regulation instead of two insufficiently coordinated directives; and that the current proposals unjustifiably collapse fundamental distinctions between social and individual risk by equating high-risk AI systems in the AI Act with those under the liability framework.

Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. More specifically, I make four key proposals. Effective compensation should be ensured by combining truly strict liability for certain high-risk AI systems with general presumptions of defectiveness, fault and causality in cases involving SMEs or non-high-risk AI systems. The paper introduces a novel distinction between illegitimate- and legitimate-harm models to delineate strict liability’s scope. Truly strict liability should be reserved for high-risk AI systems that, from a social perspective, should not cause harm (illegitimate-harm models, e.g., autonomous vehicles or medical AI). Models meant to cause some unavoidable harm by ranking and rejecting individuals (legitimate-harm models, e.g., credit scoring or insurance scoring) may only face rebuttable presumptions of defectiveness and causality. General-purpose AI systems should only be subjected to high-risk regulation, including liability for high-risk AI systems, in specific high-risk use cases for which they are deployed. Consumers ought to be liable based on regular fault, in general.

Furthermore, innovation and legal certainty should be fostered through a comprehensive regime of safe harbours, defined quantitatively to the best extent possible. Moreover, trustworthy AI remains an important goal for AI regulation. Hence, the liability framework must specifically extend to non-discrimination cases and provide for clear rules concerning explainability (XAI).

Finally, awareness for the climate effects of AI, and digital technology more broadly, is rapidly growing in computer science. In diametrical opposition to this shift in discourse and understanding, however, EU legislators thoroughly neglect environmental sustainability in both the AI Act and the proposed liability regime. To counter this, I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).

Gunawan, Santos & Kamara on Redress for Dark Patterns Privacy Harms

Johanna Gunawan (Northeastern University Khoury College of Computer Sciences), Cristiana Santos (Utrecht University), and Irene Kamara (Tilburg University – Tilburg Institute for Law, Technology, and Society (TILT); Free University of Brussels (LSTS)) have posted “Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions” on SSRN. Here is the abstract:

Internet users are constantly subjected to incessant demands for attention in a noisy digital world. Countless inputs compete for the chance to be clicked, to be seen, and to be interacted with, and they can deploy tactics that take advantage of behavioral psychology to ‘nudge’ users into doing what they want. Some nudges are benign; others deceive, steer, or manipulate users, as the U.S. FTC Commissioner says, “into behavior that is profitable for an online service, but often harmful to [us] or contrary to [our] intent”. These tactics are dark patterns, which are manipulative and deceptive interface designs used at scale in more than ten percent of global shopping websites and more than ninety-five percent of the most popular apps in online services.

The literature discusses several types of harms caused by dark patterns, including harms of a material nature, such as financial harms or anticompetitive issues, as well as harms of a non-material nature, such as privacy invasion, time loss, addiction, cognitive burdens, loss of autonomy, and emotional or psychological distress. Through a comprehensive review of this scholarship and case law analysis conducted by our interdisciplinary team of HCI and legal scholars, this paper investigates whether harms caused by such dark patterns could give rise to redress for individuals subject to dark pattern practices, using consent interactions and the GDPR consent requirements as a case study.

Marchant on Swords and Shields: Impact of Private Standards in Technology-Based Liability

Gary E. Marchant (Arizona State University – College of Law) has posted “Swords and Shields: Impact of Private Standards in Technology-Based Liability” on SSRN. Here is the abstract:

Private voluntary standards are playing an ever greater role in the governance of many emerging technologies, including autonomous vehicles. Government regulation has lagged due to the ‘pacing problem’, in which technology moves faster than government regulation, and regulators lack the first-hand information that is mostly in the hands of industry and other experts in the field who often participate in standard-setting activities. Consequently, private standards have moved beyond historical tasks such as interoperability to now produce quasi-governmental policy specifications that address the risk management, governance, and privacy risks of emerging technologies. As the federal government has prudently concluded that promulgating government standards for autonomous vehicles would be premature at this time and may do more harm than good, private standards have become the primary governance tool for these vehicles. A number of standard-setting organizations, including the SAE, ISO, UL, and IEEE, have stepped forward to adopt a series of interlocking private standards that collectively govern autonomous vehicle safety. While these private standards were not developed with litigation in mind, they could provide a useful benchmark for judges and juries to use in evaluating the safety of autonomous vehicles and whether compensatory and punitive damages are appropriate after an injury-causing accident involving an autonomous vehicle. Drawing on several decades of relevant case law, this paper argues that a manufacturer’s conformance with private standards for autonomous vehicle safety should be a partial shield against liability, whereas failure to conform to such standards should be a partial sword used by plaintiffs to show lack of due care.

Husovec & Roche Laguna on the Digital Services Act: A Short Primer

Martin Husovec (London School of Economics – Law School) and Irene Roche Laguna (European Commission) have posted “Digital Services Act: A Short Primer” (in Principles of the Digital Services Act (Oxford University Press, Forthcoming 2023)) on SSRN. Here is the abstract:

This article provides a short primer on the forthcoming Digital Services Act (DSA). The DSA is an EU Regulation aiming to assure fairness, trust, and safety in the digital environment. It preserves and upgrades the liability exemptions for online intermediaries that have existed in the European framework since 2000. It exempts digital infrastructure-layer services, such as internet access providers, and application-layer services, such as social networks and file-hosting services, from liability for third-party content. Simultaneously, the DSA imposes due diligence obligations concerning the design and operation of such services, in order to ensure a safe, transparent and predictable online ecosystem. These due diligence obligations aim to regulate the general design of services, content moderation practices, advertising, and transparency, including the sharing of information. The due diligence obligations focus mainly on process and design rather than the content itself, and usually correspond to the size and social relevance of the various services. Very large online platforms and very large online search engines are subject to the most extensive risk mitigation responsibilities, which are subject to independent auditing.

Stein on Assuming the Risks of Artificial Intelligence

Amy L. Stein (University of Florida Levin College of Law) has posted “Assuming the Risks of Artificial Intelligence” (102 Boston University Law Review 2022) on SSRN. Here is the abstract:

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for these prospective plaintiffs injured by AI, first adopters who are often eager to “voluntarily” use the new technology but who often lack “knowledge” of AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. In addition to these AI-human interactions, this Article’s reevaluation can also help in other assumption of risk analyses and tort law generally to better address the evolving innovation-risk-consent trilemma.

Sharkey on Personalized Damages

Catherine M. Sharkey (NYU School of Law) has posted “Personalized Damages” (U. Chi. L. Rev. Online 2022) on SSRN. Here is the abstract:

In Personalized Law: Different Rules for Different People, Professors Omri Ben-Shahar and Ariel Porat imagine a brave new tort world wherein the ubiquitous reasonable person standard is replaced by myriad personalized “reasonable you” commands. Ben-Shahar and Porat’s asymmetrical embrace of personalized law—full stop for standards of care, near rejection for damages—raises four issues not sufficiently taken up in the book. First, the authors equivocate too much with regard to the purposes of tort law; ultimately, if and when forced to choose, law-and-economics deterrence-based theory holds the most promise for modern tort law. Second, the damage-uniformity approach clearly dominates the status quo of “crude” personalization. Third, via a deterrence lens that eschews “misalignments” in tort law, a personalized standard of care necessitates personalized damages. Fourth, the true benefit of an ideal personalized damages regime might be further uncovering the root cause of racial and gender disparities in status quo tort damages. Paradoxically, ideal personalization might then reinforce the damage-uniformity approach.

Haber & Harel Ben-Shahar on Algorithmic Parenting

Eldar Haber (University of Haifa – Faculty of Law) and Tammy Harel Ben-Shahar (same) have posted “Algorithmic Parenting” (32 Fordham Intell. Prop. Media & Ent. L.J. 1 (2021)) on SSRN. Here is the abstract:

Growing up in today’s world involves an increasing amount of interaction with technology. The rise in availability, accessibility, and use of the internet, along with social norms that encourage internet connection, makes it nearly impossible for children to avoid online engagement. The internet undoubtedly benefits children socially and academically, and mastering technological tools at a young age is indispensable for opening doors to valuable opportunities. However, the internet is risky for children in myriad ways. Parents and lawmakers are especially concerned with the tension between the important advantages technology bestows on children and the risks it poses to them.

New technological developments in artificial intelligence are beginning to alter the ways parents might choose to safeguard their children from online risks. Recently, emerging AI-based devices and services can automatically detect when a child’s online behavior indicates that their well-being might be compromised or when they are engaging in inappropriate online communication. This technology can notify parents or immediately block harmful content in extreme cases. Referred to as algorithmic parenting in this Article, this new form of parental control has the potential to cheaply and effectively protect children against digital harms. If designed properly, algorithmic parenting would also ensure children’s liberties by neither excessively infringing their privacy nor limiting their freedom of speech and access to information.

This Article offers a balanced solution to the parenting dilemma that allows parents and children to maintain a relationship grounded in trust and respect, while simultaneously providing a safety net in extreme cases of risk. In doing so, it addresses the following questions: What laws should govern platforms with respect to algorithms and data aggregation? Who, if anyone, should be liable when risky behavior goes undetected? Perhaps most fundamentally, relative to the physical world, do parents have a duty to protect their children from online harm? Finally, assuming that algorithmic parenting is a beneficial measure for protecting children from online risks, should legislators and policymakers use laws and regulations to encourage or even mandate the use of such algorithms to protect children? This Article offers a taxonomy of current online threats to children, an examination of the potential shift toward algorithmic parenting, and a regulatory toolkit to guide policymakers in making such a transition.

Ebers on Civil Liability for Autonomous Vehicles in Germany

Martin Ebers (Humboldt University of Berlin – Faculty of Law; University of Tartu, School of Law) has posted “Civil Liability for Autonomous Vehicles in Germany” on SSRN. Here is the abstract:

This paper deals with civil liability for autonomous driving under German law, and is structured as follows: After an introduction (I.) the paper provides an overview of the current legal framework (II.), followed by an analysis of the liability of drivers (III.), technical supervisors (IV.), vehicle keepers (V.), manufacturers (VI.) and IT service providers (VII.). An additional section deals with the question of how autonomous vehicles would be integrated into the insurance system (VIII.), whereas the last section draws some final conclusions (IX.).