Segura et al. on Car Accidents in the Age of Robots

Adrian Segura (Department of Law, Universitat Pompeu Fabra) et al. have published “Car accidents in the age of robots” in the International Review of Law and Economics. Here is the abstract:

In this paper, we compare liability rules in a world where human-driven and fully-autonomous cars coexist. We develop a model where a manufacturer can invest to improve the safety of autonomous cars. Human drivers may decide to purchase a fully-autonomous car to save on the precaution costs of avoiding road accidents and to shift liability to the car manufacturer. Compared to the negligence rule, a strict liability regime on both human drivers and car manufacturers proves to be a superior policy. In particular, strict liability leads to more efficient R&D investments that enhance the benefits of the technology, and it favors the adoption of fully-autonomous cars. We also recommend that users of fully-autonomous cars make a technology-dependent payment to a third party in the event of an accident, in order to discipline their activity levels.

Widen on Autonomous Vehicles, Moral Hazards & the ‘AV Problem’

William H. Widen (Miami Law) has posted “Autonomous Vehicles, Moral Hazards & the ‘AV Problem’” on SSRN. Here is the abstract:

The autonomous vehicle (“AV”) industry faces the following ethical question: “How do we know when our AV technology is safe enough to deploy at scale?” The search for an answer to this question is the “AV Problem.” This essay examines that question through the lens of the July 15, 2021 filing on Form S-4 with the Securities and Exchange Commission in the going-public transaction for Aurora Innovation, Inc.

The filing reveals that successful implementation of Aurora’s business plan in the long term depends on the truth of the following proposition: A vehicle controlled by a machine driver is safer than a vehicle controlled by a human driver (the “Safety Proposition”).

In a material omission for which securities law liability may attach, the S-4 fails to state Aurora’s position on deployment: will Aurora delay deployment until such time as it believes the Safety Proposition is true to a reasonable certainty or will it deploy at scale earlier in the hope that increased current losses will be offset by anticipated future safety gains?

The Safety Proposition is a statement about physical probability which is either true or false. For success, AV companies need the public to believe the Safety Proposition, yet belief is not the same as truth. The difference between truth and belief creates tension in the S-4 because the filing fosters belief in the Safety Proposition while at the same time making clear that there is insufficient evidence to support its truth.

A moral hazard results when financial pressures push for early deployment of AV systems before evidence shows that the Safety Proposition is true to a reasonable certainty. This problem is analyzed by comparison with the famous trolley problem in ethics and by consideration of corporate governance techniques which an AV company might use to ensure the integrity of its decision process for deployment. The AV industry works to promote belief in the Safety Proposition in the hope that the public will accept that AV technology has benefits, thus avoiding the need to confront the truth of the Safety Proposition directly. This hinders a meaningful public debate about the merits and timing of deployment of AV technology, raising the question of whether there is a place for meaningful government regulation.

Recommended.

Peng on Autonomous Vehicle Standards under the TBT Agreement

Shin-yi Peng (National Tsing Hua University) has posted “Autonomous Vehicle Standards under the TBT Agreement: Disrupting the Boundaries?” (in Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (Cambridge University Press, 2021)) on SSRN. Here is the abstract:

Products that incorporate AI will require the development of a range of new standards. This chapter uses the case of connected and autonomous vehicle (CAV) standards as a window to explore how this “disruptive innovation” may alter the boundaries of international trade agreements. Amid the transition to a driverless future, the transformative nature of disruptive innovation renders the interpretation and application of trade rules challenging. This chapter offers a critical assessment of two systemic issues: the goods/services boundary and the public/private sector boundary. Looking to the future, regulations governing CAVs will become increasingly complex as the level of automation evolves to levels 3-5. The author argues that disruptive technologies have a more fundamental and structural impact on the existing trade disciplines.

McPeak on Platform Immunity Redefined

Agnieszka McPeak (Gonzaga University School of Law) has posted “Platform Immunity Redefined” (William & Mary Law Review, Vol. 62, No. 5, 2021) on SSRN. Here is the abstract:

Section 230 of the Communications Decency Act (CDA) immunizes “interactive computer services” from most claims arising out of third-party content posted on the service. Passed in 1996, section 230 is a vital law for allowing free expression online, but it is ill-suited for addressing some of the harms that arise in the modern platform-based economy.

This Article proposes to redefine section 230 immunity for sharing economy platforms and online marketplaces by tying internet platform immunity to the economic relationship between the platform and the third party. It primarily focuses on one key flaw of section 230: its binary classification of online actors as either “interactive computer services” (who are immune under the statute) or “information content providers” (who are not immune). This binary classification, while perhaps adequate for the internet that existed in 1996, fails to account for the full range of economic activities in which modern platforms now engage.

This Article argues that courts applying section 230 should incorporate joint enterprise liability theory to better define the contours of platform immunity. A platform should lose immunity when there exists a common business purpose, specific pecuniary interest, and shared right of control in the underlying transaction giving rise to liability. Sharing economy platforms, such as Airbnb and Uber, and online marketplaces, such as Amazon, are primary examples of platforms that may function as joint enterprises. By using joint enterprise theory to redefine platform immunity, this Article seeks to promote greater fairness to tort victims while otherwise retaining section 230’s core free expression purpose.

Lin on Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems

Ching-Fu Lin (National Tsing Hua University) has posted “Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems” (Forthcoming in Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (Cambridge University Press 2021)) on SSRN. Here is the abstract:

Automated driving systems (ADS) are growing exponentially as one of the most promising AI applications. While governments worldwide have been promoting ADS, they have also been contemplating rules and standards in response to its legal, economic, and social ramifications. ADS promises to transform the ways in which people commute between places and connect with one another, altering the conventional division of labor, social interactions, and provision of services. Regulatory issues such as testing and safety, cybersecurity, connectivity, liability, and insurance are driving governments to establish comprehensive and consistent policy frameworks. Of key importance are ADS’s ethical challenges, which play a central role in building trust and confidence among consumers, societies, and governments. How to align ADS development with the fundamental ethical principles embedded in a society—with its own values and cultural contexts—remains a difficult question. The “Trolley Problem” aptly demonstrates such tension. While it seems essential to have rules and standards reflecting local values and contexts, potential conflicts and duplication may have serious trade implications in terms of how ADS is designed, manufactured, distributed, serviced, and driven across borders. This chapter examines the multifaceted, complex, and fluid regulatory issues related to ADS and uses the most controversial, ethical dimension to analyze the tensions between the protection of public morals and of trade secrets under the WTO. This chapter unpacks three levels of challenges that may translate into a regulatory dilemma in light of WTO Members’ rights and obligations under the GATT, TBT Agreement, and TRIPS Agreement, and identifies possible avenues of reconfiguration.

Buiten, de Streel & Peitz on EU Liability Rules for the Age of Artificial Intelligence

Miriam Buiten (University of St. Gallen), Alexandre de Streel (University of Namur), and Martin Peitz (University of Mannheim – Department of Economics) have posted “EU Liability Rules for the Age of Artificial Intelligence” on SSRN. Here is the abstract:

When Artificial Intelligence (AI) systems possess the characteristics of unpredictability and autonomy, they present challenges for the existing liability framework. Two questions about the liability of AI deserve attention from policymakers: 1) Do existing civil liability rules adequately cover risks arising in the context of AI systems? 2) How would modified liability rules for producers, owners, and users of AI play out? This report addresses these two questions for EU non-contractual liability rules. It considers how liability rules affect the incentives of producers, users, and others that may be harmed by AI. The report provides concrete recommendations for updating the EU Product Liability Directive and for the possible legal standard and scope of EU liability rules for owners and users of AI.

Recommended.

Rachlinski & Wistrich on Judging Autonomous Vehicles

Jeffrey J. Rachlinski (Cornell Law School) & Andrew J. Wistrich (U.S. District Court, Central District of California) have posted “Judging Autonomous Vehicles” on SSRN. Here is the abstract:

The introduction of any new technology challenges judges to determine how it fits into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough. The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges, as this technology will alter the foundation of the largest source of civil liability in the United States. Although regulatory agencies will determine when and how autonomous cars may be placed into service, judges will likely play a central role in defining the standards for liability for them. How will judges treat this new technology? People commonly exhibit biases against innovations, such as a naturalness bias, in which people disfavor injuries arising from artificial sources. In this paper we present data from 933 trial judges showing that judges exhibit bias against self-driving vehicles. They both assigned more liability to a self-driving vehicle than they would to a human-driven vehicle and treated injuries caused by a self-driving vehicle as more serious than injuries caused by a human-driven vehicle.

Wansley on The End of Accidents

Matthew Wansley (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “The End of Accidents” on SSRN. Here is the abstract:

In the next decade, humans will increasingly share the roads with autonomous vehicles (AVs). The deployment of AVs has the potential to dramatically reduce the frequency and severity of motor vehicle crashes. Existing liability rules give companies developing AVs insufficient incentives to develop that potential. Data from real-world autonomous driving indicates that today’s most advanced AVs rarely cause crashes, but often fail to avoid preventable crashes caused by other road users’ errors. A growing number of scholars have proposed reforms that would make it easier for plaintiffs injured in crashes with AVs to hold AV companies liable. These reform proposals either ignore the issue of comparative negligence or would preserve some form of the defense. If AV companies avoid liability for crashes in which a human road user was negligent, they will not invest in developing technology that could prevent those crashes. This Article proposes a solution: AV companies should be held responsible for all crashes in which their AVs come into contact with other vehicles, persons, or property—regardless of fault, cause, or comparative negligence. Contact responsibility would cause AV companies to internalize the costs of all preventable crashes and lead them to make all cost-justified investments in developing safer technology. Crashes would no longer be treated as regrettable but inevitable accidents; they would instead be engineering problems to be solved.

Kesan & Zhang on When Is A Cyber Incident Likely to Be Litigated and How Much Will It Cost? An Empirical Study 

Jay P. Kesan (University of Illinois College of Law) and Linfeng Zhang (University of Illinois Department of Mathematics) have posted “When Is A Cyber Incident Likely to Be Litigated and How Much Will It Cost? An Empirical Study” (Connecticut Insurance Law Journal, Forthcoming) on SSRN. Here is the abstract:

Numerous cyber incidents have shown that there are substantial legal risks associated with these events. However, empirical analysis of the legal aspects of cyber risk is largely missing from the existing literature. Based on a dataset of historical cyber incidents and cyber-related litigation cases, we provide one of the earliest quantitative studies of the likelihood of cyber incidents being litigated and the cost of settling a cyber-related case. Using regression models, we show that certain company and incident characteristics play an important role in determining litigation probability and settlement costs, and the models proposed in the paper display good explanatory power. Our findings show that the lack of Article III standing is commonplace in cyber-related cases and that relying solely on the common law system makes it difficult for victims of malicious data breaches to sue and receive legal remedies. In addition, we demonstrate that our findings have valuable implications for enterprise risk management in terms of how the legal risk associated with different types of cyber risk should be properly addressed.

Abbott on The Reasonable Robot

Ryan Abbott (University of Surrey School of Law; University of California, Los Angeles – David Geffen School of Medicine) has posted an excerpt from his book “The Reasonable Robot: Artificial Intelligence and the Law” on SSRN. Here is the abstract:

AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law.