Pretelli on Internet Platform Users as Weaker Parties

Ilaria Pretelli (Swiss Institute of Comparative Law; University of Urbino) has posted “A Humanist Approach to Private International Law and the Internet: A Focus on Platform Users as Weaker Parties” (Yearbook of Private International Law, Volume 22 (2020/2021), pp. 201-243) on SSRN. Here is the abstract:

The apps and platforms that we use on a daily basis have increased the effective enjoyment of many fundamental rights enshrined in our constitutions and universal declarations. These were drafted to guarantee a fairer distribution of the benefits of human progress among the population. The present article argues that a humanist approach to private international law can bring just solutions to disputes arising from digital interactions. It analyses cases where platform users are pitted against a digital platform and cases where platform users are pitted against each other. For the first set of cases, an enhanced protection of digital platform users, as weaker parties, points to an expansion of the principle of favor laesi in tortious liability and to a restriction of the operation of party autonomy by clickwrapping, given that a gross inequality of bargaining power also exists in business-to-platform contracts. In the second set of cases, reliable guidance is offered by the principles of effectiveness and of protection of vulnerable parties. Exploiting the global reach of the internet to improve the situation of crowdworkers worldwide is also considered a task to which the ILO should seriously commit. In line with the most recent achievements in human rights due diligence, protection clauses pointing to destination-based labour standards would be a welcome step forward. The principle of effectiveness justifies the enforcement of court decisions in cyberspace, which has become a political and juridical necessity.

Guerra, Parisi & Pi on Liability for Robots II: An Economic Analysis

Alice Guerra (University of Bologna – Department of Economics), Francesco Parisi (University of Minnesota – Law School), and Daniel Pi (University of Maine – School of Law) have posted “Liability for Robots II: An Economic Analysis” (Journal of Institutional Economics, 2021) on SSRN. Here is the abstract:

This is the second of two companion papers that discuss accidents caused by robots. In the first paper (Guerra et al., 2021), we presented the novel problems posed by robot accidents, and assessed the related legal approaches and institutional opportunities. In this paper, we build on the previous analysis to consider a novel liability regime, which we refer to as the “manufacturer residual liability” rule. This rule makes operators and victims liable for accidents due to their negligence—hence, incentivizing them to act diligently; and makes manufacturers residually liable for non-negligent accidents—hence, incentivizing them to make optimal investments in R&D for robots’ safety. In turn, this rule will bring down the price of safer robots, driving unsafe technology out of the market. Thanks to the percolation effect of residual liability, operators will also be incentivized to adopt optimal activity levels in robots’ usage.

Recommended.

Pazos on The Case for a (European?) Law of Reputational Feedback Systems

Ricardo Pazos (Universidad Autónoma de Madrid – Faculty of Law) has posted “The Case for a (European?) Law of Reputational Feedback Systems” (InDret, Vol. 3, 2021) on SSRN. Here is the abstract:

Reputational feedback systems are essential in the digital economy, as tools to build trust between traders and consumers and to help the latter make better choices. Although the number of platforms using such systems is growing, some aspects undermine their reliability, endangering the proper functioning of the market. In this context, it might be convenient to create a “law of reputational feedback systems” – a comprehensive set of rules specifically aimed at online reviews and ratings, possibly at the European Union level, with the goal of contributing to the development of the digital single market. This paper aims to foster a debate on the matter. First, it presents the importance of reputational feedback systems and the weaknesses that affect them. Then, it addresses the fragmentation argument that favours legal harmonisation, without forgetting that harmonisation has downsides, too. Afterwards, some possible rules are envisaged, considering academic and institutional initiatives and norms that already exist. Finally, to balance the discussion, the paper also offers arguments suggesting that further regulating reputational feedback systems, or at least doing so at the European level, could be a step in the wrong direction.

An interesting expansion of the common law of reputational torts.

Seng on Artificial Intelligence and Information Intermediaries

Daniel Kiat Boon Seng (Director, Centre for Technology, Robotics, AI and the Law, Faculty of Law, National University of Singapore) has posted “Artificial Intelligence and Information Intermediaries” (Artificial Intelligence and Private Law, 2021) on SSRN. Here is the abstract:

The explosive growth of the Internet was supported by the Communications Decency Act (CDA) and the Digital Millennium Copyright Act (DMCA). Together, these pieces of legislation have been credited with shielding Internet intermediaries from onerous liabilities and, in doing so, enabling the Internet to flourish. However, the use of machine learning systems by Internet intermediaries in their businesses threatens to upend this delicate legal balance. Would this affect the intermediaries’ CDA and DMCA immunities, or expose them to greater liability for their actions? Drawing on both substantive and empirical research, this paper concludes that automation used by intermediaries largely reinforces their immunities. The consequence is that intermediaries are left with little incentive to exercise their discretion to filter out illicit, harmful and invalid content. These developments brought about by AI are worrisome and require a careful recalibration of the immunity rules in both the CDA and the DMCA to ensure the continued relevance of these rules.

Segura et al. on Car Accidents in the Age of Robots

Adrian Segura (Department of Law, Universitat Pompeu Fabra) et al. have published “Car Accidents in the Age of Robots” in the International Review of Law & Economics. Here is the abstract:

In this paper, we compare liability rules in a world where human-driven and fully-autonomous cars coexist. We develop a model in which a manufacturer can invest to improve the safety of autonomous cars. Human drivers may decide to purchase a fully-autonomous car to save the precaution costs of avoiding road accidents and to shift liability to the car manufacturer. As compared to the negligence rule, a strict liability regime on both human drivers and car manufacturers is shown to be a superior policy. In particular, strict liability leads to more efficient R&D investments to enhance the benefits of the technology and favors the adoption of fully-autonomous cars. We also recommend that users of fully-autonomous cars make a technology-dependent payment to a third party in the event of an accident, in order to discipline their activity levels.

Widen on Autonomous Vehicles, Moral Hazards & the ‘AV Problem’

William H. Widen (Miami Law) has posted “Autonomous Vehicles, Moral Hazards & the ‘AV Problem’” on SSRN. Here is the abstract:

The autonomous vehicle (“AV”) industry faces the following ethical question: “How do we know when our AV technology is safe enough to deploy at scale?” The search for an answer to this question is the “AV Problem.” This essay examines that question through the lens of the July 15, 2021 filing on Form S-4 with the Securities and Exchange Commission in the going-public transaction for Aurora Innovation, Inc.

The filing reveals that successful implementation of Aurora’s business plan in the long term depends on the truth of the following proposition: A vehicle controlled by a machine driver is safer than a vehicle controlled by a human driver (the “Safety Proposition”).

In a material omission for which securities law liability may attach, the S-4 fails to state Aurora’s position on deployment: will Aurora delay deployment until such time as it believes the Safety Proposition is true to a reasonable certainty, or will it deploy at scale earlier in the hope that increased current losses will be offset by anticipated future safety gains?

The Safety Proposition is a statement about physical probability which is either true or false. For success, AV companies need the public to believe the Safety Proposition, yet belief is not the same as truth. The difference between truth and belief creates tension in the S-4 because the filing both fosters a belief in the Safety Proposition and makes clear that there is insufficient evidence to support its truth.

A moral hazard results when financial pressures push for early deployment of AV systems before evidence shows that the Safety Proposition is true to a reasonable certainty. This problem is analyzed by comparison with the famous trolley problem in ethics and by consideration of corporate governance techniques which an AV company might use to ensure the integrity of its deployment decision process. The AV industry works to promote belief in the Safety Proposition in the hope that the public will accept that AV technology has benefits, thus avoiding the need to confront the truth of the Safety Proposition directly. This hinders meaningful public debate about the merits and timing of deployment of AV technology, raising the question of whether there is a place for meaningful government regulation.

Recommended.

Peng on Autonomous Vehicle Standards under the TBT Agreement

Shin-yi Peng (National Tsing Hua University) has posted “Autonomous Vehicle Standards under the TBT Agreement: Disrupting the Boundaries?” (in Shin-yi Peng, Ching-Fu Lin and Thomas Streinz (eds.), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (Cambridge University Press, 2021)) on SSRN. Here is the abstract:

Products that incorporate AI will require the development of a range of new standards. This chapter uses the case of connected and autonomous vehicle (CAV) standards as a window to explore how this “disruptive innovation” may alter the boundaries of international trade agreements. Amid the transition to a driverless future, the transformative nature of disruptive innovation renders the interpretation and application of trade rules challenging. This chapter offers a critical assessment of two systemic issues – the goods/services boundary and the public/private sector boundary. Looking to the future, regulations governing CAVs will become increasingly complex as the level of automation evolves to levels 3-5. The author argues that disruptive technologies have a more fundamental and structural impact on the existing trade disciplines.

McPeak on Platform Immunity Redefined

Agnieszka McPeak (Gonzaga University School of Law) has posted “Platform Immunity Redefined” (William & Mary Law Review, Vol. 62, No. 5, 2021) on SSRN. Here is the abstract:

Section 230 of the Communications Decency Act (CDA) immunizes “interactive computer services” from most claims arising out of third-party content posted on the service. Passed in 1996, section 230 is a vital law for allowing free expression online, but it is ill-suited for addressing some of the harms that arise in the modern platform-based economy.

This Article proposes to redefine section 230 immunity for sharing economy platforms and online marketplaces by tying internet platform immunity to the economic relationship between the platform and the third party. It primarily focuses on one key flaw of section 230: its binary classification of online actors as either “interactive computer services” (who are immune under the statute) or “information content providers” (who are not immune). This binary classification, while perhaps adequate for the internet that existed in 1996, fails to account for the full range of economic activities in which modern platforms now engage.

This Article argues that courts applying section 230 should incorporate joint enterprise liability theory to better define the contours of platform immunity. A platform should lose immunity when there exists a common business purpose, specific pecuniary interest, and shared right of control in the underlying transaction giving rise to liability. Sharing economy platforms, such as Airbnb and Uber, and online marketplaces, such as Amazon, are primary examples of platforms that may function as joint enterprises. By using joint enterprise theory to redefine platform immunity, this Article seeks to promote greater fairness to tort victims while otherwise retaining section 230’s core free expression purpose.

Lin on Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems

Ching-Fu Lin (National Tsing Hua University) has posted “Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems” (forthcoming in Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (Cambridge University Press, 2021)) on SSRN. Here is the abstract:

Automated driving system (ADS) technology is growing exponentially as one of the most promising AI applications. While governments worldwide have been promoting ADS, they have also been contemplating rules and standards in response to its legal, economic, and social ramifications. ADS promises to transform the ways in which people commute between places and connect with one another, altering the conventional division of labor, social interactions, and provision of services. Regulatory issues such as testing and safety, cybersecurity, connectivity, liability, and insurance are driving governments to establish comprehensive and consistent policy frameworks. Of key importance are the ethical challenges of ADS, which play a central role in building trust and confidence among consumers, societies, and governments. How to align ADS development with fundamental ethical principles embedded in a society—with its own values and cultural contexts—remains a difficult question. The “Trolley Problem” aptly demonstrates such tension. While it seems essential to have rules and standards reflecting local values and contexts, potential conflicts and duplication may have serious trade implications in terms of how ADS is designed, manufactured, distributed, serviced, and driven across borders. This chapter examines the multifaceted, complex, and fluid regulatory issues related to ADS and uses the most controversial, ethical dimension to analyze the tensions between the protection of public morals and of trade secrets under the WTO. It unpacks three levels of challenges that may translate into a regulatory dilemma in light of WTO Members’ rights and obligations under the GATT, the TBT Agreement, and the TRIPS Agreement, and identifies possible avenues for reconfiguration.

Buiten, de Streel & Peitz on EU Liability Rules for the Age of Artificial Intelligence

Miriam Buiten (University of St. Gallen), Alexandre de Streel (University of Namur), and Martin Peitz (University of Mannheim – Department of Economics) have posted “EU Liability Rules for the Age of Artificial Intelligence” on SSRN. Here is the abstract:

When Artificial Intelligence (AI) systems possess the characteristics of unpredictability and autonomy, they present challenges for the existing liability framework. Two questions about the liability of AI deserve attention from policymakers: 1) Do existing civil liability rules adequately cover risks arising in the context of AI systems? 2) How would modified liability rules for producers, owners, and users of AI play out? This report addresses these two questions for EU non-contractual liability rules. It considers how liability rules affect the incentives of producers, users, and others who may be harmed by AI. The report provides concrete recommendations for updating the EU Product Liability Directive and for the possible legal standard and scope of EU liability rules for owners and users of AI.

Recommended.