Gunawan, Santos & Kamara on Redress for Dark Patterns Privacy Harms

Johanna Gunawan (Northeastern University Khoury College of Computer Sciences), Cristiana Santos (Utrecht University), and Irene Kamara (Tilburg University – Tilburg Institute for Law, Technology, and Society (TILT); Free University of Brussels (LSTS)) have posted “Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions” on SSRN. Here is the abstract:

Internet users are subjected to incessant demands for attention in a noisy digital world. Countless inputs compete for the chance to be clicked, to be seen, and to be interacted with, and they can deploy tactics that take advantage of behavioral psychology to ‘nudge’ users into doing what they want. Some nudges are benign; others deceive, steer, or manipulate users, as one U.S. FTC Commissioner puts it, “into behavior that is profitable for an online service, but often harmful to [us] or contrary to [our] intent”. These tactics are dark patterns: manipulative and deceptive interface designs deployed at scale, appearing on more than ten percent of global shopping websites and in more than ninety-five percent of the most popular apps.

The literature discusses several types of harms caused by dark patterns, including harms of a material nature, such as financial harm and anticompetitive effects, as well as harms of a non-material nature, such as privacy invasion, time loss, addiction, cognitive burdens, loss of autonomy, and emotional or psychological distress. Through a comprehensive review of this scholarship and a case law analysis conducted by our interdisciplinary team of HCI and legal scholars, this paper investigates whether the harms caused by dark patterns could give rise to redress for the individuals subjected to them, using consent interactions and the GDPR’s consent requirements as a case study.
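To make the consent-interaction case study concrete, here is a minimal, hypothetical sketch of the kind of interface asymmetry the dark-patterns literature describes. The banner shape, field names, and click counts are illustrative assumptions, not the authors’ materials; the one legal anchor is that pre-ticked boxes do not yield valid consent under the GDPR (CJEU, Planet49).

```typescript
// Hypothetical model of a consent banner, flagging two well-documented
// dark patterns: pre-ticked purposes and asymmetric effort to refuse.
interface ConsentOption {
  purpose: string;
  preChecked: boolean; // pre-ticked boxes do not constitute valid GDPR consent
}

interface ConsentBanner {
  options: ConsentOption[];
  clicksToAcceptAll: number; // interaction cost of consenting
  clicksToRejectAll: number; // interaction cost of refusing
}

// Refusing should be no harder than accepting, and non-essential
// purposes must start unchecked; anything else nudges toward consent.
function looksLikeDarkPattern(banner: ConsentBanner): boolean {
  const asymmetricEffort = banner.clicksToRejectAll > banner.clicksToAcceptAll;
  const preTicked = banner.options.some(
    (o) => o.purpose !== "strictly necessary" && o.preChecked,
  );
  return asymmetricEffort || preTicked;
}

const banner: ConsentBanner = {
  options: [
    { purpose: "strictly necessary", preChecked: true },
    { purpose: "behavioural advertising", preChecked: true }, // dark pattern
  ],
  clicksToAcceptAll: 1,
  clicksToRejectAll: 3, // refusal buried behind a settings screen
};

console.log(looksLikeDarkPattern(banner)); // true
```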

Marchant on Swords and Shields: Impact of Private Standards in Technology-Based Liability

Gary E. Marchant (Arizona State University – College of Law) has posted “Swords and Shields: Impact of Private Standards in Technology-Based Liability” on SSRN. Here is the abstract:

Private voluntary standards are playing an ever-greater role in the governance of many emerging technologies, including autonomous vehicles. Government regulation has lagged due to the ‘pacing problem’, in which technology moves faster than government regulation and regulators lack the first-hand information that rests mostly in the hands of industry and other experts in the field, who often participate in standard-setting activities. Consequently, private standards have moved beyond historical tasks such as interoperability to produce quasi-governmental policy specifications that address the risk management, governance, and privacy risks of emerging technologies. As the federal government has prudently concluded that promulgating government standards for autonomous vehicles would be premature at this time and may do more harm than good, private standards have become the primary governance tool for these vehicles. A number of standard-setting organizations, including SAE, ISO, UL, and IEEE, have stepped forward to adopt a series of interlocking private standards that collectively govern autonomous vehicle safety. While these private standards were not developed with litigation in mind, they could provide a useful benchmark for judges and juries to use in evaluating the safety of autonomous vehicles and whether compensatory and punitive damages are appropriate after an injury-causing accident involving an autonomous vehicle. Drawing on several decades of relevant case law, this paper argues that a manufacturer’s conformance with private standards for autonomous vehicle safety should be a partial shield against liability, whereas failure to conform to such standards should be a partial sword for plaintiffs to establish lack of due care.

Husovec & Roche Laguna on the Digital Services Act: A Short Primer

Martin Husovec (London School of Economics – Law School) and Irene Roche Laguna (European Commission) have posted “Digital Services Act: A Short Primer” (in Principles of the Digital Services Act (Oxford University Press, Forthcoming 2023)) on SSRN. Here is the abstract:

This article provides a short primer on the forthcoming Digital Services Act (DSA). The DSA is an EU Regulation that aims to ensure fairness, trust, and safety in the digital environment. It preserves and upgrades the liability exemptions for online intermediaries that have existed in the European framework since 2000, exempting digital infrastructure-layer services, such as internet access providers, and application-layer services, such as social networks and file-hosting services, from liability for third-party content. Simultaneously, the DSA imposes due diligence obligations concerning the design and operation of such services in order to ensure a safe, transparent, and predictable online ecosystem. These obligations aim to regulate the general design of services, content moderation practices, advertising, and transparency, including the sharing of information. They focus mainly on process and design rather than on content itself, and generally correspond to the size and social relevance of the service. Very large online platforms and very large online search engines are subject to the most extensive risk mitigation responsibilities, compliance with which is independently audited.
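As a rough illustration of the Regulation’s tiered structure, the sketch below shows how obligations scale with service type and reach. The tier summaries are simplified and the classification ignores nuances such as the small and micro enterprise exemptions; the 45 million figure is the DSA’s threshold for designating very large platforms and search engines.

```typescript
// Simplified sketch of the DSA's layered due diligence obligations.
type ServiceType =
  | "mere_conduit" // e.g., internet access providers
  | "caching"
  | "hosting" // e.g., file-hosting services
  | "online_platform" // e.g., social networks, marketplaces
  | "search_engine";

// Designation threshold for "very large" services: 45 million average
// monthly active recipients in the EU (roughly 10% of the population).
const VERY_LARGE_THRESHOLD = 45_000_000;

function dueDiligenceTier(
  type: ServiceType,
  monthlyActiveRecipientsEU: number,
): string {
  const veryLarge = monthlyActiveRecipientsEU >= VERY_LARGE_THRESHOLD;
  if ((type === "online_platform" || type === "search_engine") && veryLarge) {
    return "VLOP/VLOSE: systemic risk assessment and mitigation, independent audits";
  }
  if (type === "online_platform") {
    return "platform: complaint handling, trusted flaggers, advertising transparency";
  }
  if (type === "hosting") {
    return "hosting: notice-and-action, statements of reasons";
  }
  return "all intermediaries: transparency reporting, points of contact, clear terms";
}

console.log(dueDiligenceTier("online_platform", 60_000_000)); // VLOP tier
```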

Stein on Assuming the Risks of Artificial Intelligence

Amy L. Stein (University of Florida Levin College of Law) has posted “Assuming the Risks of Artificial Intelligence” (102 Boston University Law Review 2022) on SSRN. Here is the abstract:

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital in shaping the likelihood of success for prospective plaintiffs injured by AI: first adopters who are often eager to “voluntarily” use the new technology yet often lacking in “knowledge” about AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. Beyond these AI-human interactions, this Article’s reevaluation can also help in other assumption of risk analyses and in tort law generally to better address the evolving innovation-risk-consent trilemma.

Sharkey on Personalized Damages

Catherine M. Sharkey (NYU School of Law) has posted “Personalized Damages” (U. Chi. L. Rev. Online 2022) on SSRN. Here is the abstract:

In Personalized Law: Different Rules for Different People, Professors Omri Ben-Shahar and Ariel Porat imagine a brave new tort world wherein the ubiquitous reasonable person standard is replaced by myriad personalized “reasonable you” commands. Ben-Shahar and Porat’s asymmetrical embrace of personalized law—full stop for standards of care, near rejection for damages—raises four issues not sufficiently taken up in the book. First, the authors equivocate too much with regard to the purposes of tort law; ultimately, if and when forced to choose, law-and-economics deterrence-based theory holds the most promise for modern tort law. Second, the damage-uniformity approach clearly dominates the status quo of “crude” personalization. Third, through a deterrence lens that eschews “misalignments” in tort law, a personalized standard of care necessitates personalized damages. Fourth, the true benefit of an ideal personalized damages regime might be further uncovering the root causes of racial and gender disparities in status quo tort damages. Paradoxically, ideal personalization might then reinforce the damage-uniformity approach.
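The deterrence point in the third issue can be stated in the textbook unilateral-care model (my assumption for illustration, not the review’s or the book’s own formalism): if optimal care varies with the victim’s harm, uniform damages cannot induce it.

```latex
% Unilateral-care model: care x costs x, p(x) is the (decreasing)
% accident probability, and H_i is the harm to victim i.
% The personalized social optimum solves
\[
  x_i^* = \arg\min_x \; \bigl[\, x + p(x)\,H_i \,\bigr]
  \quad\Longleftrightarrow\quad
  1 + p'(x_i^*)\,H_i = 0 .
\]
% An injurer facing damages D instead solves 1 + p'(x)D = 0, so the
% chosen care matches x_i^* only when D = H_i. Uniform damages
% \bar{D} = E[H] therefore misalign incentives for every injurer whose
% victim's harm differs from the mean: personalized care standards
% call for personalized damages.
```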

Haber & Harel Ben-Shahar on Algorithmic Parenting

Eldar Haber (University of Haifa – Faculty of Law) and Tammy Harel Ben-Shahar (same) have posted “Algorithmic Parenting” (32 Fordham Intell. Prop. Media & Ent. L.J. 1 (2021)) on SSRN. Here is the abstract:

Growing up in today’s world involves an increasing amount of interaction with technology. The rise in availability, accessibility, and use of the internet, along with social norms that encourage internet connection, makes it nearly impossible for children to avoid online engagement. The internet undoubtedly benefits children socially and academically, and mastering technological tools at a young age is indispensable for opening doors to valuable opportunities. However, the internet is risky for children in myriad ways. Parents and lawmakers are especially concerned with the tension between the important advantages technology bestows on children and the risks it poses to them.

New technological developments in artificial intelligence are beginning to alter the ways parents might choose to safeguard their children from online risks. Emerging AI-based devices and services can now automatically detect when a child’s online behavior indicates that their well-being might be compromised or when they are engaging in inappropriate online communication. This technology can notify parents or, in extreme cases, immediately block harmful content. Referred to in this Article as algorithmic parenting, this new form of parental control has the potential to cheaply and effectively protect children against digital harms. If designed properly, algorithmic parenting would also preserve children’s liberties by neither excessively infringing their privacy nor limiting their freedom of speech and access to information.

This Article offers a balanced solution to the parenting dilemma that allows parents and children to maintain a relationship grounded in trust and respect, while simultaneously providing a safety net in extreme cases of risk. In doing so, it addresses the following questions: What laws should govern platforms with respect to algorithms and data aggregation? Who, if anyone, should be liable when risky behavior goes undetected? Perhaps most fundamentally, relative to the physical world, do parents have a duty to protect their children from online harm? Finally, assuming that algorithmic parenting is a beneficial measure for protecting children from online risks, should legislators and policymakers use laws and regulations to encourage or even mandate the use of such algorithms to protect children? This Article offers a taxonomy of current online threats to children, an examination of the potential shift toward algorithmic parenting, and a regulatory toolkit to guide policymakers in making such a transition.
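The detect-notify-block escalation the abstract describes could be reduced to a policy as simple as the sketch below; the risk thresholds and action names are illustrative assumptions, not a description of any real product.

```typescript
// Hypothetical escalation policy for "algorithmic parenting": stay
// hands-off by default, inform a parent at elevated risk, and block
// content only in extreme cases, mirroring the privacy-preserving
// design the article argues for.
type Action = "allow" | "notifyParent" | "blockContent";

function escalate(riskScore: number): Action {
  if (riskScore >= 0.9) return "blockContent"; // extreme risk: intervene immediately
  if (riskScore >= 0.6) return "notifyParent"; // elevated risk: keep the parent informed
  return "allow"; // default: preserve the child's privacy and access to information
}

console.log(escalate(0.95)); // "blockContent"
```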

Ebers on Civil Liability for Autonomous Vehicles in Germany

Martin Ebers (Humboldt University of Berlin – Faculty of Law; University of Tartu, School of Law) has posted “Civil Liability for Autonomous Vehicles in Germany” on SSRN. Here is the abstract:

This paper deals with civil liability for autonomous driving under German law and is structured as follows: after an introduction (I.), the paper provides an overview of the current legal framework (II.), followed by an analysis of the liability of drivers (III.), technical supervisors (IV.), vehicle keepers (V.), manufacturers (VI.), and IT service providers (VII.). An additional section deals with the question of how autonomous vehicles would be integrated into the insurance system (VIII.), and the last section draws some final conclusions (IX.).

Pretelli on Internet Platform Users as Weaker Parties

Ilaria Pretelli (Swiss Institute of Comparative Law; University of Urbino) has posted “A Humanist Approach to Private International Law and the Internet: A Focus on Platform Users as Weaker Parties” (Yearbook of Private International Law, Volume 22 (2020/2021), pp. 201-243) on SSRN. Here is the abstract:

The apps and platforms that we use on a daily basis have increased the effective enjoyment of many fundamental rights enshrined in our constitutions and universal declarations, which were drafted to guarantee a fairer distribution of the benefits of human progress among the population. The present article argues that a humanist approach to private international law can bring just solutions to disputes arising from digital interactions. It analyses cases where platform users are pitted against a digital platform and cases where platform users are pitted against each other. For the first set of cases, an enhanced protection of digital platform users, as weaker parties, points to an expansion of the principle of favor laesi in tortious liability and to a restriction of the operation of party autonomy by clickwrapping, in consideration of the gross inequality of bargaining power that also exists in business-to-platform contracts. In the second set of cases, reliable guidance is offered by the principles of effectiveness and of protection of vulnerable parties. Exploiting the global reach of the internet to improve the situation of crowdworkers worldwide is also considered a task to which the ILO should seriously commit. In line with the most recent achievements in human rights due diligence, protection clauses pointing to destination-based labour standards would be a welcome step forward. The principle of effectiveness justifies the enforcement of court decisions in cyberspace, which has become a political and juridical necessity.

Guerra, Parisi & Pi on Liability for Robots II: An Economic Analysis

Alice Guerra (University of Bologna – Department of Economics), Francesco Parisi (University of Minnesota – Law School), and Daniel Pi (University of Maine – School of Law) have posted “Liability for Robots II: An Economic Analysis” (Journal of Institutional Economics 2021) on SSRN. Here is the abstract:

This is the second of two companion papers that discuss accidents caused by robots. In the first paper (Guerra et al., 2021), we presented the novel problems posed by robot accidents and assessed the related legal approaches and institutional opportunities. In this paper, we build on that analysis to consider a novel liability regime, which we refer to as the “manufacturer residual liability” rule. This rule makes operators and victims liable for accidents due to their negligence—hence incentivizing them to act diligently—and makes manufacturers residually liable for non-negligent accidents—hence incentivizing them to make optimal investments in R&D for robots’ safety. In turn, the rule will bring down the price of safer robots, driving unsafe technology out of the market. Thanks to the percolation effect of residual liability, operators will also be incentivized to adopt optimal activity levels in robots’ usage.

Recommended.
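The rule’s allocation logic is easy to state; here is a minimal sketch under the simplifying assumption that at most one party was negligent.

```typescript
// Sketch of the "manufacturer residual liability" rule as the abstract
// describes it: negligence puts the loss on the negligent operator or
// victim, and non-negligent accidents fall residually on the
// manufacturer, pricing safety into the robot itself.
type Party = "operator" | "victim" | "manufacturer";

function bearsTheLoss(operatorNegligent: boolean, victimNegligent: boolean): Party {
  if (operatorNegligent) return "operator"; // negligence rule: incentive to act diligently
  if (victimNegligent) return "victim"; // negligence rule: incentive to act diligently
  return "manufacturer"; // residual liability: incentive to invest in safety R&D
}

console.log(bearsTheLoss(false, false)); // "manufacturer"
```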

Pazos on The Case for a (European?) Law of Reputational Feedback Systems

Ricardo Pazos (Universidad Autónoma de Madrid – Faculty of Law) has posted “The Case for a (European?) Law of Reputational Feedback Systems” (InDret, Vol. 3, 2021) on SSRN. Here is the abstract:

Reputational feedback systems are essential in the digital economy, as tools to build trust between traders and consumers and to help the latter make better choices. Although the number of platforms using such systems is growing, several weaknesses undermine their reliability, endangering the proper functioning of the market. In this context, it might be convenient to create a “law of reputational feedback systems”: a comprehensive set of rules specifically aimed at online reviews and ratings, possibly enacted at the European Union level with the goal of contributing to the development of the digital single market. This paper aims to foster a debate on the matter. First, it presents the importance of reputational feedback systems and the weaknesses that affect them. Then, it addresses the fragmentation argument that favours legal harmonisation, without forgetting that harmonisation has downsides, too. Afterwards, some possible rules are envisaged, drawing on existing academic and institutional initiatives and norms. Finally, to balance the discussion, the paper also offers arguments supporting the view that further regulating reputational feedback systems, or at least doing so at the European level, could be a step in the wrong direction.

Interesting expansion of the common law of reputational torts.