Gunawan, Santos & Kamara on Redress for Dark Patterns Privacy Harms

Johanna Gunawan (Northeastern University Khoury College of Computer Sciences), Cristiana Santos (Utrecht University), and Irene Kamara (Tilburg University – Tilburg Institute for Law, Technology, and Society (TILT); Free University of Brussels (LSTS)) have posted “Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions” on SSRN. Here is the abstract:

Internet users are subjected to incessant demands for attention in a noisy digital world. Countless inputs compete to be clicked, seen, and interacted with, and many deploy tactics drawn from behavioral psychology to ‘nudge’ users into doing what they want. Some nudges are benign; others deceive, steer, or manipulate users, as a U.S. FTC Commissioner puts it, “into behavior that is profitable for an online service, but often harmful to [us] or contrary to [our] intent”. These tactics are dark patterns: manipulative and deceptive interface designs deployed at scale in more than ten percent of global shopping websites and more than ninety-five percent of the most popular apps.

The literature discusses several types of harms caused by dark patterns, including material harms, such as financial loss and anticompetitive effects, and non-material harms, such as privacy invasion, time loss, addiction, cognitive burden, loss of autonomy, and emotional or psychological distress. Through a comprehensive review of this scholarship and a case law analysis conducted by our interdisciplinary team of HCI and legal scholars, this paper investigates whether harms caused by such dark patterns could give rise to redress for the individuals subjected to them, using consent interactions and the GDPR consent requirements as a case study.

Campbell Moriarty & McCluan on The Death of Eyewitness Testimony and the Rise of Machine

Jane Campbell Moriarty (Duquesne University – School of Law) & Erin McCluan (same) have posted “Foreword to the Symposium, The Death of Eyewitness Testimony and the Rise of Machine” on SSRN. Here is the abstract:

Artificial intelligence, machine evidence, and complex technical evidence are replacing human-skill-based evidence in the courtroom. This may be an improvement on mistaken eyewitness identification and unreliable forensic science evidence, which are both causes of wrongful convictions. Thus, the move toward more machine-based evidence, such as DNA, biometric identification, cell service location information, neuroimaging, and other specialties, may provide better evidence. But with such evidence come different problems, including concerns about proper cross-examination and confrontation, reliability, inscrutability, human bias, constitutional concerns, and both philosophic and ethical questions.

Gipson Rankin on Atuahene’s “Stategraft” and the Implications of Unregulated Artificial Intelligence

Sonia Gipson Rankin (University of New Mexico – School of Law) has posted “The MiDAS Touch: Atuahene’s “Stategraft” and the Implications of Unregulated Artificial Intelligence” on SSRN. Here is the abstract:

Professor Bernadette Atuahene’s article, Corruption 2.0, develops the new theoretical conception of “stategraft,” which provides a term for a disturbing practice by state agents. Professor Atuahene notes that when state agents transfer property from persons to the state in violation of the state’s own laws or of basic human rights, the practice sits at the intersection of illegal behavior and public profit. Although these extractions can be quantified, as in many other examples of state corruption, the criminality of the state practice goes undetected, and it is compounded when the state uses artificial intelligence to illegally extract resources from people. This essay applies stategraft to an algorithm implemented in Michigan that falsely accused unemployment benefit recipients of fraud and illegally took their resources.

The software, the Michigan Integrated Data Automated System (“MiDAS”), was supposed to detect unemployment fraud and automatically charge people with misrepresentation. The agency erroneously charged over 37,000 people, taking their tax refunds and garnishing their wages. It took the state years to repay those affected, often only after disastrous fallout from years spent trying to clear their records and reclaim their money.

This essay examines the MiDAS situation using the elements of Atuahene’s stategraft as a basis. It shows how Michigan violated its own state laws and basic human rights, and how this unfettered use of artificial intelligence can be seen as a corrupt state practice.

Ho on Countering Personalized Speech

Leon G. Ho (University of North Carolina Law) has posted “Countering Personalized Speech” (Northwestern Journal of Technology and Intellectual Property, Vol. 20, Issue 1, 2022) on SSRN. Here is the abstract:

Social media platforms use personalization algorithms to make content curation decisions for each end user. These “personalized instances of content curation” (“PICCs”) are essentially speech conveying a platform’s predictions on content relevance for each end user. Yet PICCs are causing some of the worst problems on the internet. First, they facilitate the precipitous spread of mis- and disinformation by exploiting the very same biases and insecurities that drive end user engagement with such content in the first place. Second, they exacerbate social media addiction and related mental health harms by leveraging users’ affective needs to drive engagement to greater and greater heights. Lastly, they help erode end user privacy and autonomy, serving as both sources of and incentives for data collection.

As with any harmful speech, the solution is often counterspeech. Free speech jurisprudence considers counterspeech the most speech-protective weapon to combat false or harmful speech. Thus, to combat problematic PICCs, social media platforms, policymakers, and other stakeholders should embolden end users’ counterspeech capabilities in the digital public sphere.

One way to implement this solution is through platform-provided end user personalization tools. The prevailing end user personalization inputs prevent users from mounting effective countermeasures against problematic PICCs, since on most, if not all, major social media platforms these inputs confer only limited ex post control over PICCs. To rectify this deficiency and empower end users, I make several proposals along key regulatory modalities to move end user personalization toward more robust ex ante capabilities that filter by content type and characteristics, rather than offering only ad hoc filters on specific pieces of content and content creators.
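To make the distinction concrete, here is a minimal sketch. It is my own illustration, not the article's; the Post, UserFilterPolicy, and rank_feed names are hypothetical. It contrasts today's ex post controls (suppressing specific posts or creators after exposure) with an ex ante, content-type-level filter applied before the platform's relevance ranking.

```python
# Hypothetical sketch contrasting ex post and ex ante end user personalization controls.
from dataclasses import dataclass, field

@dataclass
class Post:
    creator: str
    content_type: str                                   # e.g. "political", "entertainment"
    characteristics: set = field(default_factory=set)   # e.g. {"sensational"}
    relevance_score: float = 0.0

@dataclass
class UserFilterPolicy:
    # Ex post control: suppress specific creators after their content appears.
    blocked_creators: set = field(default_factory=set)
    # Ex ante control: exclude whole content types/characteristics before ranking.
    excluded_types: set = field(default_factory=set)
    excluded_characteristics: set = field(default_factory=set)

def rank_feed(candidates: list[Post], policy: UserFilterPolicy) -> list[Post]:
    """Apply the user's policy before the platform's relevance ranking."""
    eligible = [
        p for p in candidates
        if p.content_type not in policy.excluded_types
        and not (p.characteristics & policy.excluded_characteristics)
        and p.creator not in policy.blocked_creators
    ]
    return sorted(eligible, key=lambda p: p.relevance_score, reverse=True)

# Usage: the user opts out of a whole category up front, rather than
# blocking individual posts one by one after exposure.
policy = UserFilterPolicy(excluded_types={"political"},
                          excluded_characteristics={"sensational"})
feed = rank_feed([Post("a", "political", {"sensational"}, 0.9),
                  Post("b", "entertainment", set(), 0.4)], policy)
print([p.creator for p in feed])   # ['b']
```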

Fagan on The Un-Modeled World: Law and the Limits of Machine Learning

Frank Fagan (South Texas College of Law; EDHEC Augmented Law Institute) has posted “The Un-Modeled World: Law and the Limits of Machine Learning” (MIT Computational Law Report, Vol. 4 (Forthcoming 2022)) on SSRN. Here is the abstract:

There is today a pervasive concern that humans will not be able to keep up with accelerating technological progress in law and will become objects of sheer manipulation. Those who believe that human objectification is on the horizon offer solutions that require humans to take control, mostly by means of self-awareness and the development of will. Among others, these strategies are present in Heidegger, Marcuse, and Habermas, as discussed here. But these solutions are not the only way. Technology itself offers a solution on its own terms. Machines can only learn if they can observe patterns, and those patterns must occur in sufficiently stable environments. Without detectable regularities and contextual invariance, machines remain prone to error. Yet humans innovate and things change. This means that innovation operates as a self-corrective—a built-in feature that limits the ability of technology to fully objectify human life and law error-free. Fears of complete technological ascendance in law and elsewhere are therefore exaggerated, though interesting intermediate states are likely to obtain. Progress will proceed apace in closed legal domains, but models will require continual adaptation and updating in legal domains where human innovation and openness prevail.
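Fagan's central mechanism, that a model captures only the regularities of a stable training environment and degrades when innovation shifts that environment, can be shown with a toy simulation. This is my own illustration, not the paper's; the data, feature, and threshold values are invented.

```python
# Toy illustration of a model learned in a stable regime losing accuracy after the regime shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def cases(n, cutoff):
    """Simulate legal outcomes driven by one feature with a regime 'cutoff'."""
    x = rng.uniform(0, 10, size=(n, 1))
    y = (x[:, 0] > cutoff).astype(int)
    return x, y

# Stable environment: outcomes turn on a threshold of 5. The model learns it well.
X_old, y_old = cases(2000, cutoff=5.0)
model = LogisticRegression().fit(X_old, y_old)
print("accuracy, stable world:", model.score(*cases(500, cutoff=5.0)))

# Innovation changes the underlying practice: the operative threshold moves.
# The un-retrained model now mispredicts a chunk of the new cases.
print("accuracy, shifted world:", model.score(*cases(500, cutoff=7.5)))
```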

Ohm & Kim on The Internet of Things

Paul Ohm (Georgetown University Law Center) and Nathaniel Kim have posted “Legacy Switches: A Proposal to Protect Privacy, Security, Competition, and the Environment from the Internet of Things” (Ohio State Law Journal, Forthcoming) on SSRN. Here is the abstract:

The Internet of Things (IoT) promises us a life of automated convenience. Bright and shiny—if cheaply made and plasticky—“smart” thermostats, doorbells, cameras, and fridges carry out the functions once performed by “dumb” equivalents but in an automated, connected, and generally “better” way. This convenience comes at a significant cost. IoT devices listen to, record, and share our behavior, habits, speech, social interactions, and location minute-by-minute, 24/7. All of this information feeds a growing surveillance economy, as this data is bought, sold, and analyzed to predict our behavior, subject us to targeted advertising, and manipulate our actions. Many cheap IoT gadgets are developed on a shoestring budget, leaving them insecure and vulnerable to attack. Malicious actors (and their automated computer programs) target IoT devices, breaking into them to spy on their owners or enlisting them into massive botnets used to cripple websites or critical infrastructure. These problems magnify over time, as IoT vendors focus on selling the next version of the device rather than on securing the preexisting installed base.

Consumers interested in protecting themselves from these harms may decide to replace outdated devices with newer, not-quite-yet-obsolete versions. Doing this does nothing to slow the growth of the surveillance economy and may even exacerbate it, as new devices tend to listen and record more than the models they replace. And even though replacing IoT devices can temporarily forestall security harms, asking consumers to replace all of their smart devices every few years introduces different harms. It harms the environment, filling our landfills with nonbiodegradable plastic housings and circuit parts which leach toxic materials into our air, soil, and water. It forces consumers to waste time, attention, and money tending to hard-wired, infrastructural devices that in the past would have lasted for decades. It compounds the harms of inequality, as those with more disposable income and connections to electricians and contractors have access to better security and privacy than those with less.

We propose a novel, simple, and concrete solution to address all of these problems. Every IoT device manufacturer should build into their devices a switch called a “legacy switch.” When the consumer flips this switch, it should disable any smart feature that contributes to security or privacy risks. A legacy switch will render a smart thermostat just a thermostat and a smart doorbell just a doorbell. The switch will disable microphones, sensors, and wireless connectivity. Any user should find it easy to use and easy to verify whether the switch has been toggled.
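As a rough illustration of how such a switch might behave, consider the following sketch. It is my own, not the authors' specification; the SmartThermostat class and its feature names are hypothetical. The point is that a single toggle forces every sensor- and network-backed feature off, is easy to verify, and leaves the dumb core function intact.

```python
# Hypothetical sketch of firmware behavior behind a "legacy switch".
class SmartThermostat:
    def __init__(self):
        self.legacy_mode = False          # state of the physical switch
        self.features = {"microphone": True, "occupancy_sensor": True,
                         "wifi": True, "cloud_telemetry": True}

    def flip_legacy_switch(self, on: bool) -> None:
        """Physical toggle: when on, force every smart feature off."""
        self.legacy_mode = on
        if on:
            self.features = {name: False for name in self.features}

    def status(self) -> dict:
        """Easy verification, as the proposal requires: report what is disabled."""
        return {"legacy_mode": self.legacy_mode, **self.features}

    def set_temperature(self, celsius: float) -> str:
        # The dumb core function keeps working regardless of the switch.
        return f"target set to {celsius:.1f} C"

device = SmartThermostat()
device.flip_legacy_switch(True)
print(device.status())                  # all smart features report False
print(device.set_temperature(20.0))     # thermostat still functions
```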

This Article proposes legacy switches, elaborates key implementation details for any law requiring them, and connects them to the ongoing conversation about power, privacy, and platforms. The proposal to require legacy switches should be seen as a small but meaningful step toward taming the unchecked and destructive tendencies of the new networked economy.

Ho, Huang & Chang on Machine Learning Comparative Law

Han-Wei Ho (IIAS), Patrick Chung-Chia Huang (U Chicago Law, student), and Yun-chien Chang (IIAS) have posted “Machine Learning Comparative Law” (Cambridge Handbook of Comparative Law, Siems and Yap eds. (2023)) on SSRN. Here is the abstract:

Comparative lawyers are interested in similarities between legal systems. Artificial intelligence offers a new approach to understanding legal families. This chapter introduces machine-learning methods useful in empirical comparative law, a nascent field. This chapter provides a step-by-step guide to evaluating and developing legal family theories using machine-learning algorithms. We briefly survey existing empirical comparative law data sets, then demonstrate how to visually explore them using a data set one of us compiled. We introduce popular and powerful algorithms of service to comparative law scholars, including dissimilarity coefficients, dimension reduction, clustering, and classification. The unsupervised machine-learning method enables researchers to develop a legal family scheme without interference from existing schemes developed by human intelligence, thus providing a powerful tool to test comparative law theories. The supervised machine-learning method enables researchers to start with a baseline scheme (developed by human or artificial intelligence) and then extend it to previously unstudied jurisdictions.
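For readers curious what these workflows look like in code, here is a toy sketch of the two approaches the chapter describes: unsupervised clustering to induce a legal-family scheme from coded legal features, and supervised classification to extend a baseline scheme to unstudied jurisdictions. The jurisdictions, coded features, and family labels below are invented for illustration, not drawn from the authors' data set.

```python
# Toy sketch: dissimilarity-based clustering and classification of jurisdictions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neighbors import KNeighborsClassifier

jurisdictions = ["A", "B", "C", "D", "E", "F"]
# Rows: jurisdictions; columns: binary-coded legal rules (hypothetical data).
X = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 1, 1, 1, 0]])

# Unsupervised: dissimilarity coefficients plus hierarchical clustering
# induce a family scheme with no reference to pre-existing taxonomies.
dist = pdist(X, metric="jaccard")            # pairwise dissimilarity
tree = linkage(dist, method="average")
families = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(jurisdictions, families)))

# Supervised: start from a baseline scheme for the studied jurisdictions
# and classify previously unstudied ones into it.
baseline_labels = ["common", "common", "civil", "civil"]   # labels for A-D only
clf = KNeighborsClassifier(n_neighbors=1).fit(X[:4], baseline_labels)
print(dict(zip(jurisdictions[4:], clf.predict(X[4:]))))
```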

Colangelo on European Proposal for a Data Act – A First Assessment

Giuseppe Colangelo (University of Basilicata; Stanford Law School; LUISS) has posted “European Proposal for a Data Act – A First Assessment” (CERRE Evaluation Paper 2022) on SSRN. Here is the abstract:

On 23 February 2022, the European Commission unveiled its proposal for a Data Act (DA). As declared in the Impact Assessment, the DA complements two other major instruments shaping the European single market for data, namely the Data Governance Act and the Digital Markets Act (DMA), and is a key pillar of the European Strategy for Data, in which the Commission announced the establishment of EU-wide common, interoperable data spaces in strategic sectors to overcome legal and technical barriers to data sharing.

To contribute to the current policy debate, the paper provides a first assessment of the tabled DA and suggests possible improvements for the ongoing legislative negotiations.

Marchant on Swords and Shields: Impact of Private Standards in Technology-Based Liability

Gary E. Marchant (Arizona State University – College of Law) has posted “Swords and Shields: Impact of Private Standards in Technology-Based Liability” on SSRN. Here is the abstract:

Private voluntary standards are playing an ever greater role in the governance of many emerging technologies, including autonomous vehicles. Government regulation has lagged due to the ‘pacing problem’, in which technology moves faster than government regulation, and regulators lack the first-hand information that rests mostly in the hands of industry and other experts in the field who often participate in standard-setting activities. Consequently, private standards have moved beyond historical tasks such as interoperability to produce quasi-governmental policy specifications that address the risk management, governance, and privacy risks of emerging technologies. As the federal government has prudently concluded that promulgating government standards for autonomous vehicles would be premature at this time and may do more harm than good, private standards have become the primary governance tool for these vehicles. A number of standard-setting organizations, including the SAE, ISO, UL, and IEEE, have stepped forward to adopt a series of interlocking private standards that collectively govern autonomous vehicle safety. While these private standards were not developed with litigation in mind, they could provide a useful benchmark for judges and juries to use in evaluating the safety of autonomous vehicles and whether compensatory and punitive damages are appropriate after an injury-causing accident involving an autonomous vehicle. Drawing on several decades of relevant case law, this paper argues that a manufacturer’s conformance with private standards for autonomous vehicle safety should be a partial shield against liability, whereas failure to conform to such standards should be a partial sword used by plaintiffs to show lack of due care.

Kaminski on Regulating the Risks of AI

Margot Kaminski (U Colorado Law School; Yale ISP; Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Regulating the Risks of AI” (Boston University Law Review, Vol. 103, forthcoming 2023) on SSRN. Here is the abstract:

Companies and governments now use Artificial Intelligence (AI) in a wide range of settings. But using AI leads to well-known risks—that is, not yet realized but potentially catastrophic future harms that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems.

This Article observes that constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Moreover, there are at least four models for risk regulation, each with divergent goals and methods. Emerging conflicts over AI risk regulation illustrate the tensions that arise when regulators employ one model of risk regulation while stakeholders call for another.

This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulties of non-quantifiable harms, and the dearth of mechanisms for public or stakeholder input).