Gunawan, Santos & Kamara on Redress for Dark Patterns Privacy Harms

Johanna Gunawan (Northeastern University Khoury College of Computer Sciences), Cristiana Santos (Utrecht University), and Irene Kamara (Tilburg University – Tilburg Institute for Law, Technology, and Society (TILT); Free University of Brussels (LSTS)) have posted “Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions” on SSRN. Here is the abstract:

Internet users are constantly subjected to incessant demands for attention in a noisy digital world. Countless inputs compete for the chance to be clicked, to be seen, and to be interacted with, and they can deploy tactics that take advantage of behavioral psychology to ‘nudge’ users into doing what they want. Some nudges are benign; others deceive, steer, or manipulate users, as a U.S. FTC Commissioner put it, “into behavior that is profitable for an online service, but often harmful to [us] or contrary to [our] intent”. These tactics are dark patterns: manipulative and deceptive interface designs deployed at scale in more than ten percent of global shopping websites and more than ninety-five percent of the most popular apps.

The literature discusses several types of harms caused by dark patterns. These include harms of a material nature, such as financial harms or anticompetitive effects, as well as harms of a non-material nature, such as privacy invasion, time loss, addiction, cognitive burdens, loss of autonomy, and emotional or psychological distress. Through a comprehensive review of this scholarship and a case law analysis conducted by our interdisciplinary team of HCI and legal scholars, this paper investigates whether harms caused by such dark patterns could give rise to redress for individuals subject to dark pattern practices, using consent interactions and the GDPR consent requirements as a case study.

Campbell Moriarty & McCluan on The Death of Eyewitness Testimony and the Rise of Machine

Jane Campbell Moriarty (Duquesne University – School of Law) & Erin McCluan (same) have posted “Forward to the Symposium, The Death of Eyewitness Testimony and the Rise of Machine” on SSRN. Here is the abstract:

Artificial intelligence, machine evidence, and complex technical evidence are replacing human-skill-based evidence in the courtroom. This may be an improvement on mistaken eyewitness identification and unreliable forensic science evidence, which are both causes of wrongful convictions. Thus, the move toward more machine-based evidence, such as DNA, biometric identification, cell service location information, neuroimaging, and other specialties, may provide better evidence. But with such evidence come different problems, including concerns about proper cross-examination and confrontation, reliability, inscrutability, human bias, constitutional concerns, and both philosophic and ethical questions.

Gipson Rankin on Atuahene’s “Stategraft” and the Implications of Unregulated Artificial Intelligence

Sonia Gipson Rankin (University of New Mexico – School of Law) has posted “The MiDAS Touch: Atuahene’s “Stategraft” and the Implications of Unregulated Artificial Intelligence” on SSRN. Here is the abstract:

Professor Bernadette Atuahene’s article, Corruption 2.0, develops the new theoretical conception of “stategraft,” which provides a term for a disturbing practice by state agents. Professor Atuahene observes that when state agents transfer property from persons to the state in violation of the state’s own laws or basic human rights, the practice sits at the intersection of illegal behavior and public profit. Although such takings can be quantified in many other examples of state corruption, the criminality of state practice often goes undetected, and it is compounded when the state uses artificial intelligence to illegally extract resources from people. This essay applies stategraft to an algorithm implemented in Michigan that falsely accused unemployment benefit recipients of fraud and illegally took their resources.

The software, the Michigan Integrated Data Automated System (“MiDAS”), was supposed to detect unemployment fraud, but it automatically charged people with misrepresentation. The agency erroneously charged over 37,000 people, seizing their tax refunds and garnishing their wages. It took the state years to repay those affected, and repayment often came only after disastrous fallout from years spent trying to clear their records and reclaim their money.

This essay examines the MiDAS situation using the elements of Atuahene’s stategraft as a framework. It shows how Michigan violated its own laws and basic human rights, and how this unfettered use of artificial intelligence can be seen as a corrupt state practice.