Download of the Week

The Download of the Week is “Privacy Harms” by Danielle Keats Citron (University of Virginia School of Law) and Daniel J. Solove (George Washington University Law School). Here is the abstract:

Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress’s judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations.

The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm.

Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations do result in negative consequences, the effects are often small – frustration, aggravation, and inconvenience – and dispersed among a large number of people. When these minor harms are done at a vast scale by a large number of actors, they aggregate into more significant harms to people and society. But these harms do not fit well with existing judicial understandings of harm.

This article makes two central contributions. The first is the construction of a road map for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of many different types, which to date have been recognized by courts in inconsistent ways. We set forth a typology of privacy harms that elucidates why certain types of privacy harms should be recognized as cognizable. The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of law enforcement goals and remedies. For example, existing methods of litigating privacy cases, such as class actions, often enrich lawyers but fail to achieve meaningful deterrence. Because the personal data of tens of millions of people could be involved, even small actual damages could put companies out of business without providing much of value to each individual. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

Cyphert on The First Step Act and Algorithmic Prediction of Risk

Amy Cyphert (WVU College of Law) has posted “Reprogramming Recidivism: The First Step Act and Algorithmic Prediction of Risk” (Seton Hall Law Review, Vol. 51, 2020) on SSRN. Here is the abstract:

The First Step Act, a seemingly miraculous bipartisan criminal justice reform bill, was signed into law in late 2018. The Act directed the Attorney General to develop a risk and needs assessment tool that would effectively determine who would be eligible for early release based on an algorithmic prediction of recidivism. The resulting tool—PATTERN—was released in the summer of 2019 and quickly updated in January of 2020. It was immediately put to use in an unexpected manner, helping to determine who was eligible for early release during the COVID-19 pandemic. It is now the latest in a growing list of algorithmic recidivism prediction tools, tools that first came to mainstream notice with critical reporting about the COMPAS sentencing algorithm.

This Article evaluates PATTERN, both in its development as well as its still-evolving implementation. In some ways, the PATTERN algorithm represents tentative steps in the right direction on issues like transparency, public input, and use of dynamic factors. But PATTERN, like many algorithmic decision-making tools, will have a disproportionate impact on Black inmates; it provides fewer opportunities for inmates to reduce their risk score than it claims and is still shrouded in some secrecy due to the government’s decision to dismiss repeated calls to release more information about it. Perhaps most perplexing, it is unclear whether the tool actually advances accuracy with its predictions. This Article concludes that PATTERN is a decent first step, but it still has a long way to go before it is truly reformative.

Azzutti, Ringe & Stiehl on Machine Learning, Market Manipulation and Collusion on Capital Markets

Alessio Azzutti (Institute of Law & Economics – University of Hamburg), Wolf-Georg Ringe (University of Hamburg – Institute of Law & Economics, University of Oxford – Faculty of Law, European Corporate Governance Institute (ECGI)), and H. Siegfried Stiehl (University of Hamburg – Department of Informatics) have posted “Machine Learning, Market Manipulation and Collusion on Capital Markets: Why the ‘Black Box’ matters” on SSRN. Here is the abstract:

This paper offers a novel perspective on the implications of increasingly autonomous and “black box” algorithms, within the ramifications of algorithmic trading, for the integrity of capital markets. Artificial intelligence (AI), and particularly its subfield of machine learning (ML), has gained immense popularity among the general public and achieved tremendous success in many real-life applications by delivering vast efficiency gains. In the financial trading domain, ML can augment human capabilities in price prediction, dynamic portfolio optimization, and other financial decision-making tasks. However, thanks to constant progress in ML technology, the prospect of delegating operational tasks and even decision-making to increasingly capable and autonomous agents is now beyond mere imagination, opening up the possibility of (truly) autonomous trading agents in the near future.

Given these spectacular developments, this paper argues that such autonomous algorithmic traders may pose significant risks to market integrity, independently of their human experts, thanks to the self-learning capabilities offered by state-of-the-art and innovative ML methods. Using the proprietary trading industry as a case study, we explore emerging threats to the application of established market abuse laws in the event of algorithmic market abuse, taking an interdisciplinary stance spanning financial regulation, law & economics, and computational finance. Specifically, our analysis focuses on two emerging market abuse risks posed by autonomous algorithms: market manipulation and “tacit” collusion. We explore how likely they are to arise on global capital markets and evaluate the related social harm as forms of market failure.

With these new risks in mind, this paper questions the adequacy of existing regulatory frameworks and enforcement mechanisms, as well as current legal rules on the governance of algorithmic trading, to cope with increasingly autonomous and ubiquitous algorithmic trading systems. It shows how the “black box” nature of specific ML-powered algorithmic trading strategies can subvert existing market abuse laws, which are based upon traditional liability concepts and tests (such as “intent” and “causation”). In conclusion, to address the shortcomings of the present legal framework, we develop a number of guiding principles to assist legal and policy reform in the spirit of promoting and safeguarding market integrity and safety.

Creel & Hellman on The Algorithmic Leviathan

Kathleen Creel (Stanford University) and Deborah Hellman (University of Virginia School of Law) have posted “The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems” on SSRN. Here is the abstract:

This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what “arbitrariness” means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm, or different algorithms based on the same data, are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain why this systemic exclusion is of moral concern and to offer a solution to address it.