Levesque on Applying the UN Guiding Principles on Business and Human Rights to Online Content Moderation

Maroussia Levesque (Harvard Law School, Berkman Klein Center for Internet & Society) has posted “Applying the UN Guiding Principles on Business and Human Rights to Online Content Moderation” on SSRN. Here is the abstract:

What do Rembrandt and social media platforms have in common? Light. Both judiciously use it to emphasize certain aspects and relegate others to obscurity, leveraging darkness to highlight flattering features.

This article assesses the accountability of social media platforms with regard to content moderation. It probes voluntary measures like the Facebook Oversight Board and transparency reports for similarities with the chiaroscuro painting technique. These two self-governance initiatives shed light on fairly uncontroversial aspects of content moderation, obscuring more problematic areas in the process. In that sense, chiaroscuro and self-governance actually travel in opposite directions; chiaroscuro uses darkness to create light, while self-governance uses light to create darkness.

The United Nations Guiding Principles on Business and Human Rights (UNGP) could fill self-governance gaps by activating a clearer link between companies and user well-being. The notions of access to remedy and due diligence support the case for harmonizing accountability measures across moderation practices. External oversight should cover a broader array of moderation decisions, beyond individual content takedowns. It should also approach harms holistically, integrating privacy, equality, and other human rights dimensions into the analysis. Transparency reports should provide more granular information about platforms’ informal collaboration with states.

Download of the Week

The Download of the Week is “Artificial Intelligence and the Rule of Law” by Aziz Z. Huq (University of Chicago Law School). Here is the abstract:

This book chapter examines an interaction between technological shocks and the “rule of law.” It does so by analyzing the implications of a class of loosely related computational technologies termed “machine learning” (ML) or, rather less precisely, “artificial intelligence” (AI). These tools are presently employed in the pre-adjudicative phase of enforcing the laws, for example facilitating the selection of targets for tax and regulatory investigations.

Two general questions respecting the rule of law arise from these developments. The more immediately apparent one is whether these technologies, when integrated into the legal system, are themselves compatible with or in conflict with the rule of law. Depending on which conception of the rule of law is deployed, the substitution of machine decision-making for human judgment can kindle objections based on transparency, predictability, bias, and procedural fairness. A first purpose of this chapter is to examine ways in which this technological shock poses such challenges. The interaction between the normative ambitions of the rule of law and ML technologies, I will suggest, is complex and ambiguous. In many cases, moreover, the more powerful normative objection to technology arises less from the bare fact of its adoption, and more from the socio-political context in which that adoption occurred and the dynamic effect of technology on background disparities of power and resources. ML’s adoption likely exacerbates differences of social power and status in ways that place the rule of law under strain.

The second question posed by new AI and ML technologies has also not been extensively discussed. Yet it is perhaps of more profound significance. Rather than focusing on the compliance of new technologies with rule-of-law values, it hinges on the implications of ML and AI technologies for how the rule of law itself is conceived or implemented. Many of the canonical discussions of the rule of law—including Dicey’s and Waldron’s—entangle a conceptual definition and a series of institutional entailments. Many assume that the rule of law requires the specific institutional form of courts. They presumably also posit human judges exercising discretion and making judgments as necessary rather than optional. For these institutional entailments of the rule of law, a substitution of ML technologies for human judgment likely has destabilizing implications. It sharpens the question whether the abstract concept of the rule of law needs to be realized by a particular institutional form. It raises the question whether technological change might instead demand amendments to the relationship between the concept(s) and the practice of the rule of law. For pre-existing normative concepts and their practical, institutional correlates may no longer hold under conditions of technological change. At the very least, then, specification of institutional forms of the rule of law under such conditions raises challenges not just as a practical matter but also in terms of legal theory.

Wagner & Murillo on Exploring the Accountability Challenges in Environmental and Public Health Regulation

Wendy E. Wagner (University of Texas School of Law) and Martin Murillo (Institute of Electrical and Electronics Engineers) have posted “Is the Administrative State Ready for Big Data?: Exploring the Accountability Challenges in Environmental and Public Health Regulation” (in Data and Democracy (Knight First Amendment Institute, Columbia University, 2021)) on SSRN. Here is the abstract:

In this contribution to a symposium on “Data and Democracy” hosted by the Knight First Amendment Institute, we explore the administrative state’s growing use of complex statistical models and the challenges this trend poses for accountable administrative governance. We document how agencies’ use of big data can obscure critical framing decisions underlying policies, hide subjectivity in the design and development of models, and undermine scientific integrity. Legal process requirements should in theory counteract these tendencies to sideline public deliberation and oversight. But in practice, the threat of judicial review, protracted comment processes, and other features of administrative law sometimes tacitly reward agencies for developing and using algorithmic tools that are inaccessible to the public. To address these challenges, we propose standardized, interdisciplinary processes that encourage agency staff to comprehensibly explain—using best practices—the framing, algorithm choices, and procedures used to ensure the integrity of their analyses. We also suggest the use of rewards, such as increased judicial deference for accessible explanations, to promote the development of high-quality, transparent models.

Buccafusco & Garcia on Pay-to-Playlist: The Commerce of Music Streaming

Christopher Buccafusco (Yeshiva University – Benjamin N. Cardozo School of Law) and Kristelia García (University of Colorado Law School) have posted “Pay-to-Playlist: The Commerce of Music Streaming” on SSRN. Here is the abstract:

Payola—sometimes referred to as “pay-for-play”—is the undisclosed payment, or acceptance of payment, in cash or in kind, for promotion of a song, album, or artist. Some form of pay-for-play has existed in the music industry since the 19th century. Most prominently, the term has been used to refer to the practice of record labels paying radio DJs to play certain songs in order to boost their popularity and sales. Since the middle of the 20th century, the FCC has regulated this behavior—ostensibly because of its propensity to harm consumers and competition—by requiring that broadcasters disclose such payments.

As streaming music platforms continue to siphon off listeners from analog radio, a new form of payola has emerged. In this new streaming payola, record labels, artists, and managers simply shift their payments from radio to streaming music platforms like Spotify, YouTube, TikTok, and Instagram. Instead of going to DJs, payments go to playlisters or to influencers who can help promote a song by directing audiences toward it. Because online platforms do not fall under the FCC’s jurisdiction, streaming pay-for-play is not currently regulated at the federal level, although some of it may be subject to state advertising disclosure laws.

In this Article, we describe the history and regulation of traditional forms of pay-for-play, and explain how streaming practices differ. Our account is based, in substantial part, on a novel series of qualitative interviews with music industry professionals. Our analysis finds the normative case for regulating streaming payola lacking: contrary to conventional wisdom, we show that streaming pay-for-play, whether disclosed or not, likely causes little to no harm to consumers, and it may even help independent artists gain access to a broader audience. Given this state of affairs, regulators should proceed with caution to preserve the potential advantages afforded by streaming payola and to avoid further exacerbating extant inequalities in the music industry.

Haupt on Regulating Speech Online: Free Speech Values in Constitutional Frames 

Claudia E. Haupt (Northeastern University School of Law, Yale Information Society Project) has posted “Regulating Speech Online: Free Speech Values in Constitutional Frames” (Washington University Law Review, Forthcoming) on SSRN. Here is the abstract:

Regulating speech online has become a key concern for lawmakers in several countries. But national and supranational regulatory efforts are being met with significant criticism, particularly in transatlantic perspective. Critiques, however, should not fall into the trap of merely relitigating old debates over the permissibility and extent of regulating speech. This Essay suggests that the normative balance between speech protection and speech regulation as a constitutional matter has been struck in different ways around the world, and this fundamental balance is unlikely to be upset by new speech media. To illustrate, this Essay uses a German statute, NetzDG, and its reception in the United States as a case study.

Contemporary U.S. legal discourse on online speech regulation has developed two crucial blind spots. First, in focusing on the domestic understanding of free speech, it doubles down on an outlier position in comparative speech regulation. Second, within First Amendment scholarship, the domestic literature heavily emphasizes the marketplace of ideas, displacing other theories of free speech protection. This emphasis spills over into analyses of online speech. This Essay specifically addresses these blind spots and argues that the combined narrative of free speech near-absolutism and the marketplace theory of speech protection makes a fruitful comparative dialogue difficult. It ends by sketching the contours of a normative approach for evaluating regulatory efforts in light of different constitutional frameworks.

Huq on The Public Trust in Data

Aziz Z. Huq (University of Chicago Law School) has posted “The Public Trust in Data” (Georgetown Law Journal, Vol. 110, 2020) on SSRN. Here is the abstract:

Personal data is no longer just personal. Social networks and pervasive environmental surveillance via cellphones and the ‘internet of things’ extract minute-by-minute details of our behavior and cognition. This information accumulates into a valuable asset. It then circulates among data brokers, targeted advertisers, political campaigns, and even foreign states as fuel for predictive interventions. Rich gains flow to firms well positioned to leverage these new information aggregates. The privacy losses, economic exploitation, structural inequalities, and democratic backsliding produced by personal data economies, however, fall upon society at large.

This Article proposes a novel regulatory intervention to mitigate the harms from transforming personal data into an asset. States and municipalities should create “public trusts” as governance vehicles for their residents’ locational and personal data. An asset in “public trust” is owned and managed by the state. The state can permit its use, and even allow limited alienation, if doing so benefits a broad public rather than a handful of firms. Unique among the legal interventions proposed for new data economies, a public trust for data allows a democratic polity to durably commit to public-regarding management of its informational resources, coupled with judicially enforceable limits on private exploitation and public allocation decisions. The public trust itself is a common-law doctrine of ancient roots, revived in the Progressive Era as an instrument to protect public assets against private exploitation. Both federal and state courts, including the U.S. Supreme Court, have since endorsed a variety of doctrinal formulations. The result today is a rich repertoire of rules and remedies for the management of common property. Personal data, usefully, has many similarities to assets long managed by public trust. And familiar justifications for the creation of a public trust logically extend to personal data. Indeed, municipalities in the United States, Europe, and Canada have started to experiment with limited forms of a public trust in data. Generalizing from those experiences, this Article offers a more general ‘proof of concept’ for how personal data economies can be leashed through the public trust.

Lim on Artificial Intelligence and Antitrust in a Post-Qualcomm World

Daryl Lim (University of Illinois at Chicago John Marshall Law School, Fordham University – Fordham Intellectual Property Institute) has posted “Artificial Intelligence and Antitrust in a Post-Qualcomm World” (Competition Policy International Antitrust Chronicle, December 2020) on SSRN. Here is the abstract:

The questions in FTC v. Qualcomm are consequential in setting competitive norms in an economy anxious about the exercise of market power. Like many other antitrust cases, this one shows symptoms of antitrust law’s inherent vulnerability to ideology stampeding facts and data. Seen as an algorithm, antitrust has had patches and updates over the years. Still, few have recognized the breadth and depth of transformation artificial intelligence (“AI”) can bring to antitrust adjudication. AI enables courts to better render evidence-based decisions. As a tool, it is non-ideological and enables courts to minimize ideological stampeding. AI can also serve as a powerful new partner in making sense of the complex, dynamic, and fast-moving licensing markets many businesses operate in; courts and agencies can harness its ability to model price and innovation effects more precisely. There are challenges to implementing AI, including data accountability, data availability, and data bias. These challenges can be addressed. The time to retool antitrust is now.

Huq on Artificial Intelligence and the Rule of Law

Aziz Z. Huq (University of Chicago Law School) has posted “Artificial Intelligence and the Rule of Law” (Routledge Handbook on the Rule of Law) on SSRN. Here is the abstract:

This book chapter examines an interaction between technological shocks and the “rule of law.” It does so by analyzing the implications of a class of loosely related computational technologies termed “machine learning” (ML) or, rather less precisely, “artificial intelligence” (AI). These tools are presently employed in the pre-adjudicative phase of enforcing the laws, for example facilitating the selection of targets for tax and regulatory investigations.

Two general questions respecting the rule of law arise from these developments. The more immediately apparent one is whether these technologies, when integrated into the legal system, are themselves compatible with or in conflict with the rule of law. Depending on which conception of the rule of law is deployed, the substitution of machine decision-making for human judgment can kindle objections based on transparency, predictability, bias, and procedural fairness. A first purpose of this chapter is to examine ways in which this technological shock poses such challenges. The interaction between the normative ambitions of the rule of law and ML technologies, I will suggest, is complex and ambiguous. In many cases, moreover, the more powerful normative objection to technology arises less from the bare fact of its adoption, and more from the socio-political context in which that adoption occurred and the dynamic effect of technology on background disparities of power and resources. ML’s adoption likely exacerbates differences of social power and status in ways that place the rule of law under strain.

The second question posed by new AI and ML technologies has also not been extensively discussed. Yet it is perhaps of more profound significance. Rather than focusing on the compliance of new technologies with rule-of-law values, it hinges on the implications of ML and AI technologies for how the rule of law itself is conceived or implemented. Many of the canonical discussions of the rule of law—including Dicey’s and Waldron’s—entangle a conceptual definition and a series of institutional entailments. Many assume that the rule of law requires the specific institutional form of courts. They presumably also posit human judges exercising discretion and making judgments as necessary rather than optional. For these institutional entailments of the rule of law, a substitution of ML technologies for human judgment likely has destabilizing implications. It sharpens the question whether the abstract concept of the rule of law needs to be realized by a particular institutional form. It raises the question whether technological change might instead demand amendments to the relationship between the concept(s) and the practice of the rule of law. For pre-existing normative concepts and their practical, institutional correlates may no longer hold under conditions of technological change. At the very least, then, specification of institutional forms of the rule of law under such conditions raises challenges not just as a practical matter but also in terms of legal theory.

Recommended.

Download of the Week

The Download of the Week is “Privacy Harms” by Danielle Keats Citron (University of Virginia School of Law) and Daniel J. Solove (George Washington University Law School). Here is the abstract:

Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress’s judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations.

The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm.

Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations do result in negative consequences, the effects are often small – frustration, aggravation, and inconvenience – and dispersed among a large number of people. When these minor harms are done at a vast scale by a large number of actors, they aggregate into more significant harms to people and society. But these harms do not fit well with existing judicial understandings of harm.

This article makes two central contributions. The first is the construction of a road map for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various types, which to date have been recognized by courts in inconsistent ways. We set forth a typology of privacy harms that elucidates why certain types of privacy harms should be recognized as cognizable. The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of law enforcement goals and remedies. For example, existing methods of litigating privacy cases, such as class actions, often enrich lawyers but fail to achieve meaningful deterrence. Because the personal data of tens of millions of people could be involved, even small actual damages could put companies out of business without providing much of value to each individual. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

Cyphert on The First Step Act and Algorithmic Prediction of Risk

Amy Cyphert (WVU College of Law) has posted “Reprogramming Recidivism: The First Step Act and Algorithmic Prediction of Risk” (Seton Hall Law Review, Vol. 51, 2020) on SSRN. Here is the abstract:

The First Step Act, a seemingly miraculous bipartisan criminal justice reform bill, was signed into law in late 2018. The Act directed the Attorney General to develop a risk and needs assessment tool that would effectively determine who would be eligible for early release based on an algorithmic prediction of recidivism. The resulting tool—PATTERN—was released in the summer of 2019 and quickly updated in January of 2020. It was immediately put to use in an unexpected manner, helping to determine who was eligible for early release during the COVID-19 pandemic. It is now the latest in a growing list of algorithmic recidivism prediction tools, tools that first came to mainstream notice with critical reporting about the COMPAS sentencing algorithm.

This Article evaluates PATTERN, both in its development as well as its still-evolving implementation. In some ways, the PATTERN algorithm represents tentative steps in the right direction on issues like transparency, public input, and use of dynamic factors. But PATTERN, like many algorithmic decision-making tools, will have a disproportionate impact on Black inmates; it provides fewer opportunities for inmates to reduce their risk score than it claims and is still shrouded in some secrecy due to the government’s decision to dismiss repeated calls to release more information about it. Perhaps most perplexing, it is unclear whether the tool actually advances accuracy with its predictions. This Article concludes that PATTERN is a decent first step, but it still has a long way to go before it is truly reformative.