Porat on Behavior-Based Price Discrimination and Consumer Protection in the Age of Algorithms

Haggai Porat (Harvard Law School; Tel Aviv University School of Economics) has posted “Behavior-Based Price Discrimination and Consumer Protection in the Age of Algorithms” on SSRN. Here is the abstract:

The legal literature on price discrimination focuses primarily on consumers’ immutable features, such as when higher interest rates are offered to Black borrowers and higher prices to women at car dealerships. This paper examines a different type of discriminatory pricing practice: behavior-based pricing (BBP), where prices are set based on consumers’ behavior, most prominently their prior purchasing decisions. The increased use of artificial intelligence and machine learning algorithms to set prices has facilitated the growing penetration of BBP in various markets. Unlike race-based and sex-based discrimination, with BBP, consumers can strategically adjust their behavior to affect the prices they will be offered in the future. Sellers, in turn, can adjust prices in early periods to influence consumers’ purchasing decisions so as to increase the informational value of these decisions and thereby maximize profits. This paper analyzes possible legal responses to BBP and arrives at three surprising policy implications: First, when non-BBP discrimination is efficient but has potentially problematic distributional implications, BBP can either increase or decrease efficiency. Second, even if BBP is desirable, mandating its disclosure may reduce overall welfare even though this would reduce informational asymmetry in the market. Third, a right to be forgotten (a right to erasure) may be desirable even though it increases informational asymmetry.
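The strategic interaction the abstract describes can be made concrete with a stylized two-period sketch. This is not the paper’s formal model; all numbers and the surplus functions are hypothetical illustrations of the basic mechanism: a period-1 purchase reveals a high valuation, the seller exploits that signal in period 2, and a forward-looking buyer may therefore refuse an otherwise acceptable early offer.

```python
# Stylized two-period behavior-based pricing (BBP) illustration.
# Hypothetical numbers and functions -- not the paper's model.

def myopic_surplus(valuation, p1, p2_if_buy, p2_if_wait):
    """Total surplus for a buyer who ignores how her period-1
    purchase decision changes the price she is offered later."""
    if valuation >= p1:
        return (valuation - p1) + max(valuation - p2_if_buy, 0)
    return max(valuation - p2_if_wait, 0)

def strategic_surplus(valuation, p1, p2_if_buy, p2_if_wait):
    """Surplus for a buyer who anticipates that buying in period 1
    triggers a higher personalized price in period 2."""
    buy = (valuation - p1) + max(valuation - p2_if_buy, 0)
    wait = max(valuation - p2_if_wait, 0)
    return max(buy, wait)

# Valuation 10; period-1 price 8; the seller charges 10 in period 2
# to identified buyers but discounts to 6 for non-buyers.
v, p1, p2_buy, p2_wait = 10, 8, 10, 6
print(myopic_surplus(v, p1, p2_buy, p2_wait))     # buys early: (10-8) + 0 = 2
print(strategic_surplus(v, p1, p2_buy, p2_wait))  # waits: 10-6 = 4
```

Under these illustrative numbers, the strategic buyer forgoes the period-1 purchase precisely to avoid being identified, which is the behavior sellers must price against.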

Gunkel on Should Robots Have Standing

David J. Gunkel (Northern Illinois University) has posted “Should Robots Have Standing? From Robot Rights to Robot Rites” (Frontiers in Artificial Intelligence and Applications, IOS Press, forthcoming) on SSRN. Here is the abstract:

“Robot” designates something that does not quite fit the standard way of organizing beings into the mutually exclusive categories of “person” or “thing.” The figure of the robot interrupts this fundamental organizing schema, resisting efforts at both reification and personification. Consequently, what is seen reflected in the face or faceplate of the robot is the fact that the existing moral and legal ontology—the way that we make sense of and organize our world—is already broken or at least straining against its own limitations. What is needed in response to this problem is a significantly reformulated moral and legal ontology that can scale to the unique challenges of the 21st century and beyond.

Ranchordas on Smart Cities, Artificial Intelligence and Public Law

Sofia Ranchordas (U Groningen Law; LUISS) has posted “Smart Cities, Artificial Intelligence and Public Law: An Unchained Melody” on SSRN. Here is the abstract:

Governments and citizens are by definition in an unequal relationship. Public law has sought to address this power asymmetry with different legal principles and instruments. However, in the context of smart cities, the inequality between public authorities and citizens is growing, particularly for vulnerable citizens. This paper explains this phenomenon in light of the dissonance between the rationale, principles and instruments of public law and the practical implementation of AI in smart cities. It argues, first, that public law overlooks that smart cities are complex phenomena that pose novel and different legal problems. Smart cities are strategies, products, narratives, and processes that reshape the relationship between governments and citizens, often excluding citizens who are not deemed ‘smart’. Second, smart urban solutions tend to be primarily predictive, as they seek to anticipate, for example, crime, traffic congestion or pollution. By contrast, public law principles and tools remain reactive or responsive, failing to regulate potential harms caused by predictive systems. In addition, public law remains focused on the need to constrain human discretion and individual flaws rather than systemic errors and datafication systems that place citizens in novel categories. This paper discusses the dissonance between public law and smart urban solutions, presenting the smart city as a corporate narrative which, in its attempts to optimise citizenship, inevitably excludes thousands of citizens.

Basu et al. on A Programming Language for Future Interests

Shrutarshi Basu (Harvard), Nate Foster (Cornell), James Grimmelmann (Cornell), Shan Parikh (Oracle), and Ryan Richardson (Google) have posted “A Programming Language for Future Interests” (24 Yale Journal of Law and Technology 75 (2022)) on SSRN. Here is the abstract:

Learning the system of estates in land and future interests can seem like learning a new language. Scholars and students must master unfamiliar phrases, razor-sharp rules, and arbitrarily complicated structures. Property law is this way not because future interests are a foreign language, but because they are a programming language.

This Article presents Orlando, a programming language for expressing conveyances of future interests, and Littleton, a freely available online interpreter (at https://conveyanc.es) that can diagram the interests created by conveyances and model the consequences of future events. Doing so has three payoffs. First, formalizing future interests helps students and teachers of the subject by allowing them to visualize and experiment with conveyances. Second, the process of formalization is itself deeply illuminating about property doctrine and theory. And third, the computer-science subfield of programming language theory has untapped potential for legal scholarship: a programming-language approach takes advantage of the linguistic parallels between legal texts and computer programs.
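The core idea of treating conveyances as programs can be sketched in a few lines. The representation below is a hypothetical illustration in Python, not Orlando’s actual syntax or Littleton’s implementation (the real interpreter is at https://conveyanc.es): a conveyance becomes a structured term, and future events are functions that transform the interests it creates.

```python
# Illustrative sketch of the conveyance-as-program idea.
# Type names and event handling are hypothetical, not Orlando/Littleton's.

from dataclasses import dataclass

@dataclass
class LifeEstate:
    holder: str

@dataclass
class Remainder:
    holder: str

def interests(conveyance):
    """List the interests created by a simple conveyance of the form
    'to A for life, then to B'."""
    life, rem = conveyance
    return [f"{life.holder}: life estate",
            f"{rem.holder}: vested remainder in fee simple"]

def after_death(conveyance, decedent):
    """Model the consequence of a future event: on the life tenant's
    death, possession vests in the remainderman."""
    life, rem = conveyance
    if decedent == life.holder:
        return [f"{rem.holder}: fee simple absolute"]
    return interests(conveyance)

# "O conveys to A for life, then to B."
grant = (LifeEstate("A"), Remainder("B"))
print(interests(grant))
print(after_death(grant, "A"))
```

Even this toy version shows the payoff the authors describe: once a conveyance is a data structure, diagramming interests and simulating events are just function calls.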

Katyal on Five Principles of Policy Reform for the Technological Age

Sonia Katyal (U California, Berkeley – School of Law) has posted “Lex Reformatica: Five Principles of Policy Reform for the Technological Age” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Almost twenty-five years ago, beloved former colleague Joel Reidenberg penned an article arguing that law and government regulation were not the only sources of authority and rulemaking in the Information Society. Rather, he argued that technology itself, particularly system design choices like network design and system configurations, could also impose similar regulatory norms on communities. These rules and systems, he argued, comprised a Lex Informatica—a term that Reidenberg coined in historical reference to “Lex Mercatoria,” a system of international, merchant-driven norms in the Middle Ages that emerged independent of localized sovereign control.

Today, however, we confront a different phenomenon, one that requires us to draw upon the wisdom of Reidenberg’s landmark work in considering the repercussions of the previous era. As much as Lex Informatica provided us with a descriptive lens to analyze the birth of the internet, we are now confronted with the aftereffects of decades of muted, if not absent, regulation. When technological social norms are allowed to develop outside of clear legal restraints, who wins? Who loses? In this new era, we face a new set of challenges—challenges that force us to confront a critical need for infrastructural reform that focuses on the interplay between public and private forms of regulation (and self-regulation), its costs, and its benefits.

Instead of demonstrating the richness, complexity, and promise of yesterday’s internet age, today’s events show us precisely what can happen in an age of information libertarianism, underscoring the need for a new approach to information regulation. The articles in this Issue are taken from two separate symposiums—one on Lex Informatica and another on race and technology law. At present, a conversation between them could not be more necessary. Taken together, these papers showcase what I refer to as the Lex Reformatica of today’s digital age. This collection of papers demonstrates the need for scholars, lawyers, and legislators to return to Reidenberg’s foundational work and to update its trajectory towards a new era focused on the design of a new approach to reform.

Coombs & Abraha on Governance of AI and Gender

Elizabeth Coombs (U Malta) and Halefom H. Abraha (Oxford) have posted “Governance of AI and Gender: Building on International Human Rights Law and Relevant Regional Frameworks” (in ‘Handbook on the Politics and Governance of Big Data and Artificial Intelligence,’ Zwitter & Gstrein (eds.) (Elgar, forthcoming)) on SSRN. Here is the abstract:

The increasing uptake of artificial intelligence (AI) systems across industries and social activities raises questions as to who benefits from these systems and who does not, and whether existing regulatory frameworks are adequate to address AI-driven harms. Policy-makers around the world are grappling with the challenges of addressing the perils of AI without undermining its promises. Emerging regulatory approaches range from sectoral regulations and omnibus frameworks to abstract principles. This chapter examines the place of gender in current and emerging AI governance frameworks. It assesses the effectiveness of current mechanisms to address the gender implications of AI technologies by reviewing significant regional and national frameworks, with a particular focus on whether they are ‘fit for purpose’ in addressing AI-driven gender harms.

The chapter finds that existing legal frameworks, including data protection, anti-discrimination, antitrust, consumer, and equality law, have significant gaps as they apply to AI systems generally and AI-driven gender disparities in particular. It also argues that the proliferation of self-imposed standards and abstract ethical principles without enforcement mechanisms falls short of addressing the complex regulatory challenges of AI-driven gender harms. The chapter then makes the case for bringing the issue of gender to the centre of AI regulation discourse and recommends that AI regulation frameworks be based upon international human rights instruments, with gender as a mainstreamed element, as these frameworks are more representative, enforceable and concerned with protecting the vulnerable.

Narayanan & Tan on Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI

Devesh Narayanan (National University of Singapore) and Zhi Ming Tan (Cornell) have posted “Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI” on SSRN. Here is the abstract:

It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if we were to account for the rich and diverse moral reasons that ground the call for explainable AI, and fully consider what it means to “trust” AI in a full-blooded sense of the term, we would uncover a deep and persistent tension between the two principles. For explainable AI to usefully serve the pursuit of normatively desirable goals, decision-makers must carefully monitor and critically reflect on the content of an AI-DST’s explanation. This entails a deliberative attitude. Conversely, the call for full-blooded trust in AI-DSTs implies the disposition to put questions about their reliability out of mind. This entails an unquestioning attitude. As such, the joint pursuit of explainable and trusted AI calls on decision-makers to simultaneously adopt incompatible attitudes towards their AI-DST, which leads to an intractable implementation gap. We analyze this gap and explore its broader implications: suggesting that we may need alternate theoretical conceptualizations of what explainability and trust entail, and/or alternate decision-making arrangements that separate the requirements for trust and deliberation to different parties.

Coglianese & Hefter on From Negative to Positive Algorithm Rights

Cary Coglianese (U Penn Law) and Kat Hefter (same) have posted “From Negative to Positive Algorithm Rights” (Wm. & Mary Bill Rts. J., forthcoming) on SSRN. Here is the abstract:

Artificial intelligence, or “AI,” is raising alarm bells. Advocates and scholars propose policies to constrain or even prohibit certain AI uses by governmental entities. These efforts to establish a negative right to be free from AI stem from an understandable motivation to protect the public from arbitrary, biased, or unjust applications of algorithms. This movement to enshrine protective rights follows a familiar pattern of suspicion that has accompanied the introduction of other technologies into governmental processes. Sometimes this initial suspicion of a new technology later transforms into widespread acceptance and even a demand for its use. In this paper, we show how three now-accepted technologies—DNA analysis, breathalyzers, and radar speed detectors—traversed a path from initial resistance to a positive right that demands their use. We argue that current calls for a negative right to be free from digital algorithms may dissipate over time, with the public and the legal system eventually embracing, if not even demanding, the use of AI. Increased recognition that the human-based status quo itself leads to unacceptable errors and biases may contribute to this transformation. A negative rights approach, after all, may only hamper the development of technologies that could lead to improved governmental performance. If AI tools are allowed to mature and become more standardized, they may also be accompanied by greater reliance on qualified personnel, robust audits and assessments, and meaningful oversight. Such maturation in the use of AI tools may lead to demonstrable improvements over the status quo, which eventually might well justify assigning a positive right to their use in the performance of governmental tasks.

Schrepel on The Making of An Antitrust API: Proof of Concept

Thibault Schrepel (VU Amsterdam; Stanford Codex Center; Sorbonne; Sciences Po) has posted “The Making of An Antitrust API: Proof of Concept” (Stanford University CodeX Research Paper Series 2022) on SSRN. Here is the abstract:

Computational antitrust promises not only to help antitrust agencies preside over increasingly complex and dynamic markets, but also to provide companies with the tools to assess and enforce compliance with antitrust laws. While research in the space has been primarily dedicated to supporting antitrust agencies, this article fills the gap by offering an innovative solution for companies. Specifically, this article serves as a proof of concept whose aim is to guide antitrust agencies in creating a decision-trees-based antitrust compliance API intended for market players. It includes an open-access prototype that automates compliance with Article 102 TFEU and discusses its limitations and the lessons to be learned.
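To make the decision-trees idea concrete, here is a minimal sketch of what one branch of such a compliance check might look like. The questions, threshold, and list of flagged practices are illustrative placeholders only; they are not the article’s actual prototype, nor a statement of Article 102 TFEU doctrine.

```python
# Hedged sketch of a decision-tree compliance screen.
# Threshold and flagged practices are illustrative, not legal advice.

def article_102_screen(market_share: float, practice: str) -> str:
    """Walk a tiny decision tree: a dominance screen first,
    then a conduct screen for a dominant firm."""
    if market_share < 0.40:  # illustrative dominance threshold
        return "dominance unlikely; screen passed"
    flagged = {"predatory pricing", "tying", "refusal to deal"}
    if practice in flagged:
        return f"dominant firm; '{practice}' flagged for legal review"
    return "dominant firm; practice not on the flagged list"

print(article_102_screen(0.25, "tying"))
print(article_102_screen(0.55, "tying"))
```

An API in this style would expose such trees behind an endpoint so that a company’s systems could query compliance questions programmatically, which is the design space the proof of concept explores.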

Goldman on Zauderer and Compelled Editorial Transparency

Eric Goldman (Santa Clara University – School of Law) has posted “Zauderer and Compelled Editorial Transparency” (Iowa Law Review Online, Forthcoming) on SSRN. Here is the abstract:

A 1985 Supreme Court opinion, Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, holds the key to the Internet’s future. Zauderer provides a relaxed level of scrutiny for Constitutional challenges to some compelled commercial speech disclosure laws. Regulators throughout the country are adopting “transparency” laws to force Internet services to disclose information about their editorial operations or decisions when they publish third-party content, based on their assumption that Zauderer permits such compelled disclosures. This article explains why these transparency laws do not qualify for Zauderer’s relaxed scrutiny. Instead, given the inevitably censorial consequences of enacting and enforcing compelled editorial transparency laws, they should usually trigger strict scrutiny—just like outright speech restrictions do.