Basu et al. on A Programming Language for Future Interests

Shrutarshi Basu (Harvard), Nate Foster (Cornell), James Grimmelmann (Cornell), Shan Parikh (Oracle), and Ryan Richardson (Google) have posted “A Programming Language for Future Interests” (24 Yale Journal of Law and Technology 75 (2022)) on SSRN. Here is the abstract:

Learning the system of estates in land and future interests can seem like learning a new language. Scholars and students must master unfamiliar phrases, razor-sharp rules, and arbitrarily complicated structures. Property law is this way not because future interests are a foreign language, but because they are a programming language.

This Article presents Orlando, a programming language for expressing conveyances of future interests, and Littleton, a freely available online interpreter (at https://conveyanc.es) that can diagram the interests created by conveyances and model the consequences of future events. Doing so has three payoffs. First, formalizing future interests helps students and teachers of the subject by allowing them to visualize and experiment with conveyances. Second, the process of formalization is itself deeply illuminating about property doctrine and theory. And third, the computer-science subfield of programming language theory has untapped potential for legal scholarship: a programming-language approach takes advantage of the linguistic parallels between legal texts and computer programs.
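For readers curious what such a formalization looks like in miniature, here is a rough Python sketch. It is not Orlando's actual grammar or Littleton's implementation (see the paper and https://conveyanc.es for those), and the `Interest` type and `parse_conveyance` function are invented for illustration; the sketch simply shows the core move of treating a conveyance as a parseable expression that yields structured interests rather than free-form prose:

```python
# A simplified, hypothetical model of parsing a conveyance -- not Orlando's
# actual grammar. It handles only conveyances of the form
# "To A for life, then to B" and classifies each resulting interest.

from dataclasses import dataclass

@dataclass
class Interest:
    holder: str   # who takes the interest
    estate: str   # "life estate" or "fee simple"
    kind: str     # "possessory" or "vested remainder"

def parse_conveyance(text: str) -> list[Interest]:
    """Split a conveyance into clauses and classify each interest."""
    clauses = [c.strip() for c in
               text.lower().removeprefix("to ").split(", then to ")]
    interests = []
    for i, clause in enumerate(clauses):
        if clause.endswith(" for life"):
            holder, estate = clause.removesuffix(" for life"), "life estate"
        else:
            holder, estate = clause, "fee simple"
        kind = "possessory" if i == 0 else "vested remainder"
        interests.append(Interest(holder.strip().title(), estate, kind))
    return interests

for interest in parse_conveyance("To A for life, then to B"):
    print(interest)
# Interest(holder='A', estate='life estate', kind='possessory')
# Interest(holder='B', estate='fee simple', kind='vested remainder')
```

Littleton goes much further, of course, diagramming the full state of title and modeling what happens as events like deaths or failed conditions occur; the sketch above only hints at why a formal grammar makes that possible.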

Katyal on Five Principles of Policy Reform for the Technological Age

Sonia Katyal (U California, Berkeley – School of Law) has posted “Lex Reformatica: Five Principles of Policy Reform for the Technological Age” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Almost twenty-five years ago, beloved former colleague Joel Reidenberg penned an article arguing that law and government regulation were not the only sources of authority and rulemaking in the Information Society. Rather, he argued that technology itself, particularly system design choices like network design and system configurations, can also impose similar regulatory norms on communities. These rules and systems, he argued, comprised a Lex Informatica—a term that Reidenberg coined in historical reference to “Lex Mercatoria,” a system of international, merchant-driven norms in the Middle Ages that emerged independent of localized sovereign control.

Today, however, we confront a different phenomenon, one that requires us to draw upon the wisdom of Reidenberg’s landmark work in considering the repercussions of the previous era. As much as Lex Informatica provided us with a descriptive lens to analyze the birth of the internet, we are now confronted with the aftereffects of decades of muted, if not absent, regulation. When technological social norms are allowed to develop outside of clear legal restraints, who wins? Who loses? In this new era, we face a new set of challenges—challenges that force us to confront a critical need for infrastructural reform that focuses on the interplay between public and private forms of regulation (and self-regulation), its costs, and its benefits.

Instead of demonstrating the richness, complexity, and promise of yesterday’s internet age, today’s events show us precisely what can happen in an age of information libertarianism, underscoring the need for a new approach to information regulation. The articles in this Issue are taken from two separate symposia—one on Lex Informatica and another on race and technology law. At present, a conversation between them could not be more necessary. Taken together, these papers showcase what I refer to as the Lex Reformatica of today’s digital age. This collection of papers demonstrates the need for scholars, lawyers, and legislators to return to Reidenberg’s foundational work and to update its trajectory for a new era, one focused on designing a new approach to reform.

Coombs & Abraha on Governance of AI and Gender

Elizabeth Coombs (U Malta) and Halefom H. Abraha (Oxford) have posted “Governance of AI and Gender: Building on International Human Rights Law and Relevant Regional Frameworks” (in ‘Handbook on the Politics and Governance of Big Data and Artificial Intelligence,’ Zwitter & Gstrein (eds.), Elgar, forthcoming) on SSRN. Here is the abstract:

The increasing uptake of artificial intelligence (AI) systems across industries and social activities raises questions as to who benefits from these systems and who does not, and whether existing regulatory frameworks are adequate to address AI-driven harms. Policy-makers around the world are grappling with the challenges of addressing the perils of AI without undermining its promises. Emerging regulatory approaches range from sectoral regulations and omnibus frameworks to abstract principles. This chapter examines the place of gender in the current and emerging AI governance frameworks. It examines the effectiveness of current mechanisms to address the gender implications of AI technologies by reviewing significant regional and national frameworks with a particular focus on whether they are ‘fit for purpose’ in addressing AI-driven gender harms.

The chapter finds that existing legal frameworks, including data protection, anti-discrimination, antitrust, consumer, and equality law, have significant gaps as they apply to AI systems generally and AI-driven gender disparities in particular. It also argues that the proliferation of self-imposed standards and abstract ethical principles without enforcement mechanisms falls short of addressing the complex regulatory challenges of AI-driven gender harms. The chapter then makes the case for bringing the issue of gender to the centre of AI regulation discourse and recommends that AI regulation frameworks be based upon international human rights instruments, with gender as a mainstreamed element, as these frameworks are more representative, enforceable, and concerned with protecting the vulnerable.

Narayanan & Tan on Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI

Devesh Narayanan (National University of Singapore) and Zhi Ming Tan (Cornell) have posted “Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI” on SSRN. Here is the abstract:

It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted and, in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if we were to account for the rich and diverse moral reasons that ground the call for explainable AI, and fully consider what it means to “trust” AI in a full-blooded sense of the term, we would uncover a deep and persistent tension between the two principles. For explainable AI to usefully serve the pursuit of normatively desirable goals, decision-makers must carefully monitor and critically reflect on the content of an AI-DST’s explanation. This entails a deliberative attitude. Conversely, the call for full-blooded trust in AI-DSTs implies a disposition to put questions about their reliability out of mind. This entails an unquestioning attitude. As such, the joint pursuit of explainable and trusted AI calls on decision-makers to simultaneously adopt incompatible attitudes towards their AI-DST, which leads to an intractable implementation gap. We analyze this gap and explore its broader implications, suggesting that we may need alternate theoretical conceptualizations of what explainability and trust entail, and/or alternate decision-making arrangements that allocate the requirements for trust and deliberation to different parties.

Coglianese & Hefter on From Negative to Positive Algorithm Rights

Cary Coglianese (U Penn Law) and Kat Hefter (same) have posted “From Negative to Positive Algorithm Rights” (Wm. & Mary Bill Rts. J., forthcoming) on SSRN. Here is the abstract:

Artificial intelligence, or “AI,” is raising alarm bells. Advocates and scholars propose policies to constrain or even prohibit certain AI uses by governmental entities. These efforts to establish a negative right to be free from AI stem from an understandable motivation to protect the public from arbitrary, biased, or unjust applications of algorithms. This movement to enshrine protective rights follows a familiar pattern of suspicion that has accompanied the introduction of other technologies into governmental processes. Sometimes this initial suspicion of a new technology later transforms into widespread acceptance and even a demand for its use. In this paper, we show how three now-accepted technologies—DNA analysis, breathalyzers, and radar speed detectors—traversed a path from initial resistance to a positive right that demands their use. We argue that current calls for a negative right to be free from digital algorithms may dissipate over time, with the public and the legal system eventually embracing, if not demanding, the use of AI. Increased recognition that the human-based status quo itself leads to unacceptable errors and biases may contribute to this transformation. A negative rights approach, after all, may only hamper the development of technologies that could lead to improved governmental performance. If AI tools are allowed to mature and become more standardized, they may also be accompanied by greater reliance on qualified personnel, robust audits and assessments, and meaningful oversight. Such maturation in the use of AI tools may lead to demonstrable improvements over the status quo, which eventually might well justify assigning a positive right to their use in the performance of governmental tasks.

Schrepel on The Making of An Antitrust API: Proof of Concept

Thibault Schrepel (VU Amsterdam; Stanford Codex Center; Sorbonne; Sciences Po) has posted “The Making of An Antitrust API: Proof of Concept” (Stanford University CodeX Research Paper Series 2022) on SSRN. Here is the abstract:

Computational antitrust promises not only to help antitrust agencies preside over increasingly complex and dynamic markets, but also to provide companies with the tools to assess and enforce compliance with antitrust laws. While research in the space has been primarily dedicated to supporting antitrust agencies, this article fills the gap by offering an innovative solution for companies. Specifically, it serves as a proof of concept intended to guide antitrust agencies in creating a decision-tree-based antitrust compliance API for market players. It includes an open-access prototype that automates compliance with Article 102 TFEU and discusses its limitations and the lessons to be learned.
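The decision-tree framing is concrete enough to sketch. The following Python fragment is hypothetical (its thresholds, categories, and function names are invented for illustration, not taken from Schrepel's prototype), but it shows the general shape of a compliance screen that branches first on dominance and then on the nature of the conduct:

```python
# A hypothetical sketch of a decision-tree compliance screen in the spirit
# of an Article 102 TFEU API. Thresholds and categories are illustrative
# placeholders, not Schrepel's prototype and not legal advice.

from dataclasses import dataclass

@dataclass
class Assessment:
    risk: str     # "low", "medium", or "high"
    reason: str

# Conduct categories that Article 102 case law treats as potentially abusive.
RISKY_CONDUCT = {"predatory_pricing", "tying", "refusal_to_deal",
                 "margin_squeeze", "exclusive_dealing"}

def article_102_screen(market_share: float,
                       high_entry_barriers: bool,
                       conduct: str) -> Assessment:
    """First branch: dominance. Second branch: nature of the conduct."""
    # Market share is only a rough proxy for dominance; very high shares
    # raise a presumption, lower shares may still qualify given market
    # structure, so the entry-barrier flag nudges the threshold down.
    dominant = (market_share >= 0.5
                or (market_share >= 0.4 and high_entry_barriers))
    if not dominant:
        return Assessment("low", "dominance unlikely on these rough proxies")
    if conduct in RISKY_CONDUCT:
        return Assessment("high", f"dominant firm engaging in {conduct}")
    return Assessment("medium", "dominant position; conduct needs closer review")

print(article_102_screen(market_share=0.55, high_entry_barriers=True,
                         conduct="tying"))
# Assessment(risk='high', reason='dominant firm engaging in tying')
```

A production system would encode far more branches and tie each answer back to the underlying case law; the point is only that such questions can be serialized into an API that returns a structured assessment rather than a memo.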

Goldman on Zauderer and Compelled Editorial Transparency

Eric Goldman (Santa Clara University – School of Law) has posted “Zauderer and Compelled Editorial Transparency” (Iowa Law Review Online, Forthcoming) on SSRN. Here is the abstract:

A 1985 Supreme Court opinion, Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, holds the key to the Internet’s future. Zauderer provides a relaxed level of scrutiny for Constitutional challenges to some compelled commercial speech disclosure laws. Regulators throughout the country are adopting “transparency” laws to force Internet services to disclose information about their editorial operations or decisions when they publish third-party content, based on their assumption that Zauderer permits such compelled disclosures. This article explains why these transparency laws do not qualify for Zauderer’s relaxed scrutiny. Instead, given the inevitably censorial consequences of enacting and enforcing compelled editorial transparency laws, they should usually trigger strict scrutiny—just like outright speech restrictions do.

Recommended.

Grimmelmann & Mulligan on Data Property

James Grimmelmann (Cornell Law School; Cornell Tech) and Christina Mulligan (Brooklyn Law School) have posted “Data Property” (American University Law Review, Forthcoming) on SSRN. Here is the abstract:

In this, the Information Age, people and businesses depend on data. From your family photos to Google’s search index, data has become one of society’s most important resources. But there is a gaping hole in the law’s treatment of data. If someone destroys your car, that is the tort of conversion and the law gives a remedy. But if someone deletes your data, it is far from clear that they have done you a legally actionable wrong. If you are lucky, and the data was stored on your own computer, you may be able to sue them for trespass to a tangible chattel. But property law does not recognize the intangible data itself as a thing that can be impaired or converted, even though it is the data that you care about, and not the medium on which it is stored. It’s time to fix that.

This Article proposes, explains, and defends a system of property rights in data. On our theory, a person has possession of data when they control at least one copy of the data. A person who interferes with that possession can be liable, just as they can be liable for interference with possession of real property and tangible personal property. This treatment of data as an intangible thing that is instantiated in tangible copies coheres with the law’s treatment of information protected by intellectual property law. But importantly, it does not constitute an expansive new intellectual property right of the sort that scholars have warned against. Instead, a regime of data property fits comfortably into existing personal-property law, restoring a balanced and even treatment of the different kinds of things that matter for people’s lives and livelihoods.

Schultz on The Right of Publicity: A New Framework for Regulating Facial Recognition

Jason Schultz (NYU Law) has posted “The Right of Publicity: A New Framework for Regulating Facial Recognition” (Brooklyn Law Review, forthcoming) on SSRN. Here is the abstract:

For over a century, the right of publicity (ROP) has protected individuals from unwanted commercial exploitation of their images and identities. Originating around the turn of the Twentieth Century in response to the newest image-appropriation technologies of the time, including portrait photography, mass-production packaging, and a ubiquitous printing press, the ROP has continued to evolve along with each new wave of technologies that enable companies to exploit people’s images and identities for commercial gain. Over time, the ROP has protected individuals from misappropriation in photographs, films, advertisements, action figures, baseball cards, animatronic robots, video game avatars, and even digital resurrection in film sequels. Critically, as new technologies gained capacity for mass appropriation, the ROP expanded to protect against these practices.

The newest example of such a technology is facial recognition (FR). Facial recognition systems derive their primary economic value from commercially exploiting massive facial image databases filled with millions of individual likenesses and identities, often obtained without sufficient consent. Such appropriations go beyond mere acquisition, playing critical roles in training FR algorithms, matching identities to new images, and displaying results to users. Without the capacity to appropriate and commercially exploit these images and identities, most FR systems would fail to function as commercial products.

In this article, I develop a novel theory of how ROP claims could apply to FR systems, detailing how the right’s history and development, in both statute and common law, demonstrate its power to impose liability on entities that conduct mass image and identity appropriation, especially through innovative visual technologies. This provides a robust framework for FR regulation while balancing issues of informed consent and various public interest concerns, such as compatibility with copyright law and First Amendment-protected news reporting.

Revolidis on International Jurisdiction and the Blockchain

Ioannis Revolidis (University of Malta, Centre for Distributed Ledger Technologies and Department of Media, Communications & Technology Law) has posted “On Arrogance and Drunkenness – A Primer on International Jurisdiction and the Blockchain” (Lex & Forum, 2 (2022)) on SSRN. Here is the abstract:

Blockchain applications are gradually approaching mainstream adoption. But with mainstream adoption come frictions and challenges, as larger digital communities are more complex and, therefore, more prone to developing disputes between transacting stakeholders. The problem of dispute resolution in blockchain transactions has mainly been discussed from the standpoint of blockchain-based alternative dispute resolution methods. A key narrative of this approach is that state courts should generally stay away from blockchain dispute resolution because the characteristics of the technology make them ill-suited to meet the challenge. This paper takes a slightly different approach. While it does not question the value of blockchain-based ADR, it submits that state courts still have a role to play in the adjudication of blockchain-related disputes. To explore the challenges that state courts might face when dealing with such disputes, it focuses on the use case of Non-Fungible Tokens (NFTs). After critically exploring the characteristics of blockchain technologies and the deployment of NFT business models, it looks into the Brussels Ia Regulation and investigates how far it can accommodate disputes that revolve around NFTs.