Katyal on Five Principles of Policy Reform for the Technological Age

Sonia Katyal (U California, Berkeley – School of Law) has posted “Lex Reformatica: Five Principles of Policy Reform for the Technological Age” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Almost twenty-five years ago, beloved former colleague Joel Reidenberg penned an article arguing that law and government regulation were not the only sources of authority and rulemaking in the Information Society. Rather, he argued that technology itself, particularly system design choices like network design and system configurations, can also impose similar regulatory norms on communities. These rules and systems comprised a Lex Informatica—a term that Reidenberg coined in historical reference to “Lex Mercatoria,” a system of international, merchant-driven norms in the Middle Ages that emerged independent of localized sovereign control.

Today, however, we confront a different phenomenon, one that requires us to draw upon the wisdom of Reidenberg’s landmark work in considering the repercussions of the previous era. As much as Lex Informatica provided us with a descriptive lens to analyze the birth of the internet, we are now confronted with the aftereffects of decades of muted, if not absent, regulation. When technological social norms are allowed to develop outside of clear legal restraints, who wins? Who loses? In this new era, we face a new set of challenges—challenges that force us to confront a critical need for infrastructural reform that focuses on the interplay between public and private forms of regulation (and self-regulation), its costs, and its benefits.

Instead of demonstrating the richness, complexity, and promise of yesterday’s internet age, today’s events show us precisely what can happen in an age of information libertarianism, underscoring the need for a new approach to information regulation. The articles in this Issue are taken from two separate symposia—one on Lex Informatica and another on race and technology law. At present, a conversation between them could not be more necessary. Taken together, these papers showcase what I refer to as the Lex Reformatica of today’s digital age. This collection of papers demonstrates the need for scholars, lawyers, and legislators to return to Reidenberg’s foundational work and to update its trajectory towards a new era that focuses on the design of a new approach to reform.

Coombs & Abraha on Governance of AI and Gender

Elizabeth Coombs (U Malta) and Halefom H. Abraha (Oxford) have posted “Governance of AI and Gender: Building on International Human Rights Law and Relevant Regional Frameworks” (in Zwitter & Gstrein (eds.), Handbook on the Politics and Governance of Big Data and Artificial Intelligence (Elgar, forthcoming)) on SSRN. Here is the abstract:

The increasing uptake of artificial intelligence (AI) systems across industries and social activities raises questions as to who benefits from these systems and who does not, and whether existing regulatory frameworks are adequate to address AI-driven harms. Policy-makers around the world are grappling with the challenges of addressing the perils of AI without undermining its promises. Emerging regulatory approaches range from sectoral regulations and omnibus frameworks to abstract principles. This chapter examines the place of gender in current and emerging AI governance frameworks, assessing the effectiveness of existing mechanisms to address the gender implications of AI technologies by reviewing significant regional and national frameworks, with a particular focus on whether they are ‘fit for purpose’ in addressing AI-driven gender harms.

The chapter finds that existing legal frameworks, including data protection, anti-discrimination, antitrust, consumer, and equality law, have significant gaps as they apply to AI systems generally and to AI-driven gender disparities in particular. It also argues that the proliferation of self-imposed standards and abstract ethical principles without enforcement mechanisms falls short of addressing the complex regulatory challenges of AI-driven gender harms. The chapter then makes the case for bringing gender to the centre of the AI regulation discourse and recommends that AI regulation frameworks be based upon international human rights instruments, with gender as a mainstreamed element, as these frameworks are more representative, enforceable, and concerned with protecting the vulnerable.

Narayanan & Tan on Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI

Devesh Narayanan (National University of Singapore) and Zhi Ming Tan (Cornell) have posted “Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI” on SSRN. Here is the abstract:

It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if we were to account for the rich and diverse moral reasons that ground the call for explainable AI, and fully consider what it means to “trust” AI in a full-blooded sense of the term, we would uncover a deep and persistent tension between the two principles. For explainable AI to usefully serve the pursuit of normatively desirable goals, decision-makers must carefully monitor and critically reflect on the content of an AI-DST’s explanations. This entails a deliberative attitude. Conversely, the call for full-blooded trust in AI-DSTs implies a disposition to put questions about their reliability out of mind. This entails an unquestioning attitude. The joint pursuit of explainable and trusted AI thus calls on decision-makers to simultaneously adopt incompatible attitudes towards their AI-DSTs, which leads to an intractable implementation gap. We analyze this gap and explore its broader implications, suggesting that we may need alternate theoretical conceptualizations of what explainability and trust entail, and/or alternate decision-making arrangements that allocate the requirements for trust and deliberation to different parties.