Karni Chagal-Feferkorn (University of Ottawa Common Law Section) and Niva Elkin-Koren (Tel-Aviv University – Faculty of Law) have posted “LEX AI: Revisiting Private Ordering by Design” (Berkeley Technology Law Journal, Vol. 36) on SSRN. Here is the abstract:
In his seminal 1997 paper, Professor Joel R. Reidenberg articulated a novel governance strategy known as “Lex Informatica.” Under the principles of Lex Informatica, norms are no longer shaped by leaders, legislators, or judges, but rather by technological capabilities and design choices that grant users the flexibility to shape their own online experience based on their preferences. A quarter century later, a “second generation” of online governance systems has emerged, making use of artificial intelligence: “Lex AI.”
The literature on governance by AI often focuses on governance of AI, seeking to render AI decision-making more compatible with principles of fairness, due process, and accountability. Scholars have also examined who governs behavior through AI. Missing from these discussions is an inquiry into how norms are generated and enforced through the proliferation of AI. Ultimately, in order to govern AI and fully understand its social implications, we must first ascertain what is lost in translation as we shift to AI for deciding legal matters.
This paper explores how Lex AI governs and the implications of shifting from governance by a set of legal norms to the governance of human behavior and social relations by data-driven algorithms.
We argue that Lex AI is a sui generis type of governance—one which deserves scrutiny by regulators and policymakers. Lex AI bypasses autonomous choice, as it is often based on personalization conducted for the user rather than by the user. As such, it does not neatly fit the definition of private ordering—the setting of social norms by the parties involved in the regulated activity. When viewed as a distinct type of collective action mediated by algorithms, Lex AI may enable efficient collection of granular information on the preferences, needs, and interests of members of society, but it also raises new types of challenges. Path dependency, coupled with the reduced opportunity for users to signal their true preferences or to take part in the deliberation of the applicable norms, may render Lex AI a less efficient and less legitimate form of governance.
Shaping Lex AI to enhance social welfare may require a fresh way of thinking about these challenges and the public interventions that might address them.