Fagan on Law’s Computational Paradox

Frank Fagan (South Texas College of Law Houston) has posted “Law’s Computational Paradox” (Virginia Journal of Law and Technology, forthcoming) on SSRN. Here is the abstract:

Artificial intelligence (AI) and machine learning will bring about many changes to how law is practiced, made, and enforced. However, machines cannot do everything that humans can do, and law must face the limitations of computational learning just as much as any other human endeavor. For predictive learning, these limitations are permanent and can be used to ascertain the future of law. The basic tasks of lawyering, such as brief writing, oral argument, and witness coaching, will become increasingly precise, but that precision will eventually plateau, and the essential character of lawyering will remain largely unchanged. Similarly, where machines can be used to clarify the application of law, they will simply limit judicial discretion, consistent with moves from standards to rules or from rules to personalized law.

In each of these scenarios—lawyering and case clarification—enhanced precision is made possible through systemic closure of the machine’s domain, and AI will ascend easily. In scenarios where law’s architecture is open, and systemic closure is not possible or worthwhile, machines will be frustrated by an inability to discern patterns, or by a powerlessness to comprehend the predictive power of previously discerned patterns in newly changed contexts. Lawmakers may add new variables to compensate and encourage attempts to model future environments, but open innovation and social change will undermine even a determined empiricism. In response to these limitations, lawmakers may attempt to actively impose closure of dynamic legal domains in an effort to enhance law’s precision. By limiting the admissibility of evidence, blacklisting variables, requiring specific thresholds for whitelisted variables, and pursuing other formalist strategies of closure, law can elevate its predictive precision for a given environment, but this elevation comes at the expense of openness and innovation. This is law’s computational paradox.

This Article introduces the paradox across machine learning applications in lawmaking, enforcement, rights allocation, and lawyering, and shows that innovation serves as a self-corrective to the excessive mechanization of law. Because innovation, change, and open legal domains are necessary ingredients for continual technological ascendance in law and elsewhere, fears of AI-based law as an existential threat to human-centered law are exaggerated. It should be emphasized, however, that there is ample room for quantification and counting in both closed and open settings; the products of innovation will always undergo measurement and machine learning algorithms will always require updating and refinement. This is the process of technological becoming. The goal for law is to never fully arrive.

The uncertainty of dynamic legal environments, even if diminishing with growing predictive power in law, forms the basis of an interpersonal constitutional authority. Understanding that some disruptions will always be unplanned prevents the construction of blind pathways for longer-term legal error, and relatedly, prevents empirical and technical rationales from overrunning a human-centered public square. A growing awareness of paradoxical error generated by precise, but closed, computational environments will generate a societal response that seeks to balance the benefits of precision and innovation. This balancing—what might be termed a “computational legal ethics”—implies that tomorrow’s lawyers, more so than their counterparts of the past, will be called upon to discern what should be considered versus ignored.

Sunstein on The Use of Algorithms in Society

Cass R. Sunstein (Harvard Law School) has posted “The Use of Algorithms in Society” on SSRN. Here is the abstract:

The judgments of human beings can be biased; they can also be noisy. Across a wide range of settings, use of algorithms is likely to improve accuracy, because algorithms will reduce both bias and noise. Indeed, algorithms can help identify the role of human biases; they might even identify biases that have not been named before. As compared to algorithms, for example, human judges, deciding whether to give bail to criminal defendants, show Current Offense Bias and Mugshot Bias; as compared to algorithms, human doctors, deciding whether to test people for heart attacks, show Current Symptom Bias and Demographic Bias. These are cases in which large data sets are able to associate certain inputs with specific outcomes. But in important cases, algorithms struggle to make accurate predictions, not because they are algorithms but because they do not have enough data to answer the question at hand. Those cases often, though not always, involve complex systems.

(1) Algorithms might not be able to foresee the effects of social interactions, which can depend on a large number of random or serendipitous factors, and which can lead in unanticipated and unpredictable directions.

(2) Algorithms might not be able to foresee the effects of context, timing, or mood.

(3) Algorithms might not be able to identify people’s preferences, which might be concealed or falsified, and which might be revealed at an unexpected time.

(4) Algorithms might not be able to anticipate sudden or unprecedented leaps or shocks (a technological breakthrough, a successful terrorist attack, a pandemic, a black swan).

(5) Algorithms might not have “local knowledge,” or private information, which human beings might have.

Predictions about romantic attraction, about the success of cultural products, and about coming revolutions are cases in point. The limitations of algorithms are analogous to the limitations of planners, emphasized by Hayek in his famous critique of central planning. It is an unresolved question whether and to what extent some of the limitations of algorithms might be reduced or overcome over time, with more data or various improvements; calculations are improving in extraordinary ways, but some of the relevant challenges cannot be solved with ex ante calculations.

Coglianese on Regulating Machine Learning: The Challenge of Heterogeneity

Cary Coglianese (U Penn Law) has posted “Regulating Machine Learning: The Challenge of Heterogeneity” (Competition Policy International: TechReg Chronicle, February 2023) on SSRN. Here is the abstract:

Machine learning, or artificial intelligence, refers to a vast array of different algorithms that are being put to highly varied uses, including in transportation, medicine, social media, marketing, and many other settings. Not only do machine-learning algorithms vary widely across their types and uses, but they are evolving constantly. Even the same algorithm can perform quite differently over time as it is fed new data. Due to the staggering heterogeneity of these algorithms, multiple regulatory agencies will be needed to regulate the use of machine learning, each within its own discrete area of specialization. Even these specialized expert agencies, though, will still face the challenge of heterogeneity and must approach their task of regulating machine learning with agility. They must build up their capacity in data science, deploy flexible strategies such as management-based regulation, and remain constantly vigilant. Regulators should also consider how they can use machine-learning tools themselves to enhance their ability to protect the public from the adverse effects of machine learning. Effective regulatory governance of machine learning should be possible, but it will depend on the constant pursuit of regulatory excellence.