Frank Fagan (South Texas College of Law Houston) has posted “Law’s Computational Paradox” (Virginia Journal of Law and Technology, forthcoming) on SSRN. Here is the abstract:
Artificial intelligence (AI) and machine learning will bring about many changes to how law is practiced, made, and enforced. However, machines cannot do everything that humans can do, and law must face the limitations of computational learning just as much as any other human endeavor. For predictive learning, these limitations are permanent and can be used to ascertain the future of law. The basic tasks of lawyering, such as brief writing, oral argument, and witness coaching, will become increasingly precise, but that precision will eventually plateau, and the essential character of lawyering will remain largely unchanged. Similarly, where machines can be used to clarify the application of law, they will simply limit judicial discretion, consistent with moves from standards to rules or from rules to personalized law.
In each of these scenarios—lawyering and case clarification—enhanced precision is made possible through systemic closure of the machine’s domain, and AI will ascend easily. In scenarios where law’s architecture is open, and systemic closure is not possible or worthwhile, machines will be frustrated by an inability to discern patterns, or by a powerlessness to comprehend the predictive power of previously discerned patterns in newly changed contexts. Lawmakers may add new variables to compensate and encourage attempts to model future environments, but open innovation and social change will undermine even a determined empiricism. In response to these limitations, lawmakers may attempt to actively impose closure on dynamic legal domains in an effort to enhance law’s precision. By limiting the admissibility of evidence, black-listing variables, requiring specific thresholds of white-listed variables, and pursuing other formalist strategies of closure, law can elevate its predictive precision for a given environment, but this elevation comes at the expense of openness and innovation. This is law’s computational paradox.
This Article introduces the paradox across machine learning applications in lawmaking, enforcement, rights allocation, and lawyering, and shows that innovation serves as a self-corrective to the excessive mechanization of law. Because innovation, change, and open legal domains are necessary ingredients for continual technological ascendance in law and elsewhere, fears of AI-based law as an existential threat to human-centered law are exaggerated. It should be emphasized, however, that there is ample room for quantification and counting in both closed and open settings; the products of innovation will always undergo measurement and machine learning algorithms will always require updating and refinement. This is the process of technological becoming. The goal for law is to never fully arrive.
The uncertainty of dynamic legal environments, even if diminishing with growing predictive power in law, forms the basis of an interpersonal constitutional authority. Understanding that some disruptions will always be unplanned prevents the construction of blind pathways toward longer-term legal error and, relatedly, prevents empirical and technical rationales from overrunning a human-centered public square. A growing awareness of the paradoxical error generated by precise, but closed, computational environments will generate a societal response that seeks to balance the benefits of precision and innovation. This balancing—what might be termed a “computational legal ethics”—implies that tomorrow’s lawyers, more so than their counterparts of the past, will be called upon to discern what should be considered versus ignored.