Ashley on Teaching Law and Digital Age Legal Practice with an AI and Law Seminar

Kevin Ashley (U Pitt Law) has posted “Teaching Law and Digital Age Legal Practice with an AI and Law Seminar: Justice, Lawyering and Legal Education in the Digital Age” (Chicago-Kent Law Review, Vol. 88, p. 783, 2013) on SSRN. Here is the abstract:

A seminar on Artificial Intelligence (“AI”) and Law can teach law students lessons about legal reasoning and legal practice in the digital age. AI and Law is a subfield of AI/computer science research that focuses on designing computer programs—computational models—that perform legal reasoning. These computational models are used in building tools to assist in legal practice and pedagogy, and in studying legal reasoning in order to contribute to cognitive science and jurisprudence. Today, subject to a number of qualifications, computer programs can reason with legal rules, apply legal precedents, and even argue like a legal advocate.

This article provides a guide and examples to prepare law students for the digital age by means of an AI and Law seminar. After introducing the science of Artificial Intelligence and its application to law, the paper presents the syllabus for an up-to-date AI and Law seminar. With the syllabus as a framework, the paper showcases some characteristic AI and Law programs and illustrates the pedagogically important lessons that AI and Law has learned about reasoning with legal rules and cases, about legal argument, and about the digital document technologies that are becoming available, and even the norm, in legal practice.

Davis on Robolawyers and Robojudges

Joshua P. Davis (University of San Francisco – School of Law) has posted “Of Robolawyers and Robojudges” (Hastings Law Journal, 2022) on SSRN. Here is the abstract:

Artificial intelligence (AI) may someday play various roles in litigation, particularly complex litigation. It may be able to provide strategic advice, advocate through legal briefs and in court, help judges assess class action settlements, and propose or impose compromises. It may even write judicial opinions and decide cases. For it to perform those litigation tasks, however, would require two breakthroughs: one involving a form of instrumental reasoning that we might loosely call common sense or more precisely call abduction and the other involving a form of reasoning that we will label purposive, that is, the formation of ends or objectives. This Article predicts that AI will likely make strides at abductive reasoning but not at purposive reasoning. If those predictions prove accurate, it contends, AI will be able to perform sophisticated tasks usually reserved for lawyers, but it should not be trusted to perform similar tasks reserved for judges. In short, we might welcome a role for robolawyers but resist the rise of robojudges.

Ashley on Capturing the Dialectic between Principles and Cases

Kevin Ashley (University of Pittsburgh – School of Law) has posted “Capturing the Dialectic between Principles and Cases” (Jurimetrics, Vol. 44, p. 229, 2004) on SSRN. Here is the abstract:

Theorists in ethics and law posit a dialectical relationship between principles and cases; abstract principles both inform and are informed by the decisions of specific cases. Until recently, however, it has not been possible to investigate or confirm this relationship empirically. This work involves a systematic study of a set of ethics cases written by a professional association’s board of ethical review. Like judges, the board explains its decisions in opinions. It applies normative standards, namely principles from a code of ethics, and cites past cases. We hypothesized that the board’s explanations of its decisions elaborated upon the meaning and applicability of the abstract code principles and past cases. In effect, the board operationalizes the principles and cases. We hypothesized further that this operationalization could be captured computationally and used to improve automated information retrieval. A computer program was designed to retrieve from the online database those ethics code principles and past cases that are relevant to analyzing new problems. In an experiment, we used the computer program to test the hypotheses. The experiment demonstrated that the dialectical relationship between principles and cases exists and that the associated operationalization information improves the program’s ability to assess which codes and cases are relevant to analyzing new problems. The results have significance both for the study of legal reasoning and for the improvement of legal information retrieval.
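The core retrieval task the abstract describes—matching a new problem against stored code principles and past cases—can be illustrated in miniature. The sketch below is purely schematic and is not the program studied in the article; the principle labels and fact terms are invented, and a simple Jaccard overlap stands in for whatever relevance measure the actual system used:

```python
def overlap_score(query_facts: set[str], item_terms: set[str]) -> float:
    """Jaccard overlap between a new problem's facts and a stored item's terms."""
    if not query_facts or not item_terms:
        return 0.0
    return len(query_facts & item_terms) / len(query_facts | item_terms)

# Hypothetical ethics-code principles indexed by invented fact terms.
principles = {
    "III.1": {"confidentiality", "client", "disclosure"},
    "II.4": {"safety", "public", "welfare"},
}

# A new problem, described as a bag of fact terms.
problem = {"engineer", "public", "safety", "report"}

# Rank principles by overlap with the new problem's facts.
ranked = sorted(principles, key=lambda p: overlap_score(problem, principles[p]),
                reverse=True)
print(ranked)  # → ['II.4', 'III.1']
```

The article’s point is that raw term overlap is not enough: the “operationalization” information gleaned from board opinions is what improves the ranking beyond a baseline like this one.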

Simon on Using Artificial Intelligence in the Law Review Submissions Process

Brenda M. Simon (California Western School of Law) has posted “Using Artificial Intelligence in the Law Review Submissions Process” (UC Davis Law Review, Forthcoming) on SSRN. Here is the abstract:

The use of artificial intelligence to help editors examine law review submissions may provide a way to improve an overburdened system. This Article is the first to explore the promise and pitfalls of using artificial intelligence in the law review submissions process. Technology-assisted review of submissions offers many possible benefits. It can simplify preemption checks, prevent plagiarism, detect failure to comply with formatting requirements, and identify missing citations. These efficiencies may allow editors to address serious flaws in the current selection process, including the use of heuristics that may result in discriminatory outcomes and dependence on lower-ranked journals to conduct the initial review of submissions. Although editors should not rely on a score assigned by an algorithm to decide whether to accept an article, technology-assisted review could increase the efficiency of initial screening and provide feedback to editors on their selection decisions. Uncovering potential human bias in the selection process may encourage editors to develop ways to minimize its harmful effects.

Despite these benefits, using artificial intelligence to streamline the submissions process raises significant concerns. Technology-assisted review may enable efficient implementation of existing biases into the selection process, rather than correcting them. Artificial intelligence systems may rely on considerations that result in discriminatory effects and negatively impact groups that are not adequately represented during development. The tendency to defer to seemingly neutral and often opaque algorithms can increase the risk of adverse outcomes. With careful oversight, however, some of these concerns can be addressed. Even an imperfect system may be worth using in limited situations where the benefits substantially outweigh the potential harms. With appropriate supervision, circumscribed application, and ongoing refinement, artificial intelligence may provide a more efficient and fairer submissions experience for both editors and authors.
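Some of the screening tasks the abstract enumerates, such as checking formatting compliance and flagging missing citations, are mechanical enough to automate directly. As a hedged illustration only (not a tool from the article; the footnote conventions assumed here are invented), a minimal check that cross-references point to footnotes the manuscript actually defines might look like:

```python
import re

def check_supra_references(text: str) -> list[str]:
    """Flag 'supra note N' cross-references that point to footnotes
    the manuscript never defines (footnotes assumed as '[N]' at line start)."""
    defined = {m.group(1) for m in re.finditer(r"^\[(\d+)\]", text, re.MULTILINE)}
    problems = []
    for m in re.finditer(r"supra note (\d+)", text):
        if m.group(1) not in defined:
            problems.append(f"'supra note {m.group(1)}' has no matching footnote")
    return problems

draft = (
    "As argued supra note 2, the rule is settled.\n"
    "[1] First source.\n"
    "[2] Second source.\n"
    "But supra note 9 dangles.\n"
)
print(check_supra_references(draft))  # → ["'supra note 9' has no matching footnote"]
```

Checks of this kind are the easy end of technology-assisted review; the article’s harder questions concern scoring and bias, where simple rules give way to opaque models.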

Bridgesmith & Elmessiry on The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice

Larry Bridgesmith (Vanderbilt Law School; ASU Sandra Day O’Connor College of Law) and Adel Elmessiry have posted “The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?” (Akron Law Review, Vol. 54, No. 4, 2021) on SSRN. Here is the abstract:

We live in an instant access and on-demand world of information sharing. The global pandemic of 2020 accelerated the necessity of remote working and team collaboration. Work teams are exploring and utilizing the remote work platforms required to serve in place of stand-ups common in the agile workplace. Online tools are needed to provide visibility to the status of projects and the accountability necessary to ensure that tasks are completed on time and on budget. Digital transformation of organizational data is now the target of AI projects to provide enterprise transparency and predictive insights into the process of work.

This paper develops the relationship between AI, law, and the digital transformation sweeping every industry sector. There is legitimate concern about the degree to which many nascent issues involving emerging technology threaten human rights and well-being. However, lawyers will play a critical role in both the prosecution and defense of these rights. Equally, if not more so, lawyers will be a vibrant source of insight and guidance for the development of “ethical” AI in a proactive—not simply reactive—way.

Atik on Quantum Computing and the Legal Imagination

Jeffery Atik (Loyola Law School Los Angeles) has posted “Quantum Computing and the Legal Imagination” (18 SciTech Lawyer 12 (2022)) on SSRN. Here is the abstract:

Powerful and cost-effective quantum computers will soon arrive. The existence of quantum computers – and their special capacities – will stimulate a search for legal applications. Quantum computing – when deployed together with artificial intelligence – will
(1) enable new legal tools,
(2) permit modeling of complex social and economic relations that can be used to inform legal determinations,
(3) raise new legal, ethical and distributional challenges and
(4) stimulate the legal imagination to reach new – and initially strange – insights and understandings.
The implementation of quantum computing in law will depend on the location of quantum advantage, where quantum computers outperform conventional computers. Mathematicians can already teach us about the characteristics of certain problem types and suggest which of these will be favorable ground for quantum advantage. Lawyer-engineers will need to match the identified mathematical characteristics of decisions amenable to quantum advantage to real-world legal concerns. Quantum computing will drive further developments in operationalizing law and facilitating legal prediction. Many of these changes will take place ‘under the hood’ and will not demand a thorough understanding of quantum theory, but the practitioner will experience a different feel from the technology. Antitrust and bank regulation are examples of fields where quantum computing can be expected to have an important impact. Quantum computing, like the digital technologies that arose during the past 30 years, might have the unfortunate effect of exacerbating wealth and power differentials. That said, quantum computing will have a stimulating effect on our minds, including our legal imagination. It will lead us to look for new approaches, new ways of thinking about problems and their solutions, and new roles for law.


Coupette & Hartung on Creating a Culture of Constructive Criticism in Computational Legal Studies

Corinna Coupette (Max Planck Institute for Informatics) and Dirk Hartung (Bucerius Law School; Stanford Codex Center) have posted “Sharing and Caring: Creating a Culture of Constructive Criticism in Computational Legal Studies” on SSRN. Here is the abstract:

We introduce seven foundational principles for creating a culture of constructive criticism in computational legal studies. Beginning by challenging the current perception of papers as the primary scholarly output, we call for a more comprehensive interpretation of publications. We then suggest making these publications computationally reproducible, releasing all of the data and all of the code all of the time, on time, and in the most functional form possible. Subsequently, we invite constructive criticism in all phases of the publication life cycle. We posit that our proposals will help form our field, and float the idea of marking this maturity by the creation of a modern flagship publication outlet for computational legal studies.

Hod et al. on Data Science Meets Law in the Classroom

Shlomi Hod (Boston University), Karni Chagal-Feferkorn (University of Ottawa Common Law Section), Niva Elkin-Koren (Tel-Aviv University – Faculty of Law), and Avigdor Gal (Technion-Israel Institute of Technology) have posted “Data Science Meets Law” (65 Communications of the ACM, 2022) on SSRN. Here is the abstract:

Engaging lawyers and data scientists in multi-disciplinary dialogue may result in the design of better AI systems. Combining the joint input of these two disciplines as early as possible in the life cycle of AI systems may help in properly embedding human values in these systems and in minimizing their risks of unintended harms.

Lawyers and data scientists, however, often think of the other discipline as “speaking a different language”, and facilitating dialogue between them is not always trivial.

This paper describes a “hands-on” course taught to both law and data science students in academic institutions in the U.S., Europe, and Israel. The unique format of the course, which is based on students working in small mixed groups, enables meaningful dialogue between the disciplines and is intended to contribute to the design of “responsible AI” systems.

In the paper, we share the pedagogic principles that guided us, as well as insights on how to foster multidisciplinary dialogue between law and data science students.

Delgado, Barocas & Levy on Participatory Design in Legal AI

Fernando Delgado (Cornell University), Solon Barocas (Microsoft Research; Cornell University), and Karen Levy (Cornell University) have posted “An Uncommon Task: Participatory Design in Legal AI” (ACM Proceedings on Human-Computer Interaction 2022) on SSRN. Here is the abstract:

Despite growing calls for participation in AI design, there are to date few empirical studies of what these processes look like and how they can be structured for meaningful engagement with domain experts. In this paper, we examine a notable yet understudied AI design process in the legal domain that took place over a decade ago, the impact of which still informs legal automation efforts today. Specifically, we examine the design and evaluation activities that took place from 2006 to 2011 within the Text REtrieval Conference’s (TREC) Legal Track, a computational research venue hosted by the National Institute of Standards and Technology. The Legal Track of TREC is notable in the history of AI research and practice because it relied on a range of participatory approaches to facilitate the design and evaluation of new computational techniques—in this case, for automating attorney document review for civil litigation matters. Drawing on archival research and interviews with coordinators of the Legal Track of TREC, our analysis reveals how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers and helped bridge the chasm between computational research and real-world, high-stakes litigation practice. In analyzing this case from the recent past, our aim is to empirically ground contemporary critiques of AI development and evaluation and the calls for greater participation as a means to address them.

Ashley & Bruninghaus on Computer Models for Legal Prediction

Kevin Ashley (University of Pittsburgh – School of Law) and Stefanie Bruninghaus (same) have posted “Computer Models for Legal Prediction” (Jurimetrics, Vol. 46, p. 309, 2006) on SSRN. Here is the abstract:

Computerized algorithms for predicting the outcomes of legal problems can extract and present information from particular databases of cases to guide the legal analysis of new problems. They can have practical value despite the limitations that make reliance on predictions risky for other real-world purposes such as estimating settlement values. An algorithm’s ability to generate reasonable legal arguments also is important. In this article, computerized prediction algorithms are compared not only in terms of accuracy, but also in terms of their ability to explain predictions and to integrate predictions and arguments. Our approach, the Issue-Based Prediction algorithm, is a program that tests hypotheses about how issues in a new case will be decided. It attempts to explain away counterexamples inconsistent with a hypothesis, while apprising users of the counterexamples and making explanatory arguments based on them.
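The prediction-with-counterexamples idea the abstract describes can be caricatured in a few lines. The sketch below is a hedged illustration only, not the authors’ actual Issue-Based Prediction program: the factor names and case names are invented, and a simple majority vote among factor-sharing cases stands in for the article’s hypothesis-testing and explaining-away machinery.

```python
def predict_issue(new_case_factors: set[str],
                  past_cases: list[dict]) -> tuple[str, list[dict]]:
    """Predict an issue's outcome by majority vote among past cases that
    share factors with the new case; return dissenting cases as
    counterexamples the user should inspect (toy stand-in for IBP)."""
    relevant = [c for c in past_cases if c["factors"] & new_case_factors]
    if not relevant:
        return "abstain", []
    plaintiff = [c for c in relevant if c["outcome"] == "plaintiff"]
    defendant = [c for c in relevant if c["outcome"] == "defendant"]
    if len(plaintiff) >= len(defendant):
        return "plaintiff", defendant  # defendant wins are the counterexamples
    return "defendant", plaintiff

# Invented trade-secret-style cases with invented factors.
cases = [
    {"name": "A v. B", "factors": {"secret", "disclosed"}, "outcome": "defendant"},
    {"name": "C v. D", "factors": {"secret", "security-measures"}, "outcome": "plaintiff"},
    {"name": "E v. F", "factors": {"security-measures"}, "outcome": "plaintiff"},
]

outcome, counterexamples = predict_issue({"secret", "security-measures"}, cases)
print(outcome)  # → plaintiff
print([c["name"] for c in counterexamples])  # → ['A v. B']
```

Where this toy version stops, the article’s algorithm continues: rather than merely listing the dissenting cases, it tries to explain them away (for example, by distinguishing them on their factors) and builds explanatory arguments from them for the user.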