Simon on Using Artificial Intelligence in the Law Review Submissions Process

Brenda M. Simon (California Western School of Law) has posted “Using Artificial Intelligence in the Law Review Submissions Process” (UC Davis Law Review, Forthcoming) on SSRN. Here is the abstract:

The use of artificial intelligence to help editors examine law review submissions may provide a way to improve an overburdened system. This Article is the first to explore the promise and pitfalls of using artificial intelligence in the law review submissions process. Technology-assisted review of submissions offers many possible benefits. It can simplify preemption checks, prevent plagiarism, detect failure to comply with formatting requirements, and identify missing citations. These efficiencies may allow editors to address serious flaws in the current selection process, including the use of heuristics that may result in discriminatory outcomes and dependence on lower-ranked journals to conduct the initial review of submissions. Although editors should not rely on a score assigned by an algorithm to decide whether to accept an article, technology-assisted review could increase the efficiency of initial screening and provide feedback to editors on their selection decisions. Uncovering potential human bias in the selection process may encourage editors to develop ways to minimize its harmful effects.

Despite these benefits, using artificial intelligence to streamline the submissions process raises significant concerns. Technology-assisted review may efficiently embed existing biases in the selection process rather than correct them. Artificial intelligence systems may rely on considerations that result in discriminatory effects and negatively impact groups that are not adequately represented during development. The tendency to defer to seemingly neutral and often opaque algorithms can increase the risk of adverse outcomes. With careful oversight, however, some of these concerns can be addressed. Even an imperfect system may be worth using in limited situations where the benefits substantially outweigh the potential harms. With appropriate supervision, circumscribed application, and ongoing refinement, artificial intelligence may provide a more efficient and fairer submissions experience for both editors and authors.
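
Simon's abstract does not commit to a particular technique, but the kind of screening it contemplates, flagging a submission that heavily overlaps already-published work for preemption or plagiarism review, can be sketched briefly. Here is a minimal illustration assuming a TF-IDF bag-of-words model; the corpus, threshold, and function name are hypothetical, not the Article's method:

```python
# Hypothetical sketch of similarity-based screening for preemption and
# plagiarism checks; illustrative only, not drawn from Simon's Article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar(submission: str, published: list[str], threshold: float = 0.8):
    """Return (index, score) pairs for published pieces whose TF-IDF
    cosine similarity to the submission exceeds the threshold."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([submission] + published)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return [(i, round(float(s), 3)) for i, s in enumerate(scores) if s >= threshold]
```

On such a design, the score would only route a flagged submission to a human editor for closer review, consistent with the Article's caution that editors should not let an algorithmic score decide acceptance.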

Joh on The Unexpected Consequences of Automation in Policing

Elizabeth E. Joh (UC Davis – School of Law) has posted “The Unexpected Consequences of Automation in Policing” (SMU Law Review, Forthcoming 2022) on SSRN. Here is the abstract:

This essay has two aims. First, it explains how automated decisionmaking can produce unexpected results. This is a problem long understood in the field of industrial organization. To identify such effects in policing is no easy task. The police are a notoriously difficult institution to study. They are insular, dislike outsiders, and especially dislike critical outsiders. Fortunately, we have the benefit of a decade’s worth of experimentation in the police use of automated decisionmaking, and the resulting political backlash against some of these uses. As a result, some large urban police departments have undergone external investigations to see whether tools like predictive policing or individual criminal risk assessments are biased or ineffective or simply too costly in light of their benefits. One of these recent reports, on the use of acoustic gunshot detection software in Chicago, provides a window into one type of police automation.

This leads to the essay’s second observation. Automation is not just a set of tools that the police use; it changes the environment of policing in unexpected ways. There are now some widely known criticisms of the increasing use of automated tools in policing, but they focus primarily on the flaws of the technologies used. The training data in facial recognition algorithms may be biased along lines of race, gender, and ethnicity. Risk assessments for gun violence may in truth be poor guides for police intervention. These claims are singularly technology-focused. Accordingly, errors and inefficiencies merit technological improvements. Even calls for bans on technologies like facial recognition are responses to the technology itself. As Chicago’s experience with acoustic gunshot detection technology demonstrates, however, automation served not just as a tool for the police but also led to changes in police behavior. These changes in police conduct are documented in a 2021 report from the Chicago Office of Inspector General. And they are noteworthy. If automation unexpectedly changes police behaviors, these changes have implications for how we understand policing through the lens of inequality and unaccountability.

Bridgesmith & Elmessiry on The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?

Larry Bridgesmith (Vanderbilt Law School; ASU Sandra Day O’Connor College of Law) and Adel Elmessiry have posted “The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?” (Akron Law Review, Vol. 54, No. 4, 2021) on SSRN. Here is the abstract:

We live in an instant-access, on-demand world of information sharing. The global pandemic of 2020 accelerated the necessity of remote working and team collaboration. Work teams are exploring and utilizing the remote work platforms required to serve in place of the stand-ups common in the agile workplace. Online tools are needed to provide visibility into the status of projects and the accountability necessary to ensure that tasks are completed on time and on budget. Digital transformation of organizational data is now the target of AI projects to provide enterprise transparency and predictive insights into the process of work.

This paper develops the relationship between AI, law, and the digital transformation sweeping every industry sector. There is legitimate concern about the degree to which nascent issues involving emerging technology threaten human rights and well-being. However, lawyers will play a critical role in both the prosecution and defense of these rights. Equally, if not more, lawyers will also be a vibrant source of insight and guidance for the development of “ethical” AI in a proactive—not simply reactive—way.

Coglianese & Lai on Algorithm vs. Algorithm

Cary Coglianese (University of Pennsylvania Carey Law School) and Alicia Lai (University of Pennsylvania Law School; U.S. Courts of Appeals) have posted “Algorithm vs. Algorithm” (Duke Law Journal, Vol. 72, p. 1281, 2022) on SSRN. Here is the abstract:

Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent. On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free-riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question about what will work better: human algorithms or digital ones.

Whalen & Zingg on The Patent-Eligibility of Artificial Intelligence after Alice Corp. v. CLS Bank International

Ryan Whalen (The University of Hong Kong – Faculty of Law) and Raphael Zingg (Waseda University) have posted “Innovating under Uncertainty: The Patent-Eligibility of Artificial Intelligence after Alice Corp. v. CLS Bank International” (Research in Law and Economics, Volume 30 (2022)) on SSRN. Here is the abstract:

Artificial intelligence-related inventions raise complex questions of how to define the boundaries around patentable subject matter. In the United States, many claim that recent doctrinal developments by the Supreme Court have led to incoherence and excessive uncertainty within the innovation community. In response, policymakers and stakeholders have suggested legislative amendments to address these concerns. We first review these developments, and subsequently use the patent examination record to empirically test the claims of increased uncertainty. We find that, although uncertainty did spike following the Supreme Court’s holding in Alice, it quickly returned to levels comparable to its historic norm. This has implications both for those advocating for legislative changes to the law of eligible subject matter and for other jurisdictions considering adopting a test similar to that applied in Alice.

Stern et al. on Artificial Intelligence in the Chinese Courts

Rachel E. Stern (University of California, Berkeley), Benjamin L. Liebman (Columbia Law School), Margaret E. Roberts (UCSD – 21st Century China Center), and Alice Wang (Columbia Law School) have posted “Automating Fairness? Artificial Intelligence in the Chinese Courts” (Columbia Journal of Transnational Law, Vol. 59, 2021) on SSRN. Here is the abstract:

How will surging global interest in data analytics and artificial intelligence transform the day-to-day operations of courts, and what are the implications for judicial power? In the last five years, Chinese courts have come to lead the world in their efforts to deploy automated pattern analysis to monitor judges, standardize decision-making, and observe trends in society. This article chronicles how and why Chinese courts came to embrace artificial intelligence, making public tens of millions of court judgments in the process. Although technology is certainly being used to strengthen social control and boost the legitimacy of the Chinese Communist Party, examining recent developments in the Chinese courts complicates common portrayals of China as a rising exemplar of digital authoritarianism. Data are incomplete, and algorithms are often untested.

The rise of algorithmic analytics also risks negative consequences for the Chinese legal system itself, including increased inequality among court users, new blind spots in the state’s ability to see and track its own officials and citizens, and diminished judicial authority. Other jurisdictions grappling with how to integrate artificial intelligence into the legal system are likely to confront similar dynamics. Framed broadly, our goal is to push the nascent literature on courts, data analytics, and artificial intelligence to consider the political implications of technological change. In particular, recent developments in China’s courts offer a caution that two powerful trends—ascendant interest in algorithmic governance and worldwide assaults on judicial authority—could be intertwined.

Nabilou on Probabilistic Settlement Finality in Proof-of-Work Blockchains: Legal Considerations

Hossein Nabilou (University of Amsterdam Law School; UNIDROIT) has posted “Probabilistic Settlement Finality in Proof-of-Work Blockchains: Legal Considerations” on SSRN. Here is the abstract:

The concept of settlement finality sits at the heart of any type of commercial transaction, whether the transaction is in physical or electronic form or is mediated by fiat currencies or cryptocurrencies. Transaction finality refers to the exact moment in time when proprietary interests in the object or medium of transaction pass from one party to the counterparty and the obligations of the parties to a transaction are discharged in an unconditional and irrevocable manner, i.e., in a way that cannot be reversed even by subsequent legal defenses or actions against the counterparty. Given the benefits of finality in terms of legal certainty and its potential systemic implications, legal systems throughout the globe have devised mechanisms to determine the exact moment of the finality of a transaction and settlement of obligations conducted using fiat currencies as a medium of exchange. However, because transactions involving cryptocurrencies fall beyond the scope of such rules, they introduce new challenges to determining the exact moment of finality in on-chain cryptocurrency transactions. This complexity arises because the finality of transactions in cryptocurrencies that rely on proof-of-work (PoW) consensus algorithms is probabilistic. Probabilistic finality makes the determination of the exact moment of operational finality nearly impossible.

After discussing the mechanisms of settlement of contractual obligations in the traditional sale of goods as well as in payment and settlement systems – which rely on the concept of legal finality rather than on operational finality – the paper argues that even in traditional payment and settlement systems the determination of operational settlement finality is nearly impossible. This is because no transaction, not even one involving a cash payment, can be operationally deemed irrevocable, as it remains prone to hacks or unwinding by electronic means or mere brute force. The paper suggests that the concept of finality is inherently a legal concept and that, as in conventional finance, the moment of finality in PoW blockchains should rely on the conceptual separation of operational finality from legal finality. However, given the decentralized nature of cryptocurrencies, defining the moment of finality in PoW blockchains, which may require a minimum level of institutional infrastructure and centralization to support the credibility of finality, may face insurmountable challenges.
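
Why probabilistic finality resists any exact moment of operational finality can be made concrete with the standard catch-up model from the Bitcoin whitepaper (an illustration of the underlying consensus mechanics, not a calculation from Nabilou's paper): an attacker controlling a fraction q of total hash power overtakes an honest chain that is z confirmations ahead with a probability that shrinks as z grows but never reaches zero. A minimal sketch:

```python
# Probability that an attacker with hash-power share q ever overtakes
# the honest chain after z confirmations, per the Bitcoin whitepaper's
# catch-up model (illustrative; not from Nabilou's paper).
from math import exp, factorial

def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q                      # honest miners' share of hash power
    if q >= p:
        return 1.0                   # a majority attacker eventually wins
    lam = z * q / p                  # expected attacker progress over z honest blocks
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# Six confirmations against a 10% attacker still leave roughly a
# 0.02% chance of reversal: settlement is never operationally final.
print(catch_up_probability(0.10, 6))  # ~0.00024
```

However many confirmations a recipient waits for, the reversal probability only approaches zero asymptotically, which is why any bright-line moment of finality for PoW transactions must be a legal stipulation rather than an operational fact.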