Hausman on The Danger of Rigged Algorithms: Evidence from Immigration Detention Decisions

David Hausman (Stanford University, Department of Political Science) has posted “The Danger of Rigged Algorithms: Evidence from Immigration Detention Decisions” on SSRN. Here is the abstract:

This article illustrates a simple risk of algorithmic risk assessment tools: rigging. In 2017, U.S. Immigration and Customs Enforcement removed the “release” recommendation from the algorithmic tool that helped officers decide whom to detain and whom to release. After the change, the tool only recommended detention or referred cases to a supervisor. Taking advantage of the suddenness of this change, I use a fuzzy regression discontinuity design to show that the change reduced actual release decisions by about half, from around 10% to around 5% of all decisions. Officers continued to follow the tool’s detention recommendations at almost the same rate even after the tool stopped recommending release, and when officers deviated from the tool’s recommendation to order release, supervisors became more likely to overrule their decisions. Although algorithmic tools offer the possibility of reducing the use of detention, they can also be rigged to increase it.

Blass on Observing the Effects of Automating the Judicial System with Behavioral Equivalence

Joseph Blass (Northwestern University Pritzker School of Law; Northwestern University – Department of Electrical Engineering & Computer Science) has posted “Observing the Effects of Automating the Judicial System with Behavioral Equivalence” (South Carolina Law Review, Vol. 72, No. 4, 2022) on SSRN. Here is the abstract:

Building on decades of work in Artificial Intelligence, legal scholars have begun to consider whether components of the judicial system could be replaced by computers. Much of the scholarship in AI and Law has focused on whether such automated systems could reproduce the reasoning and outcomes produced by the current system. This scholarly framing captures many aspects of judicial processes, but overlooks how automated judicial decision-making would likely change how participants in the legal system interact with it, and how societal stakeholders outside that system who care about its processes would be affected by those changes.

This Article demonstrates how scholarship on legal automation has come to leave out perspectives external to the process of judicial decision-making. It analyses the problem using behavioral equivalence, a Computer Science concept that assesses systems’ behaviors according to the observations of specific monitors of those systems. It introduces a framework to examine the various observers of the judicial process and the tradeoffs they may perceive when legal systems are automated. This framework will help scholars and policymakers more effectively anticipate the consequences of automating components of the legal system.

Adams, Adams-Prassl & Adams-Prassl on Online Tribunal Judgments and The Limits of Open Justice

Zoe Adams (University of Cambridge), Abi Adams-Prassl (University of Oxford – Department of Economics), and Jeremias Adams-Prassl (University of Oxford – Faculty of Law) have posted “Online Tribunal Judgments and The Limits of Open Justice” (Forthcoming (2021) 41 Legal Studies) on SSRN. Here is the abstract:

The principle of open justice is a constituent element of the rule of law: it demands publicity of legal proceedings, including the publication of judgments. Since 2017, the UK government has systematically published first instance Employment Tribunal decisions in an online repository. Whilst a veritable treasure trove for researchers and policy makers, the database also has darker potential – from automating blacklisting to creating new and systemic barriers to access to justice. Our scrutiny of existing legal safeguards, from anonymity orders to equality law and data protection, finds a number of gaps, which threaten to make the principle of open justice as embodied in the current publication regime inimical to equal access to justice.

Chen, Stremitzer & Tobia on Having Your Day in Robot Court

Benjamin Minhao Chen (The University of Hong Kong – Faculty of Law), Alexander Stremitzer (ETH Zurich), and Kevin Tobia (Georgetown University Law Center; Georgetown University – Department of Philosophy) have posted “Having Your Day in Robot Court” on SSRN. Here is the abstract:

Should machines be judges? Some balk at this possibility, holding that ordinary citizens would see a robot-led legal proceeding as procedurally unfair: To have your “day in court” is to have a human hear and adjudicate your claims. Two original experiments assess whether laypeople share this intuition. We discover that laypeople do, in fact, see human judges as fairer than artificially intelligent (“AI”) robot judges: All else equal, there is a perceived human-AI “fairness gap.” However, it is also possible to eliminate the fairness gap. The perceived advantage of human judges over AI judges is related to perceptions of the accuracy and comprehensiveness of the decision, rather than “softer” and more distinctively human factors. Moreover, the study reveals that laypeople are amenable to “algorithm offsetting”: adding an AI hearing and increasing the AI’s interpretability reduces the perceived human-AI fairness gap. Ultimately, the results support a common challenge to robot judges: there is a concerning human-AI fairness gap. Yet, the results also indicate that the strongest version of this challenge — that human judges have inimitable procedural fairness advantages — is not reflected in the views of laypeople. In some circumstances, people see a day in robot court as no less fair than a day in human court.

Papagianneas on Automated Justice and Fairness in the PRC

Straton Papagianneas (Leiden University, Leiden Institute for Area Studies) has posted “Automated Justice and Fairness in the PRC” on SSRN. Here is the abstract:

The digitalisation and automation of the judiciary, also known as judicial informatisation (司法信息化), have been ongoing for two decades in China. The latest development is the emergence of “smart courts” (智慧法院), which are part of the Chinese party-state’s efforts to reform and modernise its governance capacity. These are legal courts where the judicial process is fully conducted digitally, and judicial officers make use of technological applications sustained by algorithms and big-data analytics. The end goal is to create a judicial decision-making process that is fully conducted in an online judicial ecosystem where the majority of tasks are automated and opportunities for human discretion or interference are minimal.

This article asks how automation and digitalisation satisfy procedural fairness in the PRC. First, it discusses the Chinese conception of judicial fairness through a literature review. It finds that the utilitarian conception of fairness is a reflection of the inherently legalist and instrumentalist vision of law. This, in turn, also influences the way innovations, such as judicial automation, are assessed. Then, it contextualises the policy of ‘building smart courts’, launched in 2017, which aimed to automate and digitalise large parts of the judicial process. The policy is part of a larger reform drive that aims to recentralise power and standardise decision-making. Next, it discusses how automation and digitalisation have changed the judicial process, based on a reading of court and media reports of technological applications. The final section analyses the implications of automation and digitalisation for judicial fairness in the PRC.

The article argues that, within the utilitarian conceptualisation of justice and law, automated justice can indeed be considered fair because it improves the quality of procedures to the extent that they facilitate the achievement of the political goals of judicial reform and the judiciary in general.

Kemper on Kafkaesque AI

Carolin Kemper (German Research Institute for Public Administration) has posted “Kafkaesque AI? Legal Decision-Making in the Era of Machine Learning” (University of San Francisco Intellectual Property and Technology Law Journal, Vol. 24, No. 2, 2020) on SSRN. Here is the abstract:

Artificial Intelligence (“AI”) is already being employed to make critical legal decisions in many countries all over the world. The use of AI in decision-making is a widely debated issue due to allegations of bias, opacity, and lack of accountability. For many, algorithmic decision-making seems obscure, inscrutable, or virtually dystopic. As in Kafka’s The Trial, the decision-makers are anonymous and cannot be challenged in a discursive manner. This article addresses the question of how AI technology can be used for legal decision-making and decision-support without appearing Kafkaesque.

First, two types of machine learning algorithms are outlined: both Decision Trees and Artificial Neural Networks are commonly used in decision-making software. The real-world use of those technologies is illustrated with a few examples. Three types of use cases are identified, depending on how directly humans are influenced by the decision. To establish criteria for evaluating the use of AI in decision-making, machine ethics, the theory of procedural justice, the rule of law, and the principles of due process are consulted. Subsequently, transparency, fairness, accountability, the right to be heard and the right to notice, as well as dignity and respect are discussed. Furthermore, possible safeguards and potential solutions to tackle existing problems are presented. In conclusion, AI rendering decisions on humans does not have to be Kafkaesque. Many solutions and approaches offer possibilities not only to ameliorate the downsides of current AI technologies, but to enrich and enhance the legal system.