Chalkidis et al. on LexGLUE: A Benchmark Dataset for Legal Language Understanding in English

Ilias Chalkidis (University of Copenhagen; Athens University) et al. have posted “LexGLUE: A Benchmark Dataset for Legal Language Understanding in English” on SSRN. Here is the abstract:

Law, interpretations of law, legal arguments, agreements, etc. are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
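For readers curious what evaluation "in a standardized way" might look like in practice, here is a minimal sketch, assuming the benchmark is distributed through the Hugging Face datasets hub under an identifier such as lex_glue with per-task configurations like scotus; the identifiers, model name, and hosting details below are assumptions for illustration, not drawn from the abstract.

```python
# Minimal sketch (assumptions noted above): load one LexGLUE-style task and
# prepare a generic encoder for fine-tuning; a legal-oriented model could be
# compared simply by swapping the model name.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a single-label classification task (e.g., SCOTUS issue-area coding),
# assuming it is hosted as the "scotus" configuration of "lex_glue".
scotus = load_dataset("lex_glue", "scotus")

model_name = "bert-base-uncased"  # generic baseline; a legal-domain model would be swapped in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=scotus["train"].features["label"].num_classes,
)

# Tokenize the opinions; fine-tuning would proceed with the standard Trainer
# API (the training loop itself is omitted here).
encoded = scotus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)
```

The point of the sketch is only that a shared task format lets generic and legal-domain models be compared under identical preprocessing and metrics, which is the kind of standardization the abstract describes.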

Recommended.

Molitorisova, Purnhagen & Šístek on Technological Collaboration between EU Administrations

Alexandra Molitorisova (University of Bayreuth, Faculty of Law; Masaryk University), Kai P. Purnhagen (University of Bayreuth; Erasmus University of Rotterdam – Rotterdam Institute of Law and Economics), and Pavel Šístek have posted “Techno-regulation: Technological Collaboration between EU Administrations” on SSRN. Here is the abstract:

This article examines different forms of technological collaboration between Member States’ public administrations as currently present in the EU, namely institutional and transactional, drawing on examples from two sectors – telecommunications and food. The article argues that different collaboration forms can be explored more systematically by policy makers when faced with techno-regulatory choices. It subsequently argues that when developing techno-regulatory tools for the implementation and enforcement of EU law, national regulatory authorities should place technological cooperation at the forefront of their policy considerations. It concludes with a plea for increased reciprocity in technological collaboration based on open-source solutions.

Winter on The Challenges of Artificial Judicial Decision-Making for Liberal Democracy

Christoph Winter (Harvard University; Instituto Tecnológico Autónomo de México) has posted “The Challenges of Artificial Judicial Decision-Making for Liberal Democracy” (P. Bystranowski, P. Janik, & M. Próchnicki (Eds.), Judicial decision-making: Integrating empirical and theoretical perspectives (forthcoming)) on SSRN. Here is the abstract:

The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks to liberal democratic values from implementing AI in judicial decision-making. This article sets out to fill this void by identifying and engaging with challenges arising from artificial judicial decision-making, focusing on three pillars of liberal democracy, namely equal treatment of citizens, transparency, and judicial independence. Methodologically, the work takes a comparative perspective between human and artificial decision-making, using the former as a normative benchmark to evaluate the latter.

The chapter first argues that AI that would improve on equal treatment of citizens has already been developed, but not yet adopted. Second, while the lack of transparency in AI decision-making poses severe risks which ought to be addressed, AI can also increase the transparency of options and trade-offs that policy makers face when considering the consequences of artificial judicial decision-making. Such transparency of options offers tremendous benefits from a democratic perspective. Third, the overall shift of power from human intuition to advanced AI may threaten judicial independence, and with it the separation of powers. While improvements regarding discrimination and transparency are available or on the horizon, it remains unclear how judicial independence can be protected, especially with the potential development of advanced artificial judicial intelligence (AAJI). Working out the political and legal infrastructure to reap the fruits of artificial judicial intelligence in a safe and stable manner should become a priority of future research in this area.

Lee on Algorithmic Bias and the New Chicago School

Jyh-An Lee (The Chinese University of Hong Kong – Faculty of Law) has posted “Algorithmic Bias and the New Chicago School” (Law, Innovation & Technology, Volume 14, Issue 1, 2022) on SSRN. Here is the abstract:

AI systems are increasingly deployed in both public and private sectors to independently make complicated decisions with far-reaching impact on individuals and society. However, many AI algorithms are biased in the collection or processing of data, resulting in prejudiced decisions based on demographic features. Algorithmic biases occur because of the training data fed into the AI system or the design of algorithmic models. While most legal scholars propose a direct-regulation approach associated with a right to explanation or transparency obligations, this article provides a different picture regarding how indirect regulation can be used to regulate algorithmic bias based on the New Chicago School framework developed by Lawrence Lessig. This article concludes that an effective regulatory approach toward algorithmic bias will be the right mixture of direct and indirect regulations through architecture, norms, the market, and the law.

Bambauer, Zarsky & Mayer on Algorithmic Fairness Among Similar Individuals

Jane R. Bambauer (University of Arizona College of Law), Tal Zarsky (University of Haifa – Faculty of Law), and Jonathan Mayer (Princeton University) have posted “When a Small Change Makes a Big Difference: Algorithmic Fairness Among Similar Individuals” (UC Davis Law Review, Forthcoming) on SSRN. Here is the abstract:

If a machine learning algorithm treats two people very differently because of a slight difference in their attributes, the result intuitively seems unfair. Indeed, an aversion to this sort of treatment has already begun to affect regulatory practices in employment and lending. But an explanation, or even a definition, of the problem has not yet emerged. This Article explores how these situations—when a Small Change Makes a Big Difference (SCMBDs)—interact with various theories of algorithmic fairness related to accuracy, bias, strategic behavior, proportionality, and explainability. When SCMBDs are associated with an algorithm’s inaccuracy, such as overfitted models, they should be removed (and routinely are). But outside those easy cases, when SCMBDs have, or seem to have, predictive validity, the ethics are more ambiguous. Various strands of fairness (like accuracy, equity, and proportionality) will pull in different directions. Thus, while SCMBDs should be detected and probed, what to do about them will require humans to make difficult choices between social goals.
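To make the SCMBD pattern concrete, here is a deliberately simple, hypothetical sketch in Python; the credit cutoff, scores, and loan terms are invented for illustration and are not taken from the article.

```python
# Illustrative sketch only: a hypothetical hard-threshold credit model showing
# the SCMBD pattern, where applicants one point apart receive very different
# outcomes. The cutoff, scores, and loan terms are invented.
def loan_decision(credit_score: int) -> str:
    """Approve at favorable terms above a fixed cutoff, otherwise deny."""
    return "approved at 5% APR" if credit_score >= 700 else "denied"

applicant_a = 700
applicant_b = 699  # nearly identical attribute, one point lower

print(loan_decision(applicant_a))  # approved at 5% APR
print(loan_decision(applicant_b))  # denied

# A simple probe for SCMBDs: perturb the input slightly and flag decision flips.
def flags_scmbd(score: int, epsilon: int = 1) -> bool:
    return loan_decision(score) != loan_decision(score - epsilon)

print(flags_scmbd(applicant_a))  # True: a small change makes a big difference
```

The probe at the end corresponds only to the detection step the abstract endorses; what to do once such a flip is found is the normative question the Article says requires difficult human choices between social goals.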