Adams, Adams-Prassl & Adams-Prassl on Online Tribunal Judgments and The Limits of Open Justice

Zoe Adams (University of Cambridge), Abi Adams-Prassl (University of Oxford – Department of Economics), and Jeremias Adams-Prassl (University of Oxford – Faculty of Law) have posted “Online Tribunal Judgments and The Limits of Open Justice” (Forthcoming (2021) 41 Legal Studies) on SSRN. Here is the abstract:

The principle of open justice is a constituent element of the rule of law: it demands publicity of legal proceedings, including the publication of judgments. Since 2017, the UK government has systematically published first instance Employment Tribunal decisions in an online repository. Whilst a veritable treasure trove for researchers and policy makers, the database also has darker potential – from automating blacklisting to creating new and systemic barriers to access to justice. Our scrutiny of existing legal safeguards, from anonymity orders to equality law and data protection, finds a number of gaps, which threaten to make the principle of open justice as embodied in the current publication regime inimical to equal access to justice.

Okidegbe on The Democratizing Potential Of Algorithms?

Ngozi Okidegbe (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “The Democratizing Potential Of Algorithms?” (Connecticut Law Review, Forthcoming 2021) on SSRN. Here is the abstract:

Jurisdictions are increasingly embracing the use of pretrial risk assessment algorithms as a solution to the problem of mass pretrial incarceration. Conversations about the use of pretrial algorithms in legal scholarship have tended to focus on their opacity, determinativeness, reliability, validity, or their (in)ability to reduce high rates of incarceration as well as racial and socioeconomic disparities within the pretrial system. This Article breaks from this tendency, examining these algorithms from a democratization of criminal law perspective. Using this framework, it points out that currently employed algorithms are exclusionary of the viewpoints and values of the racially marginalized communities most impacted by their usage, since these algorithms are often procured, adopted, constructed, and overseen without input from these communities.

This state of affairs should caution enthusiasm for the transformative potential of pretrial algorithms since they reinforce and entrench the democratic exclusion that members of these communities already experience in the creation and implementation of the laws and policies shaping pretrial practices. This democratic exclusion, alongside social marginalization, contributes to the difficulties that these communities face in contesting and resisting the political, social, and economic costs that pretrial incarceration has had and continues to have on them. Ultimately, this Article stresses that resolving this democratic exclusion and its racially stratifying effects might be possible but requires shifting power over pretrial algorithms toward these communities. Unfortunately, the actualization of this prescription may be irreconcilable with the aims sought by algorithm reformers, revealing a deep tension between the algorithm project and racial justice efforts.

Laux, Wachter & Mittelstadt on Neutralizing Online Behavioural Advertising

Johann Laux (University of Oxford – Oxford Internet Institute), Sandra Wachter (University of Oxford – Oxford Internet Institute), and Brent Mittelstadt (University of Oxford – Oxford Internet Institute) have posted “Neutralizing Online Behavioural Advertising: Algorithmic Targeting with Market Power as an Unfair Commercial Practice” (Common Market Law Review, 58(3), 2021 (forthcoming)) on SSRN. Here is the abstract:

Online behavioural advertising (‘OBA’) relies on inferential analytics to target consumers based on data about their online behaviour. While the technology can improve the matching of adverts with consumers’ preferences, it also poses risks to consumer welfare as consumers face offer discrimination and the exploitation of their cognitive errors. The technology’s risks are exacerbated by the market power of ad intermediaries. This article shows how the Unfair Commercial Practices Directive (UCPD) can protect consumers from behavioural exploitation through incorporating market power analysis. By drawing on current research in economic theory, it argues for applying a stricter average consumer test if the market for ad intermediaries is highly concentrated. This stricter test should neutralize negative effects of behavioural targeting on consumer welfare. The article shows how OBA can amount to a misleading action and/or a misleading omission according to Articles 6 and 7 UCPD as well as an aggressive practice according to Article 8 UCPD. It further considers how the recent legislative proposals by the European Commission to enact a Digital Markets Act (DMA) and a Digital Services Act (DSA) may interact with the UCPD and the suggested stricter average consumer test.

Buiten, de Streel & Peitz on EU Liability Rules for the Age of Artificial Intelligence

Miriam Buiten (University of St. Gallen), Alexandre de Streel (University of Namur), and Martin Peitz (University of Mannheim – Department of Economics) have posted “EU Liability Rules for the Age of Artificial Intelligence” on SSRN. Here is the abstract:

When Artificial Intelligence (AI) systems possess the characteristics of unpredictability and autonomy, they present challenges for the existing liability framework. Two questions about the liability of AI deserve attention from policymakers: 1) Do existing civil liability rules adequately cover risks arising in the context of AI systems? 2) How would modified liability rules for producers, owners, and users of AI play out? This report addresses the two questions for EU non-contractual liability rules. It considers how liability rules affect the incentives of producers, users, and others that may be harmed by AI. The report provides concrete recommendations for updating the EU Product Liability Directive and for the possible legal standard and scope of EU liability rules for owners and users of AI.

Recommended.

Shope on Lawyer and Judicial Competency in the Era of Artificial Intelligence

Mark Shope (National Yang Ming Chiao Tung University; Indiana University Robert H. McKinney School of Law) has posted “Lawyer and Judicial Competency in the Era of Artificial Intelligence: Ethical Requirements for Documenting Datasets and Machine Learning Models” (Georgetown Journal of Legal Ethics, Vol. 34, 2021) on SSRN. Here is the abstract:

Judges and lawyers have the duty of technology competence, which includes competence in artificial intelligence technologies (“AI”). Not only must lawyers advise their clients on new legal, regulatory, ethical, and human rights challenges associated with AI; they increasingly need to evaluate the ethical implications of including AI technology tools in their own legal practice. Similarly, judicial competence consists of, among other things, knowledge and skill regarding technology relevant to service as a judicial officer, which includes AI. After describing how AI implicates ethical issues for lawyers and judges and the requirement for lawyers and judges to have technical competency in the AI tools they use, this article argues for the requirement to use one or both of the following human-interpretable AI disclosure forms when lawyers and judges use AI tools: a Dataset Disclosure Form or a Model Disclosure Form.
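
The two proposed forms are documentation instruments, and the abstract does not prescribe their contents. As a minimal sketch only, with hypothetical field names loosely modeled on dataset and model documentation practice rather than on the article's actual forms, the pair might be represented along these lines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDisclosureForm:
    """Hypothetical fields; the article's actual form may differ."""
    name: str
    source: str                       # provenance of the underlying data
    collection_method: str            # how the records were gathered
    known_gaps: List[str] = field(default_factory=list)  # underrepresented groups or periods

@dataclass
class ModelDisclosureForm:
    """Hypothetical fields; the article's actual form may differ."""
    model_type: str                   # e.g. "logistic regression"
    training_data: DatasetDisclosureForm
    intended_use: str                 # the legal task the tool supports
    known_limitations: List[str] = field(default_factory=list)

# Illustration: the disclosure a lawyer might review before adopting a tool
form = ModelDisclosureForm(
    model_type="logistic regression",
    training_data=DatasetDisclosureForm(
        name="closed-case outcomes",
        source="firm case-management system",
        collection_method="export of matters closed 2015-2020",
        known_gaps=["pro bono matters excluded"],
    ),
    intended_use="triage of incoming employment claims",
    known_limitations=["not validated outside employment law"],
)
print(form.intended_use)
```

The point of such a structure is simply that each field forces a disclosure a competent lawyer or judge could read and weigh before relying on the tool.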

Recommended.