Adams, Adams-Prassl & Adams-Prassl on Online Tribunal Judgments and The Limits of Open Justice

Zoe Adams (University of Cambridge), Abi Adams-Prassl (University of Oxford – Department of Economics), and Jeremias Adams-Prassl (University of Oxford – Faculty of Law) have posted “Online Tribunal Judgments and The Limits of Open Justice” (Forthcoming (2021) 41 Legal Studies) on SSRN. Here is the abstract:

The principle of open justice is a constituent element of the rule of law: it demands publicity of legal proceedings, including the publication of judgments. Since 2017, the UK government has systematically published first instance Employment Tribunal decisions in an online repository. Whilst a veritable treasure trove for researchers and policy makers, the database also has darker potential – from automating blacklisting to creating new and systemic barriers to access to justice. Our scrutiny of existing legal safeguards, from anonymity orders to equality law and data protection, finds a number of gaps, which threaten to make the principle of open justice as embodied in the current publication regime inimical to equal access to justice.

Okidegbe on The Democratizing Potential Of Algorithms?

Ngozi Okidegbe (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “The Democratizing Potential Of Algorithms?” (Connecticut Law Review, Forthcoming 2021) on SSRN. Here is the abstract:

Jurisdictions are increasingly embracing the use of pretrial risk assessment algorithms as a solution to the problem of mass pretrial incarceration. Conversations about the use of pretrial algorithms in legal scholarship have tended to focus on their opacity, determinativeness, reliability, validity, or their (in)ability to reduce high rates of incarceration as well as racial and socioeconomic disparities within the pretrial system. This Article breaks from this tendency, examining these algorithms from a democratization of criminal law perspective. Using this framework, it points out that currently employed algorithms are exclusionary of the viewpoints and values of the racially marginalized communities most impacted by their usage, since these algorithms are often procured, adopted, constructed, and overseen without input from these communities.

This state of affairs should caution enthusiasm for the transformative potential of pretrial algorithms since they reinforce and entrench the democratic exclusion that members of these communities already experience in the creation and implementation of the laws and policies shaping pretrial practices. This democratic exclusion, alongside social marginalization, contributes to the difficulties that these communities face in contesting and resisting the political, social, and economic costs that pretrial incarceration has had and continues to have on them. Ultimately, this Article stresses that resolving this democratic exclusion and its racially stratifying effects might be possible but requires shifting power over pretrial algorithms toward these communities. Unfortunately, the actualization of this prescription may be irreconcilable with the aims sought by algorithm reformers, revealing a deep tension between the algorithm project and racial justice efforts.

Laux, Wachter & Mittelstadt on Neutralizing Online Behavioral Advertising

Johann Laux (University of Oxford – Oxford Internet Institute), Sandra Wachter (University of Oxford – Oxford Internet Institute), and Brent Mittelstadt (University of Oxford – Oxford Internet Institute) have posted “Neutralizing Online Behavioural Advertising: Algorithmic Targeting with Market Power as an Unfair Commercial Practice” (Common Market Law Review, 58(3), 2021 (forthcoming)) on SSRN. Here is the abstract:

Online behavioural advertising (‘OBA’) relies on inferential analytics to target consumers based on data about their online behaviour. While the technology can improve the matching of adverts with consumers’ preferences, it also poses risks to consumer welfare as consumers face offer discrimination and the exploitation of their cognitive errors. The technology’s risks are exacerbated by the market power of ad intermediaries. This article shows how the Unfair Commercial Practices Directive (UCPD) can protect consumers from behavioural exploitation through incorporating market power analysis. By drawing on current research in economic theory, it argues for applying a stricter average consumer test if the market for ad intermediaries is highly concentrated. This stricter test should neutralize negative effects of behavioural targeting on consumer welfare. The article shows how OBA can amount to a misleading action and/or a misleading omission according to Articles 6 and 7 UCPD as well as an aggressive practice according to Article 8 UCPD. It further considers how the recent legislative proposals by the European Commission to enact a Digital Markets Act (DMA) and a Digital Services Act (DSA) may interact with the UCPD and the suggested stricter average consumer test.

Buiten, de Streel & Peitz on EU Liability Rules for the Age of Artificial Intelligence

Miriam Buiten (University of St. Gallen), Alexandre de Streel (University of Namur), and Martin Peitz (University of Mannheim – Department of Economics) have posted “EU Liability Rules for the Age of Artificial Intelligence” on SSRN. Here is the abstract:

When Artificial Intelligence (AI) systems possess the characteristics of unpredictability and autonomy, they present challenges for the existing liability framework. Two questions about liability for AI deserve attention from policymakers: 1) Do existing civil liability rules adequately cover risks arising in the context of AI systems? 2) How would modified liability rules for producers, owners, and users of AI play out? This report addresses these two questions for EU non-contractual liability rules. It considers how liability rules affect the incentives of producers, users, and others who may be harmed by AI. The report provides concrete recommendations for updating the EU Product Liability Directive and for the possible legal standard and scope of EU liability rules for owners and users of AI.

Recommended.

Shope on Lawyer and Judicial Competency in the Era of Artificial Intelligence

Mark Shope (National Yang Ming Chiao Tung University; Indiana University Robert H. McKinney School of Law) has posted “Lawyer and Judicial Competency in the Era of Artificial Intelligence: Ethical Requirements for Documenting Datasets and Machine Learning Models” (Georgetown Journal of Legal Ethics, Vol. 34, 2021) on SSRN. Here is the abstract:

Judges and lawyers have a duty of technology competence, which includes competence in artificial intelligence technologies (“AI”). Not only must lawyers advise their clients on new legal, regulatory, ethical, and human rights challenges associated with AI; they increasingly need to evaluate the ethical implications of including AI technology tools in their own legal practice. Similarly, judicial competence consists of, among other things, knowledge of and skill with technology relevant to service as a judicial officer, which includes AI. After describing how AI implicates ethical issues for lawyers and judges and the requirement for lawyers and judges to have technical competency in the AI tools they use, this article argues that lawyers and judges using AI tools should be required to use one or both of the following human-interpretable AI disclosure forms: a Dataset Disclosure Form or a Model Disclosure Form.

Recommended.

Howells & Twigg-Flesner on Interconnectivity and Liability: AI and the Internet of Things

Geraint Howells (University of Manchester) and Christian Twigg-Flesner (University of Warwick – School of Law) have posted “Interconnectivity and Liability: AI and the Internet of Things” (Larry di Matteo et al., Artificial Intelligence: Global Perspectives on Law & Ethics, Forthcoming) on SSRN. Here is the abstract:

In this paper, we focus on the question of liability in circumstances where an IoT system has not performed as expected and this has resulted in loss or damage of some kind. We will argue that the combination of AI and the IoT raises several novel questions about the basis for assessing responsibility and for allocating liability for loss or damage, and that this will necessitate a more creative approach to liability than is generally followed in many legal systems. Most legal systems combine linear liability based on contractual relationships with fault-based or strict liability imposed on a wrongdoer in tort law. We seek to demonstrate that this approach is no longer sufficient to deal with the complex issues associated with the interaction of AI and the IoT, and to offer possible solutions. Our discussion will proceed as follows: first, we will explain the nature of an IoT system in general terms, drawing on case studies from both the consumer and commercial spheres, before focusing on the role of AI in the operation of an IoT system. Second, we will analyze the particular issues that arise where an AI-driven IoT system malfunctions and causes loss or damage, and the specific legal questions this raises. Third, we will examine to what extent legal systems (particularly the UK and the EU) are currently able to address these questions, and identify aspects that require action, whether in the form of legislation or some other intervention. Finally, we will propose an alternative approach for addressing the liability challenges arising in this particular context.

Naughton on Facebook’s Decision to Uphold the Ban on Donald Trump and its Consequence in Social Media Censorship Regulations

James Naughton (Loyola University Chicago School of Law) has posted “Facebook’s Decision to Uphold the Ban on Donald Trump and its Consequence in Social Media Censorship Regulations” on SSRN. Here is the abstract:

A short comment on Facebook’s recent decision to ban President Trump from its platforms. The comment argues that, while the U.S. Supreme Court has not yet applied the First Amendment to the “vast democratic forums of the internet,” the time is ripe for guidance from the courts. This is especially true now that social media has become a main source of communication for governmental figures and entities.

Waldman on Outsourcing Privacy

Ari Ezra Waldman (Northeastern University) has posted “Outsourcing Privacy” (Notre Dame Law Review, Vol. 96, 2021) on SSRN. Here is the abstract:

An underappreciated part of the narrative of privacy managerialism—and the focus of this Essay—is the information industry’s increasing tendency to outsource privacy compliance responsibilities to technology vendors. In the last three years alone, the International Association of Privacy Professionals has identified more than 250 companies in the privacy technology vendor market. These companies market their products as tools to help companies comply with new privacy laws like the General Data Protection Regulation, with consent orders from the Federal Trade Commission, and with other privacy rules from around the world. They do so by building compliance templates, pre-completed assessment forms, and monitoring consents, among many other things. As such, many of these companies are doing far more than helping companies identify the data they have or answer data access requests; many of them are instantiating their own definitions and interpretations of complex privacy laws into the technologies they create and doing so only with managerial values in mind. This undermines privacy law in four ways: it creates asymmetry between large technology companies and their smaller competitors, it makes privacy law underinclusive by limiting it to those requirements that can be written into code, it erodes expertise by outsourcing human work to artificial intelligence and automated systems, and it creates a “black box” that undermines accountability.

Erdos on Comparing Constitutional Privacy and Data Protection Rights within the EU

David Erdos (University of Cambridge – Faculty of Law; Trinity Hall) has posted “Comparing Constitutional Privacy and Data Protection Rights within the EU” on SSRN. Here is the abstract:

Although both data protection and the right to privacy (or respect for private life) are recognised within the EU Charter, they are otherwise generally seen as having very different constitutional histories. The right to privacy is often seen as traditional and data protection as novel. Based on a comprehensive analysis of rights within EU State constitutions, it is found that this distinction is overdrawn. Only five current EU States recognised a constitutional right to privacy prior to 1990, although approximately three quarters, and also the European Convention, do so today. Subsidiary constitutional rights related to the home and correspondence, but not to honour and/or reputation, are more long-standing, and this helps link the core of privacy to the protection of intimacy. Constitutional rights to data protection emerged roughly contemporaneously and were often linked to a general right to privacy, but they are still only found in around half of EU States. There is also no clear consensus on specific guarantees: around half of the States which recognise these include rights to transparency, and a slightly smaller number a right to rectification. This could suggest that data subject empowerment over a wide range of connected information is an important emerging particularity tied to data protection as a constitutional guarantee.

Budish, Gasser & Eigen on German Digital Council: An ‘Inside-Out’ Case Study

Ryan Budish (Harvard University – Berkman Klein Center for Internet & Society), Urs Gasser (Harvard University – Berkman Klein Center for Internet & Society), and Melyssa Eigen (Harvard Law School; Berkman Klein Center) have posted “German Digital Council: An ‘Inside-Out’ Case Study” on SSRN. Here is the abstract:

In 2018, German Chancellor Dr. Angela Merkel appointed a group of nine scholars and practitioners, including BKC’s Urs Gasser, to the German Digital Council (GDC). The GDC was formed with the unusual mission of asking both critical and constructive questions about government projects from a digital perspective, alerting the government to new technical and economic developments, and operating as a symbol for ‘digitalization’ during its interactions with the government.

While the GDC has made significant contributions since its founding, its operation has been fairly opaque; this opacity was an intentional part of the GDC’s operational strategy. As a consequence, the basic design, structure, operation, and impact of the GDC have not been easily accessible beyond a relatively small group of involved parties.

At the same time, the unusual mission, composition, and mode of operation of the GDC have been met with interest from governments around the world and other stakeholders that are confronted with the question of how to bring outside expertise into the workings of a government.

This case study attempts to bridge this knowledge gap by documenting and sharing some of the important and, thus far, undocumented details about the formation and operation of the GDC. The insights shared are based on a series of group and individual interviews that the authors conducted with members of the GDC and members of the Chancellery who worked closely with the GDC.