Bender on Algorithmic Elections

Sarah M.L. Bender (University of Michigan Law School) has posted “Algorithmic Elections” (Michigan Law Review, Forthcoming) on SSRN. Here is the abstract:

Artificial intelligence (AI) has entered election administration. Across the country, election officials are beginning to use AI systems to purge voter records, verify mail-in ballots, and draw district lines. Already, these technologies are having a profound effect on voting rights and democratic processes. However, they have received relatively little attention from AI experts, advocates, and policymakers. Scholars have sounded the alarm on a variety of “algorithmic harms” resulting from AI’s use in the criminal justice system, employment, healthcare, and other civil rights domains. Many of these same algorithmic harms manifest in elections and voting, but have been underexplored and remain unaddressed.

This Note offers three contributions. First, it documents the various forms of “algorithmic decisionmaking” that are currently present in U.S. elections. This is the most comprehensive survey of AI’s use in elections and voting to date. Second, it explains how algorithmic harms resulting from these technologies are disenfranchising eligible voters and disrupting democratic processes. Finally, it identifies several unique characteristics of the U.S. election administration system that are likely to complicate reform efforts and must be addressed to safeguard voting rights.

Jarovsky on Transparency by Design: Reducing Informational Vulnerabilities Through UX Design

Luiza Jarovsky (Tel Aviv University, Buchmann Faculty of Law) has posted “Transparency by Design: Reducing Informational Vulnerabilities Through UX Design” on SSRN. Here is the abstract:

Can transparency help us solve the challenges posed by dark patterns and other unfair practices online? Despite the many weaknesses of transparency obligations in the data protection arena, I suggest that a Transparency by Design (TbD) approach can help us better achieve data protection goals, especially by empowering data subjects with accessible information, facilitating the exercise of data protection rights, and helping to reduce informational vulnerabilities. TbD proposes that compliance with transparency rules should happen at all levels of design and user interaction, rather than being restricted to Privacy Policies (PPs) or similar legal statements. In previous work, I discussed how manipulative design can exploit behavioral biases and generate unfairness; here, I show how failing to support data subjects with accessible information, adequate design, and meaningful choices can similarly create an unfair online environment.

This work highlights the shortcomings of transparency rules in the context of the General Data Protection Regulation (GDPR). I demonstrate that, in practice, GDPR obligations do not result in effective transparency for data subjects, increasing unfairness in the data protection context. Consequently, data subjects are, most of the time, unaware of how, why, and when their data is collected; are uninformed about the risks or broader consequences of their personal data-fueled online activities; do not know their rights regarding their data; and do not have access to meaningful choices.

To address these shortcomings, I propose TbD, so that we, the data subjects, are not only effectively informed of the collection and use of our data, but can also exercise our rights as data subjects, make meaningful privacy choices, and mitigate our informational vulnerabilities.

The main goal of TbD is to serve data subjects with information that is meaningful and actionable, rather than a standard block of text that acts as a liability document for the controller's legal department, as currently happens with PPs. Design, manifested through User Experience (UX), is a central tool in this framework: it should embed TbD's values and premises and empower data subjects throughout their interaction with the controller.

Craig Reviewing The Reasonable Robot

Carys J. Craig (Osgoode Hall Law School, York University) has posted “The Relational Robot: A Normative Lens for AI Legal Neutrality” (reviewing Ryan Abbott, The Reasonable Robot (Cambridge University Press, 2020)) (Jerusalem Review of Legal Studies, Forthcoming 2022) on SSRN. Here is the abstract:

In his impactful book, “The Reasonable Robot” (Cambridge UP, 2020), Ryan Abbott proposes a new guiding tenet for the law’s regulation of Artificial Intelligence: AI legal neutrality. This would establish, as a principled starting point, the default position that “the law should not discriminate between AI and human behaviour.” In this Review Essay, I suggest that AI legal neutrality, as conceived by Abbott, is an interesting proposition but a potentially dangerous default principle with which to equip law for the emerging realities of AI. When we scratch beneath the surface, this concept of neutrality or equal treatment is too focused on the individual person/thing, and too reliant on analogical reasoning and false equivalence. It is therefore too far removed from the normative underpinnings of law, its subjects, and its teleology. Instead, I argue, we need a more substantive notion of law’s technological neutrality, one that looks to the law’s normative objectives to set the default. The place to start, I suggest, is not with a legal neutrality that, by design, disregards the inherent differences between humans and robots and their respective behaviors; on the contrary, we need to be fully attentive to the dynamics of human-robot relations in social context, and alert to the dangers of overlooking differences. Rights and responsibilities should be allocated with a clear view to the relationships and subjectivities they shape and the social values they advance.

In Part II, I outline Abbott’s argument in respect of legal neutrality and its application to intellectual property law. I then compare its implications in the patent law sphere to its potential significance in the copyright context. In Part III, I sketch an alternative approach to understanding technological neutrality and law, concerned with consistency in pursuit of normative objectives rather than formal non-discrimination between technologies. I conclude by proposing a relational approach to regulating robots, which would, I believe, bring a much-needed normative lens to the pursuit of legal neutrality.

Schrepel on Law + Technology

Thibault Schrepel (University Paris 1 Panthéon-Sorbonne; VU University Amsterdam; Stanford University’s CodeX Center; Sciences Po) has posted “Law + Technology” (Stanford University CodeX Research Paper Series, 2022) on SSRN. Here is the abstract:

The classical “law & technology” approach focuses on harms created by technology. This approach seems to be common sense; after all, why be interested—from a legal standpoint—in situations where technology does not cause damage? On close inspection, another approach dubbed “law + technology” can better increase the common good.

The “+” approach builds on complexity science to consider both the issues and the positive contributions that technology brings to society. The goal is to address the negative ramifications of technology while leveraging its positive regulatory power. Achieving this double objective requires policymakers and regulators to consider a range of intervention methods and choose those that are most suitable.