Veale & Borgesius on Demystifying the Draft EU Artificial Intelligence Act

Michael Veale (University College London, Faculty of Laws; The Alan Turing Institute) and Frederik Zuiderveen Borgesius (iHub, Radboud University, Nijmegen) have posted “Demystifying the Draft EU Artificial Intelligence Act” (Computer Law Review International (2021) 22(4) 97-112) on SSRN. Here is the abstract:

In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act. We present an overview of the Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. Aspects of the AI Act, such as different rules for different risk-levels of AI, make sense. But we also find that some provisions of the Draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals. Several overarching aspects, including the enforcement regime and the risks of maximum harmonisation pre-empting legitimate national AI policy, engender significant concern. These issues should be addressed as a priority in the legislative process.

Selinger & Rhee on Normalizing Surveillance

Evan Selinger (Rochester Institute of Technology) and Judy Hyojoo Rhee (Duke University) have posted “Normalizing Surveillance” (Northern European Journal of Philosophy (2021) 22(1) 49-74) on SSRN. Here is the abstract:

Definitions of privacy change, as do norms for protecting it. Why, then, are privacy scholars and activists currently worried about “normalization”? This essay explains what normalization means in the context of surveillance concerns and clarifies why normalization has significant governance consequences. We emphasize two things. First, the present is a transitional moment in history. AI-infused surveillance tools offer a window into the unprecedented dangers of automated real-time monitoring and analysis. Second, privacy scholars and activists can better integrate supporting evidence to counter skepticism about their most disturbing and speculative claims about normalization. Empirical results in moral psychology support the assertion that widespread surveillance typically will lead people to become favorably disposed toward it. If this causal dynamic is pervasive, it can diminish autonomy and contribute to a slippery slope trajectory that diminishes privacy and civil liberties.

Hausman on The Danger of Rigged Algorithms: Evidence from Immigration Detention Decisions

David Hausman (Stanford University, Department of Political Science) has posted “The Danger of Rigged Algorithms: Evidence from Immigration Detention Decisions” on SSRN. Here is the abstract:

This article illustrates a simple risk of algorithmic risk assessment tools: rigging. In 2017, U.S. Immigration and Customs Enforcement removed the “release” recommendation from the algorithmic tool that helped officers decide whom to detain and whom to release. After the change, the tool only recommended detention or referred cases to a supervisor. Taking advantage of the suddenness of this change, I use a fuzzy regression discontinuity design to show that the change reduced actual release decisions by about half, from around 10% to around 5% of all decisions. Officers continued to follow the tool’s detention recommendations at almost the same rate even after the tool stopped recommending release, and when officers deviated from the tool’s recommendation to order release, supervisors became more likely to overrule their decisions. Although algorithmic tools offer the possibility of reducing the use of detention, they can also be rigged to increase it.