Kolt on Algorithmic Black Swans

Noam Kolt (University of Toronto) has posted “Algorithmic Black Swans” (Washington University Law Review, Vol. 101, Forthcoming) on SSRN. Here is the abstract:

From biased lending algorithms to chatbots that spew violent hate speech, AI systems already pose many risks to society. While policymakers have a responsibility to tackle pressing issues of algorithmic fairness, privacy, and accountability, they also have a responsibility to consider broader, longer-term risks from AI technologies. In public health, climate science, and financial markets, anticipating and addressing societal-scale risks is crucial. As the COVID-19 pandemic demonstrates, overlooking catastrophic tail events — or “black swans” — is costly. The prospect of automated systems manipulating our information environment, distorting societal values, and destabilizing political institutions is increasingly palpable. At present, it appears unlikely that market forces will address this class of risks. Organizations building AI systems do not bear the costs of diffuse societal harms and have limited incentive to install adequate safeguards. Meanwhile, regulatory proposals such as the White House AI Bill of Rights and the European Union AI Act primarily target the immediate risks from AI, rather than broader, longer-term risks. To fill this governance gap, this Article offers a roadmap for “algorithmic preparedness” — a set of five forward-looking principles to guide the development of regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society.

Christakis & Lodie on the French Supreme Administrative Court Finding the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations ‘Strictly Necessary’ and Proportional

Theodore Christakis (University Grenoble-Alpes) and Alexandre Lodie (MIAI – AI Regulation Chair) have posted “The French Supreme Administrative Court Finds the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations ‘Strictly Necessary’ and Proportional” (European Review of Digital Administration & Law (ERDAL), Forthcoming) on SSRN. Here is the abstract:

In this case, the French NGO “La Quadrature du Net” (LQDN) asked the French Supreme Administrative Court (“Conseil d’Etat”) to invalidate Article R. 40-26 of the Code of Criminal Procedure, which expressly provides for the use of facial recognition to aid in the identification of suspects during criminal investigations. LQDN argued that the use of this technology was not “absolutely necessary,” as required by the French version of Article 10 of the Law Enforcement Directive (LED).

The Court dismissed this claim, holding that using facial recognition in this way is ‘absolutely necessary’ in light of the volume of data available to the police, and proportionate to the aim pursued.

This decision feeds into the debate about how to interpret the “strict necessity” requirement (“absolute necessity” in the French version of the text) laid down by the LED concerning the use of facial recognition.

This decision is also part of a wider debate in Europe, where facial recognition for investigative purposes has come under the spotlight. States are currently considering which facial recognition techniques should be prohibited and which uses should be authorised, provided that adequate safeguards are put in place.

The view of the Conseil d’Etat, together with that of the Italian DPA cited in the article, tends to suggest that States consider deploying facial recognition for ex-post individual identification to be necessary and proportionate to the aim pursued, namely the repression of crime. The EDPB and the draft AI Act proposed by the European Commission also seem to permit such use of facial recognition technology for ex-post individual identification in criminal investigations, provided that an appropriate national legal framework authorises it and provides adequate safeguards.