Custers on AI in Criminal Law

Bart Custers (Leiden University – Center for Law and Digital Technologies) has posted “AI in Criminal Law: An Overview of AI Applications in Substantive and Procedural Criminal Law” (in: B.H.M. Custers & E. Fosch Villaronga (eds.), Law and Artificial Intelligence, Heidelberg: Springer, pp. 205-223) on SSRN. Here is the abstract:

Both criminals and law enforcement are increasingly making use of the opportunities that AI may offer, opening a whole new chapter in the cat-and-mouse game of committing versus addressing crime. This chapter maps the major developments of AI use in both substantive criminal law and procedural criminal law. In substantive criminal law, A/B optimisation, deepfake technologies, and algorithmic profiling are examined, particularly the way in which these technologies contribute to existing and new types of crime. The role of AI in assessing the effectiveness of sanctions and other justice-related programs and practices is also examined, particularly risk assessment instruments and evidence-based sanctioning. In procedural criminal law, AI can be used as a law enforcement technology, for instance, for predictive policing or as a cyber agent technology. The role of AI in evidence (data analytics after search and seizure, Bayesian statistics, developing scenarios) is also examined. Finally, focus areas for further legal research are proposed.

Christakis & Lodie on the French Supreme Administrative Court’s Finding That the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations Is ‘Strictly Necessary’ and Proportional

Theodore Christakis (University Grenoble-Alpes) and Alexandre Lodie (MIAI – AI Regulation Chair) have posted “The French Supreme Administrative Court Finds the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations ‘Strictly Necessary’ and Proportional” (European Review of Digital Administration & Law (ERDAL), Forthcoming) on SSRN. Here is the abstract:

In this case the French NGO “La Quadrature du Net” (LQDN) asked the French Supreme Administrative Court (“Conseil d’Etat”) to invalidate article R 40-26 of the code of criminal procedure, which expressly provides for the use of facial recognition to aid in the identification of suspects during criminal investigations. LQDN considered that the use of this technology was not “absolutely necessary” as required by the French version of Article 10 of the Law Enforcement Directive (LED).

The Court dismissed this claim. The Conseil d’Etat held that using facial recognition in such a way is ‘absolutely necessary’ when the amount of data available to the police is taken into account, and that it is proportionate to the aim pursued.

This decision feeds into the debate about how to interpret the “strict necessity” requirement (“absolute necessity” in the French version of the text) laid down by the LED concerning the use of facial recognition.

This decision is also part of a wider issue in Europe, where facial recognition for investigative purposes has been under the spotlight. Indeed, States are currently considering which facial recognition techniques should be prohibited and which uses should be authorised, assuming that adequate safeguards are put in place.

The view of the Conseil d’Etat, together with that of the Italian DPA cited in the article, tends to suggest that States consider deploying facial recognition for ex-post individual identification purposes necessary and proportionate to the aim pursued, which is to repress crime. The EDPB and the draft AI Act proposed by the European Commission also seem to align in allowing such use of facial recognition technology for ex-post individual identification in criminal investigations, provided there is an appropriate national legal framework authorizing this and providing all adequate safeguards.

Slobogin on Predictive Policing in the United States

Christopher Slobogin (Vanderbilt U Law) has posted “Predictive Policing in the United States” (forthcoming in The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed.)) on SSRN. Here is the abstract:

This chapter, published in the book The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed., Thomson Reuters, 2022), describes police use of algorithms to identify “hot spots” and “hot people,” and then discusses how this practice should be regulated. Predictive policing algorithms should have to demonstrate a “hit rate” that justifies both the intrusion necessary to acquire the information needed to implement the algorithm and the action (e.g., surveillance, stop, or arrest) that police seek to carry out based on the algorithm’s results. Further, for legality reasons, even a sufficient hit rate should not authorize action unless police have also observed risky conduct by the person the algorithm targets. Finally, the chapter discusses ways of dealing with the possible impact of racialized policing on the data fed into these algorithms.

Joh on Ethical AI in American Policing

Elizabeth E. Joh (UC Davis School of Law) has posted “Ethical AI in American Policing” (Notre Dame J. Emerging Tech. 2022) on SSRN. Here is the abstract:

We know there are problems in the use of artificial intelligence in policing, but we don’t quite know what to do about them. One can also find many reports and white papers today offering principles for the responsible use of AI systems by the government, civil society organizations, and the private sector. Yet, largely missing from the current debate in the United States is a shared framework for thinking about the ethical and responsible use of AI that is specific to policing. There are many AI policy guidance documents now, but their value to the police is limited. Simply repeating broad principles about the responsible use of AI systems is less helpful than guidance that 1) takes into account the specific context of policing, and 2) considers the American experience of policing in particular. There is an emerging consensus about what ethical and responsible values should be part of AI systems. This essay considers what kind of ethical considerations can guide the use of AI systems by American police.

Lin on How to Save Face & the Fourth Amendment: Developing an Algorithmic Accountability Industry for Facial Recognition Technology in Law Enforcement

Patrick K. Lin (Brooklyn Law School) has posted “How to Save Face & the Fourth Amendment: Developing an Algorithmic Accountability Industry for Facial Recognition Technology in Law Enforcement” (33 Alb. L.J. Sci. & Tech. 2023 Forthcoming) on SSRN. Here is the abstract:

For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make matters worse, the Fourth Amendment—intended to limit police power and enacted to protect against unreasonable searches—has struggled to rein in new surveillance technologies since its inception.

This Article examines the Supreme Court’s Fourth Amendment jurisprudence leading up to Carpenter v. United States and suggests that the Court is reinterpreting the amendment for the digital age. Still, the too-slow expansion of privacy protections raises challenging questions about racial bias, the legitimacy of police power, and ethical issues in artificial intelligence design. This Article proposes the development of an algorithmic auditing and accountability market that not only sets standards for AI development and limitations on governmental use of facial recognition but encourages collaboration between public interest technologists and regulators. Beyond the necessary changes to the technological and legal landscape, the current system of policing must also be reevaluated if hard-won civil liberties are to endure.