Tokson on Artificial Intelligence and the Anti-Authoritarian Fourth Amendment

Matthew Tokson (U Utah S.J. Quinney College Law) has posted “Artificial Intelligence and the Anti-Authoritarian Fourth Amendment” (27 U. Penn. J. Const. L. __ (forthcoming 2025)) on SSRN. Here is the abstract:

AI-based surveillance and policing technologies facilitate authoritarian drift. That is, the systems of observation, detection, and enforcement that AI makes possible tend to reduce structural checks on executive authority and to concentrate power among fewer and fewer people. In the wrong hands, they can help authorities detect subversive behavior and discourage or punish dissent, while enabling corruption, selective enforcement, and other abuses. These effects, although subtle in today’s relatively primitive AI-enabled systems, will become increasingly significant as AI technology improves.

Today, the most influential branch of Fourth Amendment scholarship conceives of the Fourth Amendment’s central purpose as preserving citizen privacy against intrusive government observation. Another, less prominent line of scholarship emphasizes the Fourth Amendment’s role in preventing government authoritarianism, focusing on concepts like power, security, and citizen autonomy. The insights of this latter branch of Fourth Amendment theory are likely to be increasingly relevant as AI comes to play a larger role in surveillance and law enforcement.

The pro-authoritarian nature of AI law enforcement should influence how courts assess such law enforcement under the Fourth Amendment. This symposium Essay examines the role that Fourth Amendment law can play in regulating AI-enabled enforcement and preventing authoritarianism. It contends that, among other things, courts assessing whether networked camera or other sensor systems implicate the Fourth Amendment should account for the risks of unregulated, permeating surveillance by AI agents. Judges evaluating the reasonable use of force by police robots should consider the dangers of allowing AI systems to monopolize the use of force in a jurisdiction and the diminished justifications for self-defense. Likewise, courts can incorporate factors specific to the AI context into their totality of the circumstances analyses of Fourth Amendment reasonableness. Whether there is a “human in the loop” during enforcement encounters, and whether there is meaningful civilian oversight over AI-enabled enforcement programs, should play a substantial role in assessing the reasonableness of AI-centered police practices. By adapting the principles of the anti-authoritarian Fourth Amendment to the new frontier of AI law enforcement, legal actors can restrain the pro-authoritarian effects of emerging law enforcement technologies.

Heydari et al. on Putting Police Body-Worn Camera Footage to Work: A Civil Liberties Evaluation of Truleo’s AI Analytics Platform

Farhang Heydari (Vanderbilt Law) et al. have posted “Putting Police Body-Worn Camera Footage to Work: A Civil Liberties Evaluation of Truleo’s AI Analytics Platform” on SSRN. Here is the abstract:

This report summarizes findings from a civil liberties evaluation of Truleo, an AI-powered analytics platform designed to automate the review of police body-worn camera (BWC) footage. It includes a summary of how Truleo’s platform works, policy choices made by the company, and our assessment of safeguards and risks of the platform from a civil liberties perspective. The report also offers a series of recommendations for policymakers considering the adoption of Truleo or similar technologies. These include the necessity for independent testing of claimed benefits, democratic authorization for deployment, and ongoing transparency and public input around the platform’s design and operation. Importantly, the report argues that BWC footage should be treated as “civic data” owned by the public, not the police, to enable wider access and use for purposes such as research, oversight, and the exploration of alternative public safety approaches.

Generalizing beyond Truleo, we note that despite their cost, explosive growth, and the incredible amount of personal data they capture, BWCs are significantly underregulated by law, with many critical policy choices left to the law enforcement agencies that use the technology. As a result, use of the technology has shifted away from its original impetus, which was to improve outcomes for members of the public interacting with the police and to provide transparency and accountability when things went wrong, and increasingly toward use as an investigative tool. Yet we view BWC footage as the largest collection of data on policing in existence, and one that has been woefully underutilized as a tool for evaluating and improving policing, leaving much of the value of our nation’s investment in BWCs untapped. Given this gap, there is great potential in AI technologies like Truleo that can rebalance the scales by automating the review of this footage. We worry, however, that this potential will never be realized so long as police retain sole control of BWC footage. Accordingly, we emphasize the need for proactive policymaking by legislators to ensure that emerging AI analytics technologies serve the public interest and help realize the full potential of the significant public investment in BWCs.

Seng on Electronic Evidence

Daniel Kiat Boon Seng (Director) has posted “Electronic Evidence” on SSRN. Here is the abstract:

In any modern society, information of probative value is increasingly produced, processed, transmitted, and stored in digital devices and storage facilities. When admitted in a court of law, such information takes the form of electronic evidence. To address the peculiar issues associated with the admissibility of electronic evidence in Singapore, the rules of hearsay and authentication require that electronic evidence receive special treatment. The rules of evidence enacted in Singapore in 2012 that provide for the authentication of electronic evidence seek to strike a balance between facilitating the admission of uncontested electronic evidence and providing for the authentication of contested electronic evidence. In addition, existing rules of evidence in Singapore are well placed to deal with the admission of electronic and digital signatures in evidence, as well as the proof of copies of digital evidence. However, it remains to be seen what new legal measures are required to address the issues of digitally altered or generated electronic evidence.

Logan on Deepfakes in Interrogations

Wayne A. Logan (Florida State U College Law) has posted “Deepfakes in Interrogations” on SSRN. Here is the abstract:

In recent years, academics, policymakers, and others have expressed concern over police use of artificial intelligence in areas such as predictive policing and facial recognition. One area not receiving attention is the interrogation of suspects. This article addresses that gap, focusing on the inevitable use by police of AI-generated deepfakes to secure confessions, such as by creating and presenting to suspects a highly realistic still photo or video falsely indicating their presence at a crime scene, or an equally convincing audio recording of an associate or witness implicating them in a crime.

Police authority to lie in interrogations dates back to Frazier v. Cupp (1969), where the Supreme Court condoned a police lie to a suspect that an associate had implicated him in a crime, holding that the deceit did not render the resulting confession involuntary for due process purposes, while positing that an innocent individual would not falsely confess. Building upon the now-recognized reality that innocents do indeed confess, and research demonstrating the coercive impact of police use of the “false evidence ploy” (FEP) in securing confessions, scholars have urged a general ban on its use. Courts, while often expressing dismay over police resort to FEPs, typically conclude that they do not violate due process, but at times have held otherwise, expressing particular concern over police presentation of fabricated physical evidence to suspects (versus orally relating its existence, as in Frazier).

While sympathetic to a ban on police deceit in interrogations more generally, this Article singles out deepfakes for specific concern, based on their unprecedented verisimilitude, the demonstrated inability of the public to identify their falsity, and the common belief that police are not permitted to lie about evidence, much less fabricate it. Ultimately, the Article makes the case for reconsideration of Frazier, based on research findings of the past fifty years, as well as the many major changes to the criminal legal system since 1969, especially the significantly increased pressure felt by defendants to plead guilty (very often on the basis of confessions, rendering them more susceptible to FEPs).

Beyond doctrine, a ban will have important functional benefits. These include providing ex ante guidance to police and judges alike, who lack clarity on the parameters of the notoriously indeterminate due process voluntariness standard. More broadly, a ban will serve as an important bulwark against the deleterious wave of disinformation now sowing distrust in governmental actors and institutions. If deepfakes are condoned in interrogations, it is not hard to imagine that judges, jurors, witnesses, and the public will grow skeptical of police, as well as of the reliability of evidence in criminal cases, undermining a cornerstone of the nation’s constitutional democracy.

Custers on AI in Criminal Law

Bart Custers (Leiden University – Center for Law and Digital Technologies) has posted “AI in Criminal Law: An Overview of AI Applications in Substantive and Procedural Criminal Law” (in B.H.M. Custers & E. Fosch Villaronga (eds.), Law and Artificial Intelligence, Heidelberg: Springer, pp. 205-223) on SSRN. Here is the abstract:

Both criminals and law enforcement are increasingly making use of the opportunities that AI may offer, opening a whole new chapter in the cat-and-mouse game of committing versus addressing crime. This chapter maps the major developments in AI use in both substantive criminal law and procedural criminal law. In substantive criminal law, A/B optimisation, deepfake technologies, and algorithmic profiling are examined, particularly the ways in which these technologies contribute to existing and new types of crime. The role of AI in assessing the effectiveness of sanctions and other justice-related programs and practices is also examined, particularly risk assessment instruments and evidence-based sanctioning. In procedural criminal law, AI can be used as a law enforcement technology, for instance for predictive policing or as a cyber agent technology. The role of AI in evidence (data analytics after search and seizure, Bayesian statistics, developing scenarios) is also examined. Finally, focus areas for further legal research are proposed.

Christakis & Lodie on the French Supreme Administrative Court’s Finding That Police Use of Facial Recognition to Support Criminal Investigations Is ‘Strictly Necessary’ and Proportional

Theodore Christakis (University Grenoble-Alpes) and Alexandre Lodie (MIAI – AI Regulation Chair) have posted “The French Supreme Administrative Court Finds the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations ‘Strictly Necessary’ and Proportional” (European Review of Digital Administration & Law (ERDAL), Forthcoming) on SSRN. Here is the abstract:

In this case, the French NGO “La Quadrature du Net” (LQDN) asked the French Supreme Administrative Court (“Conseil d’Etat”) to invalidate article R 40-26 of the code of criminal procedure, which expressly provides for the use of facial recognition to aid in the identification of suspects during criminal investigations. LQDN argued that the use of this technology was not “absolutely necessary,” as required by the French version of Article 10 of the Law Enforcement Directive (LED).

The Court dismissed this claim, holding that the use of facial recognition in this way is ‘absolutely necessary’ given the amount of data available to the police, and that it is proportionate to the aim pursued.

This decision feeds into the debate about how to interpret the “strict necessity” requirement (“absolute necessity” in the French version of the text) laid down by the LED concerning the use of facial recognition.

This decision is also part of a wider issue in Europe, where facial recognition for investigative purposes has been under the spotlight. Indeed, States are currently considering which facial recognition techniques should be prohibited and which uses should be authorised, assuming that adequate safeguards are put in place.

The view of the Conseil d’Etat, together with that of the Italian DPA cited in the article, suggests that States consider deploying facial recognition for ex-post individual identification purposes to be necessary and proportionate to the aim pursued, namely the repression of crime. The EDPB and the draft AI Act proposed by the European Commission also seem to allow such use of facial recognition technology for ex-post individual identification in criminal investigations, provided there is an appropriate national legal framework authorizing this use and providing all adequate safeguards.

Slobogin on Predictive Policing in the United States

Christopher Slobogin (Vanderbilt U Law) has posted “Predictive Policing in the United States” (forthcoming in The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed.)) on SSRN. Here is the abstract:

This chapter, published in The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed., Thomson Reuters, 2022), describes police use of algorithms to identify “hot spots” and “hot people,” and then discusses how this practice should be regulated. Predictive policing algorithms should have to demonstrate a “hit rate” that justifies both the intrusion needed to acquire the information the algorithm requires and the action (e.g., surveillance, stop, or arrest) that police seek to carry out based on the algorithm’s results. Further, for legality reasons, even a sufficient hit rate should not authorize action unless police have also observed risky conduct by the person the algorithm targets. Finally, the chapter discusses ways of dealing with the possible impact of racialized policing on the data fed into these algorithms.

Joh on Ethical AI in American Policing

Elizabeth E. Joh (UC Davis School of Law) has posted “Ethical AI in American Policing” (Notre Dame J. Emerging Tech. 2022) on SSRN. Here is the abstract:

We know there are problems in the use of artificial intelligence in policing, but we don’t quite know what to do about them. Many reports and white papers today offer principles for the responsible use of AI systems by government, civil society organizations, and the private sector. Yet largely missing from the current debate in the United States is a shared framework for thinking about the ethical and responsible use of AI that is specific to policing. There are many AI policy guidance documents now, but their value to the police is limited. Simply repeating broad principles about the responsible use of AI systems is less helpful than offering principles that 1) take into account the specific context of policing, and 2) consider the American experience of policing in particular. There is an emerging consensus about what ethical and responsible values should be part of AI systems. This essay considers what kinds of ethical considerations can guide the use of AI systems by American police.

Lin on How to Save Face & the Fourth Amendment: Developing an Algorithmic Accountability Industry for Facial Recognition Technology in Law Enforcement

Patrick K. Lin (Brooklyn Law School) has posted “How to Save Face & the Fourth Amendment: Developing an Algorithmic Accountability Industry for Facial Recognition Technology in Law Enforcement” (33 Alb. L.J. Sci. & Tech. __ (forthcoming 2023)) on SSRN. Here is the abstract:

For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make matters worse, the Fourth Amendment—intended to limit police power and enacted to protect against unreasonable searches—has struggled to rein in new surveillance technologies since its inception.

This Article examines the Supreme Court’s Fourth Amendment jurisprudence leading up to Carpenter v. United States and suggests that the Court is reinterpreting the amendment for the digital age. Still, the too-slow expansion of privacy protections raises challenging questions about racial bias, the legitimacy of police power, and ethical issues in artificial intelligence design. This Article proposes the development of an algorithmic auditing and accountability market that not only sets standards for AI development and limitations on governmental use of facial recognition but also encourages collaboration between public interest technologists and regulators. Beyond the necessary changes to the technological and legal landscape, the current system of policing must also be reevaluated if hard-won civil liberties are to endure.