Jane Campbell Moriarty (Duquesne University – School of Law) & Erin McCluan (same) have posted “Foreword to the Symposium, The Death of Eyewitness Testimony and the Rise of Machine Evidence” on SSRN. Here is the abstract:
Artificial intelligence, machine evidence, and complex technical evidence are replacing human-skill-based evidence in the courtroom. This may be an improvement on mistaken eyewitness identification and unreliable forensic science evidence, which are both causes of wrongful convictions. Thus, the move toward more machine-based evidence, such as DNA, biometric identification, cell-site location information, neuroimaging, and other specialties may provide better evidence. But with such evidence comes different problems, including concerns about proper cross-examination and confrontation, reliability, inscrutability, human bias, constitutional concerns, and both philosophic and ethical questions.
Julia Ann Simon-Kerr (University of Connecticut – School of Law) has posted “Credibility in an Age of Algorithms” (Rutgers Law Review, Forthcoming) on SSRN. Here is the abstract:
Evidence law has a “credibility” problem. Artificial intelligence creators will soon be marketing tools for assessing credibility in the courtroom. Yet, although credibility is a vital concept in the U.S. legal system, there is deep ambiguity within the law about its function. American jurisprudence assumes that impeachment evidence tells us about a witness’s propensity for truthfulness. Yet this same jurisprudence focuses fact-finders on external qualities that are probative of a witness’s worthiness of belief but not of the risk that they will lie. Without a clear understanding of what credibility in the legal system is or should be, the terms of engagement will be set by the creators of algorithms in accordance with their interests.
This article focuses on the two main paradigms within current credibility jurisprudence as a guide to thinking about how algorithms might be brought to bear on legal credibility. It does this by analogy to two existing algorithmic products. One is the U.S. credit scoring system. The other is China’s experiment with a “social credit” scoring system. These examples reflect the actual and purported function of credibility in the law in ways that are revealing both for current practice and as we contemplate the credibility of the future.
Haley Amster (Stanford Law School) and Brett Diehl (Stanford Law School) have posted “Against Geofences” (Stanford Law Review, Forthcoming) on SSRN. Here is the abstract:
Since roughly 2016, law enforcement has increasingly relied on a new tool when investigating a crime with no suspects: geofence warrants. These warrants operate differently from a typical digital location history search warrant, through which law enforcement requires a third-party company to provide the location history of a particular user’s device. […] This Note aims to begin filling that analytical void by putting forward the first thorough scholarly analysis of the constitutionality of a geofence warrant.
This Note proceeds in five parts. Part I is a technology primer, explaining the steps involved in geofence warrants: the initial data dump, the expansion, and the unmasking. Part II catalogs the burgeoning geofence litigation, analyzing the first few federal magistrate opinions on the issue before briefly profiling other pending litigation. Part III looks more closely at the initial data dump, identifying the difficulty law enforcement has in meeting probable cause and particularity requirements due to the inherent breadth of the search. This Part explores potential constitutional limits on geofence warrants through analogies to the search of many people located at the scene of a crime, digital checkpoints, and area warrants. In this Part, the Note answers the question of whether probable cause must be shown for each device included in a digital search, exploring relevant scholarship regarding cell tower dumps. This Part then explores the difficulty in achieving constitutional tailoring, analogizing to digital searches of multi-occupancy buildings, and considers potential particularized search protocols that could indeed meet constitutional requirements. Part IV examines geofence warrants’ expansion and unmasking steps. It first argues that geofence warrants are unconstitutional general warrants because of the discretion given to law enforcement in the warrant execution. It then considers whether the additional steps reach beyond the scope of the warrant, or constitute multiple searches encompassed under one warrant.
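For readers unfamiliar with the mechanics, the “initial data dump” the Note describes — a provider returning the anonymized identifiers of every device seen inside a geographic boundary during a time window — can be loosely sketched as a filter over location records. This is only an illustrative sketch; the function name, data shapes, and values are hypothetical and do not reflect any provider’s actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ping:
    device_id: str  # anonymized identifier (unmasking is a later, separate step)
    lat: float
    lon: float
    ts: int         # Unix timestamp of the location report

def initial_data_dump(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Step 1 (hypothetical): return the anonymized IDs of every device with
    at least one ping inside the geofence during the time window."""
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.ts <= t_end
    }

# Hypothetical data: two devices inside the fence and window, one outside it.
pings = [
    Ping("anon-1", 40.7128, -74.0060, 1000),
    Ping("anon-2", 40.7130, -74.0055, 1050),
    Ping("anon-3", 41.0000, -75.0000, 1025),  # outside the geofence
]
print(sorted(initial_data_dump(pings, 40.70, 40.72, -74.01, -74.00, 900, 1100)))
# prints ['anon-1', 'anon-2']
```

The sketch also makes the Note’s breadth concern concrete: the filter sweeps in every device in the fence, not just a suspect’s, which is why the probable cause and particularity questions in Part III arise at this first step.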
Vikas Didwania (University of Chicago Law School and U.S. Attorney’s Office) has posted “Privacy Amid Prosecution” on SSRN. Here is the abstract:
Prosecutors increasingly marshal electronic evidence from social media companies like Facebook and Twitter to detect and prosecute crime. For example, today’s prosecutors use social media communications to establish a defendant’s illegal activities online or his relationship to co-conspirators. Under the federal Stored Communications Act, the government is able to obtain electronic evidence by serving social media companies with a search warrant. The same is not true for criminal defendants because, on its face, the Act bans litigants other than the government from subpoenaing social media companies to obtain content information. As electronic evidence becomes more important in criminal prosecutions, defendants have sought to challenge this ban. To date, at least six federal and state appellate courts have confronted the question of whether litigants other than the government can compel service providers to produce content information.
The stakes are high. On the one hand, a decision giving access to criminal defendants and other litigants would open up for discovery the most intimate online communications, photographs, videos, and other content belonging to billions of users worldwide. On the other hand, a decision blocking access means defendants may not be able to obtain the evidence they need as they fight for their liberty. The Supreme Court is likely to soon be asked to address this question.
Criminal defendants have been aided by a scholar who has put forward in the Harvard Law Review an innovative argument that federal privilege law requires allowing litigants to subpoena information from social media companies in a way that the text of the Stored Communications Act appears to preclude. This Article challenges this novel privilege argument. Cases dating back to the telegram era of the late nineteenth century and continuing to the modern day show that, contrary to this scholar’s argument, Congress does not have to use any specific language to block defendants’ access to content. Rather, courts have applied the plain text of the law, which, alongside the Act’s structure and purpose, shows that defendants are banned from obtaining content. Moreover, I argue, courts should not rely on this new approach because it will both create a doctrinal mess in a carefully structured statute and enmesh courts in difficult policy decisions about the privacy of billions of users worldwide. Instead, Congress is best positioned to reform the statute in a way that balances the serious privacy and liberty interests at stake. Given that legislative change can take time, this Article ends by explaining the tools already available to defendants for obtaining online content.
Kendra Albert (Harvard Law School), Maggie Delano (Swarthmore College Engineering Department), Jon Penney (Citizen Lab, University of Toronto; Harvard University – Berkman Klein Center for Internet & Society; Harvard Law School), Afsaneh Rigot (ARTICLE 19), and Ram Shankar Siva Kumar (Microsoft Corporation; Harvard University – Berkman Klein Center for Internet & Society) have posted “Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning” on SSRN. Here is the abstract:
This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as “real world.” Despite this framing, however, we found the physical or real-world testing conducted was minimal, provided few details about testing subjects and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.
Theodore Christakis (Institut Universitaire de France; Université Grenoble Alpes; CESICE) and Fabien Terpan (Sciences Po Grenoble; Université Grenoble Alpes; CESICE) have posted “EU-US Negotiations on Law Enforcement Access to Data: Divergences, Challenges and EU Law Procedures and Options” (International Data Privacy Law, OUP (2020)) on SSRN. Here is the abstract:
The EU and the US kicked off negotiations in September 2019 for the conclusion of a very important agreement on law enforcement access to data. This is the first article to present the context of these negotiations and the numerous challenges surrounding them.
There are strong divergences between the EU and the US about what the scope and the architecture of this agreement should be. The US government supports the conclusion of a “framework agreement” with the EU to be followed by bilateral agreements with EU Member States – in order to satisfy CLOUD Act requirements. The EU wishes to arrive at a self-standing, EU-wide comprehensive agreement and is opposed to solutions that might lead to fragmentation and unequal treatment between EU Member States.
This article presents a detailed EU Law perspective on all these issues, and refers to relevant precedents concerning the conclusion of law enforcement, data-related or other international agreements. It discusses the division of competence on e-evidence between the EU and its Members States; possible architecture for the agreement and options under EU Law; and the role of the respective European Institutions (Commission, Council, Parliament) in the negotiation and conclusion of such an agreement.
The article also studies, using existing case law, what the role of the CJEU could be in relation to such an EU-US e-evidence Agreement.
The article will be useful to anyone interested in transatlantic data flows as well as judicial cooperation matters and, beyond its specific scope, could be used as a real “guide” to EU Law procedures, options and precedents in relation to the conclusion of international data-related agreements.