Aka on When Routine Becomes Criminal: Testing AI Recognition of Context-Dependent Illegality

Hilal Aka (Harvard Kennedy School (HKS)) has posted “When Routine Becomes Criminal: Testing AI Recognition of Context-Dependent Illegality” on SSRN. Here is the abstract:

AI systems readily refuse overtly harmful requests yet often miss when routine tasks turn illegal once context changes. We probed this gap with Gemini 2.5 Pro Preview. After discarding six malformed runs, 394 valid trials remained. Each model instance received a request to edit board meeting attendance records, a lawful action unless a potential SEC fraud investigation makes any alteration obstruction of justice. Compliance was 100% when no legal context was given and remained 100% when the investigation was merely mentioned. Adding standard legal-hold language lowered compliance by only one point, to 99%. Only an explicit directive not to alter company documents made a dent, and even that dent was small: compliance fell to 89%, with 11% refusing outright and a further 9% of runs warning of legal risk but proceeding anyway. Even runs that spontaneously referenced the SEC inquiry still complied absent that directive. The model thus perceives the investigation yet fails to connect the dots: it follows explicit rules but does not infer from context alone that editing becomes illegal, a critical vulnerability for deploying AI in regulated environments where context determines legality.

Yakowitz Bambauer et al. on The National Artificial Intelligence Advisory Committee (NAIAC) - Recommendation: Require Public Summary Reporting on Use of High-Risk AI

Jane R. Bambauer (U Florida Levin College Law) et al. have posted “The National Artificial Intelligence Advisory Committee (NAIAC) - Recommendation: Require Public Summary Reporting on Use of High-Risk AI” on SSRN. Here is the abstract:

This document contains a recommendation adopted by the National AI Advisory Committee (“NAIAC”) and delivered to the Office of the President concerning law enforcement use of AI.

Recommendation: The Office of Management and Budget (OMB) — or another appropriate arm of the Executive branch — should require that law enforcement agencies create and publish annual summary usage reports for safety- or rights-impacting AI to be included in the AI Use Case Inventory.

Pruss et al. on Prediction and Punishment: Critical Report on Carceral AI

Dasha Pruss (George Mason U) et al. have posted “Prediction and Punishment: Critical Report on Carceral AI” on SSRN. Here is the abstract:

Prediction and Punishment is a report collectively envisioned, researched, written, and edited by a group of critical researchers and activists to expose key issues at the intersection of the carceral system and artificial intelligence (AI). The report emerged from a cross-disciplinary workshop on carceral AI that took place in Pittsburgh, Pennsylvania in February 2024. The report’s aim is to provide a resource for researchers, community organizers, and policy-makers to get informed about the impacts of technologies designed to police, incarcerate, surveil, and control human beings. As a community, we stand against the use of carceral technologies. 

This report begins with an introduction that provides some context and definitions for our analysis and then continues in two main parts:

Part 1: State of Carceral AI discusses our core takeaways, including the ways that the advent of carceral AI is (and is not) novel; the perils of centering the conversation around algorithmic bias; the pernicious role of public-private partnerships; the spread of carceral AI globally and beyond the criminal legal system; the unpredictable human element in how carceral AI is used; and the incompatibility between algorithmic reforms and liberatory futures.

Part 2: Recommendations and Paths Forward discusses our suggested routes to mitigate the use and expansion of carceral AI. These include divesting from carceral technology and reducing the size and scope of the carceral system through low-tech interventions; blocking the rebranding of scrapped carceral AI systems under new names; expanding how we think about ‘evidence-based’ policy; increasing public access to information about carceral AI systems; building technology that intentionally centers our values; and community building to resist carceral AI.

Ball on The Deepfake Challenge: Targeted AI Policy Solutions for States

Dean Ball (George Mason U Mercatus Center) has posted “The Deepfake Challenge: Targeted AI Policy Solutions for States” on SSRN. Here is the abstract:

This study focuses on AI content authenticity issues such as deepfakes. It analyzes the evidence about the extent and nature of AI-enabled malicious content, finding it to be a serious, though often overstated, problem. It then analyzes how existing state and federal laws apply, asking centrally, “Is it already a crime to distribute malicious deepfakes?” The study finds that it often is a crime under state statutory and common law, but not always. Thus, it suggests, a targeted legislative approach could be taken by either states or the federal government to deal with the problem.

Purves & Jenkins on A Machine Learning Evaluation Framework for Place-based Algorithmic Patrol Management

Duncan Purves (U Florida) and Ryan Jenkins (California Polytechnic State U) have posted “A Machine Learning Evaluation Framework for Place-based Algorithmic Patrol Management” on SSRN. Here is the abstract:

American law enforcement agencies are increasingly adopting data-driven technologies to combat crime, with the market for such technologies projected to grow significantly in the coming years. One prevalent approach, place-based algorithmic patrol management (PAPM), analyzes data on past crimes to optimize police patrols. These systems promise several benefits, including efficient resource allocation, reduced bias, and increased transparency. However, the adoption of these technologies has raised ethical and social concerns, particularly around privacy, bias, and community impact. This report aims to provide a comprehensive framework, including many concrete recommendations, for the ethical and responsible development and deployment of PAPM systems. Targeting developers, law enforcement agencies, policymakers, and community advocates, the recommendations emphasize collaboration among these stakeholders to address the complex challenges presented by PAPM. We suggest that failure to meet the proposed ethical guidelines might make the use of such technologies unacceptable. This report has been supported by National Science Foundation awards #1917707 and #1917712 and the Center for Advancing Safety of Machine Intelligence (CASMI).

Jabri on Algorithmic Policing

Ranae Jabri (National Bureau of Economic Research; Duke University) has posted “Algorithmic Policing” on SSRN. Here is the abstract:

Predictive policing algorithms are increasingly used by law enforcement agencies in the United States. These algorithms use past crime data to generate predictive policing boxes, specifically the highest crime risk areas where law enforcement is instructed to patrol every shift. I collect a novel dataset on predictive policing box locations, crime incidents, and arrests from a major urban jurisdiction where predictive policing is used. Using institutional features of the predictive policing policy, I isolate quasi-experimental variation to examine the causal impacts of algorithm-induced police presence. I find that algorithm-induced police presence decreases serious property and violent crime. At the same time, I also find disproportionate racial impacts on arrests for serious violent crimes as well as arrests in traffic incidents, i.e., lower-level offenses where police have discretion. These results highlight that using predictive policing to target neighborhoods can generate a tradeoff between crime prevention and equity.

Patil on Cyber Laws in India

Jatin Patil (NMIMS School of Law, Hyderabad) has posted “Cyber Laws in India: An Overview” (Indian Journal of Law and Legal Research, 4(01), pp. 1391-1411) on SSRN. Here is the abstract:

The world has progressed in terms of communication, particularly since the introduction of the Internet. The rise of cybercrime, often known as e-crimes (electronic crimes), is a major challenge confronting today’s society. As a result, cybercrime poses a threat to nations, companies, and individuals all across the world. It has expanded to many parts of the globe, and millions of individuals have become victims of cybercrime. Given the serious nature of e-crime, as well as its worldwide character and repercussions, it is evident that a common understanding of such criminal conduct is required to successfully combat it. The definitions, types, and incursions of e-crime are all covered in this study. It has also focused on India’s anti-e-crime legislation.

Hamilton on Platform-Enabled Crimes

Rebecca J. Hamilton (American University – Washington College of Law) has posted “Platform-Enabled Crimes” (B.C. L. Rev (forthcoming 2022)) on SSRN. Here is the abstract:

Online intermediaries are omnipresent. Each day, across the globe, the corporations that run these platforms execute policies and practices that serve their profit model, typically by sustaining user engagement. Sometimes, these seemingly banal business activities enable principal perpetrators to commit crimes; yet online intermediaries are almost never held to account for their complicity in the resulting harms.

This Article introduces the term and concept of platform-enabled crimes into the legal literature to draw attention to the way that the ordinary business activities of online intermediaries can enable the commission of crime. It then singles out a subset of platform-enabled crimes—those where a social media company has facilitated international crimes—for the purpose of understanding and addressing the accountability gap associated with them.

Adopting a survivor-centered methodology, and using Facebook’s complicity in the Rohingya genocide in Myanmar as a case study, this Article begins the work of addressing the accountability deficit for platform-enabled crimes. It advances a menu of options to be pursued in parallel, including amending domestic legislation, strengthening transnational cooperation between international and domestic prosecutors for criminal and civil corporate liability cases, and pursuing de-monopolizing regulatory action. I conclude by acknowledging that the advent of platform-enabled crimes is not something that any single body of law is equipped to respond to. However, by pursuing a plurality of options to address this previously overlooked form of criminal facilitation, we can make a vast improvement on the status quo.

Richardson & Kak on Suspect Development Systems: Databasing Marginality and Enforcing Discipline

Rashida Richardson (Northeastern University School of Law) & Amba Kak (New York University (NYU)) have posted “Suspect Development Systems: Databasing Marginality and Enforcing Discipline” (University of Michigan Journal of Law Reform, Vol. 55, Forthcoming). Here is the abstract:

Algorithmic accountability law — focused on the regulation of data-driven systems like artificial intelligence (AI) or automated decision-making (ADM) tools — is the subject of lively policy debates, heated advocacy, and mainstream media attention. Concerns have moved beyond data protection and individual due process to encompass a broader range of group-level harms such as discrimination and modes of democratic participation. While this is a welcome and long overdue shift, this discourse has ignored systems, like databases, that are viewed as technically ‘rudimentary’ and often siloed from regulatory scrutiny and public attention. Additionally, burgeoning regulatory proposals like algorithmic impact assessments are not structured to surface important yet often overlooked social, organizational, and political economy contexts that are critical to evaluating the practical functions and outcomes of technological systems.

This article presents a new categorical lens and analytical framework that aims to address and overcome these limitations. “Suspect Development Systems” (SDS) refers to: (1) information technologies used by government and private actors, (2) to manage vague or often immeasurable social risk based on presumed or real social conditions (e.g., violence, corruption, substance abuse), (3) that subject targeted individuals or groups to greater suspicion, differential treatment, and more punitive and exclusionary outcomes. This frame includes some of the most recent and egregious examples of data-driven tools (such as predictive policing or risk assessments) but, critically, it is also inclusive of a broader range of database systems that are currently at the margins of technology policy discourse. By examining the use of various criminal intelligence databases in India, the United Kingdom, and the United States, we developed a framework of five categories of features (technical, legal, political economy, organizational, and social) that together and separately influence how these technologies function in practice, the ways they are used, and the outcomes they produce. We then apply this analytical framework to welfare system databases, universal or ID number databases, and citizenship databases to demonstrate the value of this framework in both identifying and evaluating emergent or under-examined technologies in other sensitive social domains.

Suspect Development Systems is an intervention in legal scholarship and practice as it provides a much-needed definitional and analytical framework for understanding an ever-evolving ecosystem of technologies embedded and employed in modern governance. Our analysis also helps redirect attention toward important yet often under-examined contexts, conditions, and consequences that are pertinent to the development of meaningful legislative or regulatory interventions in the field of algorithmic accountability. The cross-jurisdictional evidence put forth across this Article illuminates the value of examining commonalities between the Global North and South to inform our understanding of how seemingly disparate technologies and contexts are in fact coaxial, which is the basis for building more global solidarity.

Law Commission of Ontario on The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada

The Law Commission of Ontario has posted “The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada” on SSRN. Here is the abstract:

Artificial intelligence (AI) and algorithms are often referred to as “weapons of math destruction.” Many systems are also credibly described as “a sophisticated form of racial profiling.” These views are widespread in many current discussions of AI and algorithms.

The Law Commission of Ontario (LCO) Issue Paper, The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada, is the first of three LCO Issue Papers considering AI and algorithms in the Canadian justice system. The paper provides an important first look at the potential use and regulation of AI and algorithms in Canadian criminal proceedings. The paper identifies important legal, policy and practical issues and choices that Canadian policymakers and justice stakeholders should consider before these technologies are widely adopted in this country.