Ranae Jabri (National Bureau of Economic Research; Duke University) has posted “Algorithmic Policing” on SSRN. Here is the abstract:
Predictive policing algorithms are increasingly used by law enforcement agencies in the United States. These algorithms use past crime data to generate predictive policing boxes, specifically the highest-crime-risk areas where law enforcement is instructed to patrol every shift. I collect a novel dataset on predictive policing box locations, crime incidents, and arrests from a major urban jurisdiction where predictive policing is used. Using institutional features of the predictive policing policy, I isolate quasi-experimental variation to examine the causal impacts of algorithm-induced police presence. I find that algorithm-induced police presence decreases serious property and violent crime. At the same time, I also find disproportionate racial impacts on arrests for serious violent crimes as well as arrests in traffic incidents, i.e., lower-level offenses where police have discretion. These results highlight that using predictive policing to target neighborhoods can generate a tradeoff between crime prevention and equity.
Jatin Patil (NMIMS School of Law, Hyderabad) has posted “Cyber Laws in India: An Overview” (Indian Journal of Law and Legal Research, 4(01), pp. 1391-1411) on SSRN. Here is the abstract:
The world has progressed in terms of communication, particularly since the introduction of the Internet. The rise of cybercrime, often known as e-crime (electronic crime), is a major challenge confronting today’s society. As a result, cybercrime poses a threat to nations, companies, and individuals all across the world. It has expanded to many parts of the globe, and millions of individuals have become victims of cybercrime. Given the serious nature of e-crime, as well as its worldwide character and repercussions, it is evident that a common understanding of such criminal conduct is required to successfully combat it. The definitions, types, and incursions of e-crime are all covered in this study. It also focuses on India’s anti-e-crime legislation.
Rebecca J. Hamilton (American University – Washington College of Law) has posted “Platform-Enabled Crimes” (B.C. L. Rev. (forthcoming 2022)) on SSRN. Here is the abstract:
Online intermediaries are omnipresent. Each day, across the globe, the corporations that run these platforms execute policies and practices that serve their profit model, typically by sustaining user engagement. Sometimes, these seemingly banal business activities enable principal perpetrators to commit crimes; yet online intermediaries are almost never held to account for their complicity in the resulting harms.
This Article introduces the term and concept of platform-enabled crimes into the legal literature to draw attention to the way that the ordinary business activities of online intermediaries can enable the commission of crime. It then singles out a subset of platform-enabled crimes—those where a social media company has facilitated international crimes—for the purpose of understanding and addressing the accountability gap associated with them.
Adopting a survivor-centered methodology, and using Facebook’s complicity in the Rohingya genocide in Myanmar as a case study, this Article begins the work of addressing the accountability deficit for platform-enabled crimes. It advances a menu of options to be pursued in parallel, including amending domestic legislation, strengthening transnational cooperation between international and domestic prosecutors for criminal and civil corporate liability cases, and pursuing de-monopolizing regulatory action. I conclude by acknowledging that the advent of platform-enabled crimes is not something that any single body of law is equipped to respond to. However, by pursuing a plurality of options to address this previously overlooked form of criminal facilitation, we can make a vast improvement on the status quo.
Rashida Richardson (Northeastern University School of Law) & Amba Kak (New York University (NYU)) have posted “Suspect Development Systems: Databasing Marginality and Enforcing Discipline” (University of Michigan Journal of Law Reform, Vol. 55, Forthcoming) on SSRN. Here is the abstract:
Algorithmic accountability law — focused on the regulation of data-driven systems like artificial intelligence (AI) or automated decision-making (ADM) tools — is the subject of lively policy debates, heated advocacy, and mainstream media attention. Concerns have moved beyond data protection and individual due process to encompass a broader range of group-level harms such as discrimination and modes of democratic participation. While this is a welcome and long overdue shift, this discourse has ignored systems, like databases, that are viewed as technically ‘rudimentary’ and often siloed from regulatory scrutiny and public attention. Additionally, burgeoning regulatory proposals like algorithmic impact assessments are not structured to surface important yet often overlooked social, organizational, and political economy contexts that are critical to evaluating the practical functions and outcomes of technological systems.
This article presents a new categorical lens and analytical framework that aims to address and overcome these limitations. “Suspect Development Systems” (SDS) refers to: (1) information technologies used by government and private actors, (2) to manage vague or often immeasurable social risk based on presumed or real social conditions (e.g., violence, corruption, substance abuse), (3) that subject targeted individuals or groups to greater suspicion, differential treatment, and more punitive and exclusionary outcomes. This frame includes some of the most recent and egregious examples of data-driven tools (such as predictive policing or risk assessments) but, critically, it is also inclusive of a broader range of database systems that are currently at the margins of technology policy discourse. By examining the use of various criminal intelligence databases in India, the United Kingdom, and the United States, we developed a framework of five categories of features (technical, legal, political economy, organizational, and social) that together and separately influence how these technologies function in practice, the ways they are used, and the outcomes they produce. We then apply this analytical framework to welfare system databases, universal or ID number databases, and citizenship databases to demonstrate the value of this framework in both identifying and evaluating emergent or under-examined technologies in other sensitive social domains.
Suspect Development Systems is an intervention in legal scholarship and practice as it provides a much-needed definitional and analytical framework for understanding an ever-evolving ecosystem of technologies embedded and employed in modern governance. Our analysis also helps redirect attention toward important yet often under-examined contexts, conditions, and consequences that are pertinent to the development of meaningful legislative or regulatory interventions in the field of algorithmic accountability. The cross-jurisdictional evidence put forth across this Article illuminates the value of examining commonalities between the Global North and South to inform our understanding of how seemingly disparate technologies and contexts are in fact coaxial, which is the basis for building more global solidarity.
The Law Commission of Ontario has posted “The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada” on SSRN. Here is the abstract:
Artificial intelligence (AI) and algorithms are often referred to as “weapons of math destruction.” Many systems are also credibly described as “a sophisticated form of racial profiling.” These views are widespread in many current discussions of AI and algorithms.
The Law Commission of Ontario (LCO) Issue Paper, The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada, is the first of three LCO Issue Papers considering AI and algorithms in the Canadian justice system. The paper provides an important first look at the potential use and regulation of AI and algorithms in Canadian criminal proceedings. The paper identifies important legal, policy and practical issues and choices that Canadian policymakers and justice stakeholders should consider before these technologies are widely adopted in this country.