Slobogin on Predictive Policing in the United States

Christopher Slobogin (Vanderbilt U Law) has posted “Predictive Policing in the United States” (forthcoming in The Algorithmic Transformation of the Criminal Justice System, Castro-Toledo ed.) on SSRN. Here is the abstract:

This chapter, published in the book The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed., Thomson Reuters, 2022), describes police use of algorithms to identify “hot spots” and “hot people,” and then discusses how this practice should be regulated. Predictive policing algorithms should have to demonstrate a “hit rate” that justifies both the intrusion required to acquire the information needed to implement the algorithm and the action (e.g., surveillance, stop, or arrest) that police seek to carry out based on the algorithm’s results. Further, for legality reasons, even a sufficient hit rate should not authorize action unless police have also observed risky conduct by the person the algorithm targets. Finally, the chapter discusses ways of dealing with the possible impact of racialized policing on the data fed into these algorithms.

Crootof on AI and the Actual IHL Accountability Gap

Rebecca Crootof (U Richmond Law; Yale ISP) has posted “AI and the Actual IHL Accountability Gap” (in The Ethics of Automated Warfare and AI, Centre for International Governance Innovation, 2022) on SSRN. Here is the abstract:

Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.

But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.

Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.

Kumar & Choudhury on Cognitive Moral Development in AI Robots

Shailendra Kumar (Sikkim University) and Sanghamitra Choudhury (University of Oxford) have posted “Cognitive Moral Development in AI Robots” on SSRN. Here is the abstract:

The widespread use of artificial intelligence (AI) is raising a number of ethical issues, including concerns about fairness, surveillance, transparency, neutrality, and human rights. This manuscript explores the possibility and means of cognitive moral development in AI bots, and in doing so it floats a new concept for the characterization and development of artificially intelligent and ethical robotic machines. It proposes a classification of the stages of ethical evolution in AI bots, drawing on Lawrence Kohlberg’s work on cognitive moral development in humans. The manuscript further suggests that by providing appropriate inputs to AI robots in accordance with the proposed concept, humans may assist in the development of an ideal robotic creature that is morally responsible.