Tucker on Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder

Emily Tucker (Center on Privacy & Technology at Georgetown Law) has posted “Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder” (New York University Review of Law & Social Change, Vol. 46, No. 1, 2022) on SSRN. Here is the abstract:

In the many debates about whether and how algorithmic technologies should be used in law enforcement, all sides seem to share one assumption: that, in the struggle for justice and equity in our systems of governance, the subjectivity of human judgment is something to be overcome. While there is significant disagreement about the extent to which, for example, a machine-generated risk assessment might ever be unpolluted by the problematic biases of its human creators and users, no one in the scholarly literature has so far suggested that if such a thing were achievable, it would be undesirable.

This essay argues that it only becomes possible for policing to be something other than mere brutality when the activities of policing are themselves a way of deliberating about what policing is and should be, and that algorithms are definitionally opposed to such deliberation. An algorithmic process, whether carried out by a human brain or by a computer, can only operate at all if the terms that govern its operations have fixed definitions. Fixed definitions may be useful or necessary for human endeavors—like getting bread to rise or designing a sturdy foundation for a building—which can be reduced to techniques of measurement and calculation. But the fixed definitions that underlie policing algorithms (what counts as transgression, which transgressions warrant state intervention, etc.) relate to an ancient, fundamental, and enduring political question, one that cannot be expressed by equation or recipe: the question of justice. The question of justice is not one to which we can ever give a final answer, but one that must be the subject of ongoing ethical deliberation within human communities.

Recommended.

Grennan on FinTech Regulation in the United States: Past, Present, and Future

Jillian Grennan (Duke University – Fuqua School of Business) has posted “FinTech Regulation in the United States: Past, Present, and Future” on SSRN. Here is the abstract:

This study reviews the regulatory issues that developers and users of emerging financial technologies face as use cases expand. Decentralized finance (DeFi) and decentralized autonomous organizations (DAOs), which build upon advances in AI and blockchain, reduce the cost of coordinating complex financial services. Yet the efficiency gains intertwine with potential legal risks associated with liability, financial crime, dispute resolution, jurisdiction, and taxes. Regulatory solutions may include adapted definitions and safe harbors, regulatory sandboxes, self-regulatory organizations, and/or policing misleading characterizations (e.g., regarding the extent of decentralization or agreed-to data uses). Because it will take time for regulators to implement effective policies, stakeholders still have an opportunity to influence policy.

Jones on Civil Strategization of AI in Germany

Maurice Jones (Concordia University; Humboldt Institute for Internet and Society) has posted “Towards Civil Strategization of AI in Germany” on SSRN. Here is the abstract:

The involvement of civil society has been identified by a variety of state and non-state actors as key to ensuring ethical and equitable approaches to the governance of AI. Civil society carries the potential to hold organisations and institutions accountable, to advocate for marginalised voices to be heard, to spearhead ethically sound applications of AI, and to mediate between a variety of different perspectives. Despite proclaimed ambitions and visible potential, civil society actors face great challenges in actively engaging in the governance of AI. Based upon a survey of the involvement of civil society actors in the making of the German National Artificial Intelligence Strategy, this discussion paper identifies and contextualises key challenges that hinder civil society’s fruitful participation in the governance of AI in Germany. These hurdles include existing structural challenges commonly faced by civil society actors, such as a notorious lack of financial and human resources, as well as broader questions of governance, such as interministerial competition and a lack of foresight in the design of participatory processes. Additional challenges related to technology governance, such as a lack of expertise not only in civil society but also among ministries and industry, are amplified within the rapidly evolving field of AI. Leveraging the potential of civil society’s involvement requires a reevaluation of the relationship between civil society, state, and economic actors.

Lee on Investor Protection on Crowdfunding Platforms

Joseph Lee (School of Law, University of Manchester) has posted “Investor Protection on Crowdfunding Platforms” (The EU Crowdfunding Regulation, OUP) on SSRN. Here is the abstract:

This paper discusses the protection of investors on crowdfunding platforms under the Crowdfunding Regulation. Although there are many provisions in the regulation that protect investors, this paper concentrates specifically on those included under the heading of Chapter IV ‘Investor protection’ of the Crowdfunding Regulation.

This paper focuses on how investor protection can contribute to the objectives of crowdfunding and, in particular, how the provisions of the Crowdfunding Regulation serve this purpose. To this end, Section 2 discusses the investor-focused objectives of crowdfunding, and the role that technology can play in realising these objectives. Section 3 considers the meaning of investor protection within the scope of the Crowdfunding Regulation, and identifies areas where the current regime might be extended in the future. Section 4 discusses the categorisation of investors and its relevance for investor protection. Major provisions pertinent to investor protection are subsequently discussed in Sections 5 to 9, including the information to be provided to clients, default rate disclosure, the entry knowledge test and the simulation of ability to bear loss, the pre-contractual reflection period, and the key investment information sheet. These Sections also contain reflections on the different topics discussed in order to place them in a broader context. Section 10 concludes.

Henderson Reviewing When Machines Can Be Judge, Jury, and Executioner

Stephen E. Henderson (University of Oklahoma – College of Law) has posted a review of Katherine Forrest’s “When Machines Can Be Judge, Jury, and Executioner” (Book Review: Criminal Law and Criminal Justice Books 2022) on SSRN. Here is the abstract:

There is much in Katherine Forrest’s claim—and thus in her new book—that is accurate and pressing. Forrest adds her voice to the many who have critiqued contemporary algorithmic criminal justice, and her seven years as a federal judge and decades of other experience make her perspective an important one. Many of her claims find support in kindred writings, such as her call for greater transparency, especially when private companies try to hide algorithmic details for reasons of greater profit. A for-profit motive is a fine thing in a private company, but it is anathema to our ideals of public trial. Algorithms are playing an increasingly dominant role in criminal justice, including in our systems of pretrial detention and sentencing. And as we criminal justice scholars routinely argue, there is much that is rather deeply wrong in that criminal justice.

But the relation between those two things—algorithms on the one hand and our systems of criminal justice on the other—is complicated, and it most certainly does not run in any single direction. Just as often as numbers and formulae are driving the show (a right concern of Forrest’s), a terrible dearth of both leaves judges meting out sentences that, in the words of Ohio Supreme Court Justice Michael Donnelly, “have more to do with the proclivities of the judge you’re assigned to, rather than the rule of law.” Moreover, most of the algorithms we currently use—and even most of those we are contemplating using—are ‘intelligent’ in only the crudest sense. They constitute ‘artificial intelligence’ only if we deem every algorithm run by, or developed with the assistance of, a computer to be AI, and that is hardly the kind of careful, precise definition that criminal justice deserves. A calculator is a machine that we most certainly want judges using, a truly intelligent machine is something we humans have so far entirely failed to create, and the spectrum between is filled with innumerable variations, each of which must be carefully, scientifically evaluated in the particular context of its use.

This brief review situates Forrest’s claims in these two regards. First, we must always compare apples to apples. We ought not compare a particular system of algorithmic justice to some elysian ideal, when the practical question is whether to replace and/or supplement a currently biased and logically flawed system with that algorithmic counterpart. After all, the most potently opaque form of ‘intelligence’ we know is that we term human—we humans go so far as to engage in routine, affirmative deception—and that truth calls for a healthy dose of skepticism and humility when it comes to claims of human superiority. Comparisons must be, then, apples to apples. Second, when we speak of ‘artificial intelligence,’ we ought to speak carefully, in a scientifically precise manner. We will get nowhere good if we diverge into autonomous weapons when trying to decide, say, whether we ought to run certain historic facts about an arrestee through a formula as an aid to deciding whether she is likely to appear as required for trial. The same is true if we fail to understand the very science upon which any particular algorithm runs. We must use science for science.