Green on Algorithmic Imaginaries: The Political Limits of Legal and Computational Reasoning

Ben Green (University of Michigan; Harvard Berkman Klein Center) has posted “Algorithmic Imaginaries: The Political Limits of Legal and Computational Reasoning” on SSRN. Here is the abstract:

As debates about how to promote a more egalitarian society have become increasingly salient, one approach that has gained traction is to inform socially consequential policy decisions using algorithms. Algorithmic reasoning suffers from many of the same deficiencies as legal thought in the era of the “Twentieth-Century Synthesis,” which rendered questions of political economy, power, and structural inequality invisible or irrelevant. What might be the tenets of a radical algorithmic imaginary, and how might we bring about such a praxis?

Price on Distributed Governance of Medical AI

W. Nicholson Price II (University of Michigan Law School) has posted “Distributed Governance of Medical AI” (25 SMU Sci. & Tech. L. Rev. (Forthcoming 2022)) on SSRN. Here is the abstract:

Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work, and it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference for patient care. To make the situation even more complicated, AI is unlikely to go through the centralized review and validation process that other medical technologies undergo, like drugs and most medical devices. Even if it did go through those centralized processes, ensuring high-quality performance across a wide variety of settings, including poorly resourced settings, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, where quality evaluation and oversight take place in the settings of application—but with policy assistance in developing capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk.

Coglianese & Lai on Antitrust by Algorithm

Cary Coglianese (University of Pennsylvania Carey Law School) and Alicia Lai (University of Pennsylvania Law School; U.S. Courts of Appeals) have posted “Antitrust by Algorithm” (Stanford Computational Antitrust, Vol. 2, p. 1, 2022) on SSRN. Here is the abstract:

Technological innovation is changing private markets around the world. New advances in digital technology have created new opportunities for subtle and evasive forms of anticompetitive behavior by private firms. But some of these same technological advances could also help antitrust regulators improve their performance in detecting and responding to unlawful private conduct. We foresee that the growing digital complexity of the marketplace will necessitate that antitrust authorities increasingly rely on machine-learning algorithms to oversee market behavior. In making this transition, authorities will need to meet several key institutional challenges—building organizational capacity, avoiding legal pitfalls, and establishing public trust—to ensure successful implementation of antitrust by algorithm.