Van Loo on Stress Testing Governance

Rory Van Loo (Boston University – School of Law; Yale ISP) has posted “Stress Testing Governance” (Vanderbilt Law Review, Vol. 75, p. 553, 2022) on SSRN. Here is the abstract:

In their efforts to guard against the world’s greatest threats, administrative agencies and businesses have in recent years increasingly used stress tests. Stress tests simulate doomsday scenarios to ensure that the organization is prepared to respond. For example, agencies role-played a deadly pandemic spreading from China to the United States the year before COVID-19, acted out responses to a hypothetical hurricane striking New Orleans months before Hurricane Katrina devastated the city, and required banks to model their ability to withstand a recession prior to the economic downturn of 2020. But too often these exercises have failed to significantly improve readiness for the subsequent crises. This Article shows that stress tests are used more widely than is commonly assumed, reaching well beyond financial regulation. It then argues that administrative stress tests should be seen as potentially powerful tools for administrative governance, but ones that suffer from significant shortcomings as currently deployed. Most notably, stress tests lack adequate transparency, oversight, and imagination. Also, they are too often voluntary for businesses and agencies whose performance failures could have great societal ramifications. By depriving stakeholders of crucial information about organizational readiness, these shortcomings weaken the nation’s ability to prevent and prepare for disasters. Preparing for disasters will only become more important as technologies transform everything from stock trading to elections and climate change creates more volatile weather. With improved design and wider deployment, stress tests have the potential to become a central tool for public and private accountability in an era of escalating societal risks.

Coglianese & Lai on Assessing Automated Administration

Cary Coglianese (University of Pennsylvania Carey Law School) and Alicia Lai (same) have posted “Assessing Automated Administration” (in Oxford Handbook on AI Governance (Justin Bullock et al. eds., forthcoming)) on SSRN. Here is the abstract:

To fulfill their responsibilities, governments rely on administrators and employees who, simply because they are human, are prone to individual and group decision-making errors. These errors have at times produced both major tragedies and minor inefficiencies. One potential strategy for overcoming cognitive limitations and group fallibilities is to invest in artificial intelligence (AI) tools that allow for the automation of governmental tasks, thereby reducing reliance on human decision-making. Yet as much as AI tools show promise for improving public administration, automation itself can fail or can generate controversy. Public administrators face the question of when exactly they should use automation. This paper considers the justifications for governmental reliance on AI along with the legal concerns raised by such reliance. Comparing AI-driven automation with a status quo that relies on human decision-making, the paper provides public administrators with guidance for making decisions about AI use. After explaining why prevailing legal doctrines present no intrinsic obstacle to governmental use of AI, the paper presents considerations for administrators to use in choosing when and how to automate existing processes. It recommends that administrators ask whether their contemplated uses meet the preconditions for the deployment of AI tools and whether these tools are in fact likely to outperform the status quo. In moving forward, administrators should also consider the possibility that a contemplated AI use will generate public or legal controversy, and then plan accordingly. The promise and legality of automated administration ultimately depend on making responsible decisions about when and how to deploy this technology.

Coglianese & Lai on Algorithm vs. Algorithm

Cary Coglianese (University of Pennsylvania Carey Law School) and Alicia Lai (University of Pennsylvania Law School; U.S. Courts of Appeals) have posted “Algorithm vs. Algorithm” (Duke Law Journal, Vol. 72, p. 1281, 2022) on SSRN. Here is the abstract:

Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent. On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free-riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question about what will work better: human algorithms or digital ones.

Ranchordas on Empathy in the Digital Administrative State

Sofia Ranchordas (University of Groningen, Faculty of Law; LUISS) has posted “Empathy in the Digital Administrative State” (Duke Law Journal, Forthcoming) on SSRN. Here is the abstract:

Humans make mistakes. Humans make mistakes especially while filling out tax returns, benefit applications, and other government forms, which are often tainted with complex language, requirements, and short deadlines. However, the unique human feature of forgiving these mistakes is disappearing with the digitalization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens’ needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (for example, disabled individuals, seniors, and underrepresented minorities). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of not being able to exercise their rights because they cannot engage with digital bureaucracy.

This Article argues that empathy, which in this context is the ability to relate to others and understand a situation from multiple perspectives, is a key value of administrative law deserving of legal protection in the digital administrative state. Empathy can contribute to the advancement of procedural due process, the promotion of equal treatment, and the legitimacy of automation. The concept of administrative empathy does not aim to create arrays of exceptions, nor imbue law with emotions and individualized justice. Instead, this concept suggests avenues for humanizing digital government and automated decision-making through a more complete understanding of citizens’ needs. This Article explores the role of empathy in the digital administrative state at two levels: First, it argues that empathy can be a partial response to some of the shortcomings of digital bureaucracy. At this level, administrative empathy acknowledges that citizens have different skills and needs, and this requires the redesign of pre-filled application forms, government platforms, algorithms, as well as assistance. Second, empathy should also operate ex post as a humanizing measure which can help ensure that administrative mistakes made in good faith can be forgiven under limited circumstances, and vulnerable individuals are given second chances to exercise their rights.

Drawing on comparative examples of empathic measures employed in the United States, the Netherlands, Estonia, and France, this Article’s contribution is twofold: first, it offers an interdisciplinary reflection on the role of empathy in administrative law and public administration for the digital age, and second, it operationalizes the concept of administrative empathy. These goals combine to advance the position of vulnerable citizens in the administrative state.

Recommended.

Marks on Automating FDA Regulation

Mason Marks (Harvard Law School; Yale Law School; University of New Hampshire Franklin Pierce School of Law; Leiden Law School, Center for Law and Digital Technologies) has posted “Automating FDA Regulation” (Duke Law Journal, Forthcoming) on SSRN. Here is the abstract:

In the twentieth century, the Food and Drug Administration (“FDA”) rose to prominence as a respected scientific agency. By the middle of the century, it transformed the U.S. medical marketplace from an unregulated haven for dangerous products and false claims to a respected exemplar of public health. More recently, the FDA’s objectivity has increasingly been questioned. Critics argue the agency has become overly political and too accommodating to industry while lowering its standards for safety and efficacy. The FDA’s accelerated pathways for product testing and approval are partly to blame. They require lower quality evidence, such as surrogate endpoints, and shift the FDA’s focus from premarket clinical trials toward postmarket surveillance, requiring less evidence up front while promising enhanced scrutiny on the back end. To further streamline product testing and approval, the FDA is adopting algorithmic predictions, from computer models and simulations enhanced by artificial intelligence (“AI”), as surrogates for direct evidence of safety and efficacy.

This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product that is regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce the exposure of volunteers to risks and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Bias in computer models and simulations may prioritize efficiency and speed over other values such as maximizing safety, equity, and public health. By analyzing FDA guidance documents, and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation. Specifically, the agency should incorporate principles of AI ethics into simulation guidelines. Until better tools for evaluating models are available, and robust standards are implemented to ensure their safe and equitable implementation, computer models should be limited to academic research, and FDA decisions should rely on them only when there are no suitable alternatives.

Glaze et al. on AI for Adjudication in the Social Security Administration

Kurt Glaze (Social Security Administration), Daniel E. Ho (Stanford Law School), Gerald K. Ray (Social Security Administration), and Christine Tsang (Stanford Law School) have posted “Artificial Intelligence for Adjudication: The Social Security Administration and AI Governance” (Oxford Handbook on AI Governance (Oxford University Press, forthcoming)) on SSRN. Here is the abstract:

Despite widespread skepticism of data analytics and artificial intelligence (AI) in adjudication, the Social Security Administration (SSA) pioneered pathbreaking AI tools that became embedded in multiple levels of its adjudicatory process. How did this happen? What lessons can we draw from the SSA experience for AI in government?

We first discuss how early strategic investments by the SSA in data infrastructure, policy, and personnel laid the groundwork for AI. Second, we document how SSA overcame a wide range of organizational barriers to develop some of the most advanced use cases in adjudication. Third, we spell out important lessons for AI innovation and governance in the public sector. We highlight the importance of leadership to overcome organizational barriers, “blended expertise” spanning technical and domain knowledge, operational data, early piloting, and continuous evaluation. AI should not be conceived of as a one-off IT product, but rather as part of continuous improvement. AI governance is quality assurance.

Coglianese on Regulating New Tech: Problems, Pathways, and People

Cary Coglianese (University of Pennsylvania Carey Law School) has posted “Regulating New Tech: Problems, Pathways, and People” (TechREG Chronicle, Issue 1) on SSRN. Here is the abstract:

New technologies bring with them many promises, but also a series of new problems. Even though these problems are new, they are not unlike the types of problems that regulators have long addressed in other contexts. The lessons from regulation in the past can thus guide regulatory efforts today. Regulators must focus on understanding the problems they seek to address and the causal pathways that lead to these problems. Then they must undertake efforts to shape the behavior of those in industry so that private sector managers focus on their technologies’ problems and take actions to interrupt the causal pathways. This means that regulatory organizations need to strengthen their own technological capacities; however, they need most of all to build their human capital. Successful regulation of technological innovation rests with top quality people who possess the background and skills needed to understand new technologies and their problems.

Sharkey on AI for Retrospective Review

Catherine M. Sharkey (NYU School of Law) has posted “AI for Retrospective Review” (8 Belmont Law Review 374 (2021)) on SSRN. Here is the abstract:

This article explores the significant administrative law issues that agencies will face as they devise and implement AI-enhanced strategies to identify rules that should be subject to retrospective review. Against the backdrop of a detailed examination of HHS’s “AI for Deregulation” pilot and the very first use of AI-driven technologies in a published federal rule, the article proposes enhanced public participation and notice-and-comment processes as necessary features of AI-driven retrospective review. It challenges conventional wisdom that divides uses of AI technologies into those that “support” agency action—and therefore do not implicate the APA’s directives—and those that “determine” agency actions and thus should be subject to the full panoply of APA demands. In so doing, it takes aim at the talismanic significance of “human in the loop” that shields AI uses from disclosure and review by casting them in a merely supportive role.

Levy, Chasalow & Riley on Algorithms and Decision-Making in the Public Sector

Karen Levy (Cornell University), Kyla Chasalow (University of Oxford), and Sarah Riley (Cornell University) have posted “Algorithms and Decision-Making in the Public Sector” (Annual Review of Law and Social Science, Vol. 17 (2021)) on SSRN. Here is the abstract:

This article surveys the use of algorithmic systems to support decision-making in the public sector. Governments adopt, procure, and use algorithmic systems to support their functions within several contexts—including criminal justice, education, and benefits provision—with important consequences for accountability, privacy, social inequity, and public participation in decision-making. We explore the social implications of municipal algorithmic systems across a variety of stages, including problem formulation, technology acquisition, deployment, and evaluation. We highlight several open questions that require further empirical research.