Wu on AI Whistleblowers

Henry Wu (Yale Law School) has posted “AI Whistleblowers” on SSRN. Here is the abstract:

Advocates, technologists, and public officials have increasingly warned about the potential risks of the rapid deployment of artificial intelligence (AI). AI has been developed for use in biotech and life sciences, critical infrastructure, healthcare, social services, and lethal autonomous weapons. In these domains, researchers and policymakers have raised questions about transparency, bias, and accountability for potential harms. Much of the attention to AI governance, however, has centered on public regulatory regimes built around standards, guidance, and mandated audits. Less attention has been paid to private regimes of AI governance, including corporate self-regulation. And little has been said about the possibilities for public-private AI governance, such as the public regulation of private enforcement familiar from the financial services context. This essay broadens our understanding of AI governance by focusing on whistleblower protection.

Whistleblower protection is often discussed but rarely explained. Many technology companies have internal whistleblower policies, but activists and former whistleblowers have argued that these internal policies are not enough and have publicly called for regulation that protects whistleblowing. In the context of AI governance, researchers have long called for incentives and protections for whistleblowers. Despite these references to whistleblower protection regimes, little has been theorized about the scope and limits of AI whistleblowing. Can whistleblowing help address the myriad risks potentially posed by AI? How might existing whistleblower regimes inform the regulatory design around AI whistleblowers? What incentives, if any, should there be for AI whistleblowing? And how might we design a whistleblower regime in light of cybersecurity concerns about the potential leak of sensitive information?

This essay argues for enhanced whistleblower protections in the context of AI governance. Building upon recent scholarship identifying the challenges of whistleblowing in the technology sector, I discuss several problems unique to algorithmic whistleblowers. Algorithmic whistleblowers could range from scientists working on machine learning models in the biosciences to defense contractors implementing AI in weapons systems to employees at private AI labs. I canvass an array of AI-related risks and discuss recent governance proposals, explaining the need for whistleblowing as a governance mechanism. I consider how whistleblowing might work to address AI-associated risks in various sectors and sketch a novel regulatory scheme.