Ryan Budish (Harvard, Berkman Klein Center) has posted “AI’s Risky Business: Embracing Ambiguity in Managing the Risks of AI” (16 J. Bus. & Tech. L. 259 (2021)) on SSRN. Here is the abstract:
There are over 160 different sets of artificial intelligence (AI) governance principles from public and private organizations alike. These principles aspire to enhance AI's transformative potential and limit its negative consequences. Increasingly, these principles and strategies have invoked the language of "risk management" as a mechanism for articulating concrete guardrails around AI technologies. Unfortunately, what "risk management" means in practice is largely undefined and poorly understood. In fact, there are two very different approaches to measuring risk. One emphasizes quantification and certainty. The other eschews the false certainty of quantification and instead embraces the inherently qualitative (and correspondingly imprecise) measures of risk that emerge from social and political dialogue among stakeholders. This paper argues that the emerging field of AI governance should embrace a more responsive, inclusive, and qualitative approach, one better tailored to the inherent uncertainties and dynamism of AI technology and its societal impacts. Yet the paper also describes why doing so will be difficult: computer science and digital technologies (and, by extension, efforts to govern those technologies) inherently push toward certainty and the elimination of ambiguity. Drawing on the experiences of other scientific fields that have long grappled with managing the risks of new technologies, the paper shows how qualitative approaches to risk may be better suited to the challenges of emerging technologies like AI, despite the tradeoffs of unpredictability and uncertainty.