Gaske on Regulation Priorities for Artificial Intelligence Foundation Models

Matthew Gaske has posted “Regulation Priorities for Artificial Intelligence Foundation Models” (26 VAND. J. ENT. & TECH. L. 1, Forthcoming (2023)) on SSRN. Here is the abstract:

This Article responds to the call in techlaw literature for high-level frameworks to guide regulation of the development and use of Artificial Intelligence (AI) technologies. Accordingly, it adapts a generalized form of the fintech Innovation Trilemma framework to argue that a regulator can prioritize only two of three aims when considering AI oversight: 1) promoting innovation, 2) mitigating systemic risk, and 3) providing clear regulatory requirements. Specifically, this Article expressly connects legal scholarship to research in other fields focusing on “foundation model” AI and explores this kind of system’s implications for regulation priorities in geopolitical and commercial competitive contexts. These models are so named because, unlike prior AI technologies, they can readily be applied across a broad variety of use cases. These systems, such as OpenAI’s ChatGPT or Alphabet’s LaMDA, have recently rocketed to popularity and have the potential to fundamentally change many areas of life. Yet legal scholarship examining AI has insufficiently recognized the role of international and corporate competition in such a transformational field. Considering that competitive context and the Trilemma, this Article argues from a descriptive perspective that only one policy prioritization choice is needed: whether to emphasize systemic risk mitigation or clear requirements, given that prioritizing innovation is effectively a given for many governmental and private actors. It then argues that regulation should prioritize systemic risk over clarity because foundation models present a substantive change in the potential for, and nature of, systemic disruption.
Finally, the Article considers ways to mitigate the resulting lack of legal clarity for regulated parties, examines the potential roles of the public and private sectors in AI regulation under these assumptions, and argues for harm-based liability for AI providers when reasonably implementable, known technological advances could have prevented injury. This tradeoff thus promotes innovation while mitigating systemic risk from foundation AI models.