Mason Marks (Harvard Law School; Yale Law School; University of New Hampshire Franklin Pierce School of Law; Leiden Law School, Center for Law and Digital Technologies) has posted “Automating FDA Regulation” (Duke Law Journal, Forthcoming) on SSRN. Here is the abstract:
In the twentieth century, the Food and Drug Administration (“FDA”) rose to prominence as a respected scientific agency. By the middle of the century, it had transformed the U.S. medical marketplace from an unregulated haven for dangerous products and false claims into an exemplar of public health protection. More recently, the FDA’s objectivity has increasingly been questioned. Critics argue the agency has become overly political and too accommodating to industry while lowering its standards for safety and efficacy. The FDA’s accelerated pathways for product testing and approval are partly to blame. They require lower-quality evidence, such as surrogate endpoints, and shift the FDA’s focus from premarket clinical trials toward postmarket surveillance, requiring less evidence up front while promising enhanced scrutiny on the back end. To further streamline product testing and approval, the FDA is adopting algorithmic predictions drawn from computer models and simulations enhanced by artificial intelligence (“AI”) as surrogates for direct evidence of safety and efficacy.
This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce volunteers’ exposure to risk and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Biased computer models and simulations may prioritize efficiency and speed over other values, such as maximizing safety, equity, and public health. By analyzing FDA guidance documents and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation. Specifically, the agency should incorporate principles of AI ethics into its simulation guidelines. Until better tools for evaluating models are available, and robust standards are in place to ensure their safe and equitable implementation, computer models should be limited to academic research, and FDA decisions should rely on them only when there are no suitable alternatives.