Kingsman on the UK’s Public Sector AI Transparency Standard

Nigel Kingsman (Holistic AI) et al. have posted “Public Sector AI Transparency Standard” on SSRN. Here is the abstract:

In releasing the Algorithmic Transparency Standard, the UK government has reiterated its commitment to greater algorithmic transparency in the public sector. The Standard signals that the UK government is both pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical, practitioner-led experience, enabling coherent, widespread adoption. The two-tier approach of the Algorithmic Transparency Standard encourages inclusive transparency across distinct audiences, facilitating trust among algorithm stakeholders. Moreover, implementation of the Standard within the UK’s public sector can be expected to inform standards more widely, influencing best practice in the private sector. This article provides a summary of and commentary on the text.

Marks on Automating FDA Regulation

Mason Marks (Harvard Law School; Yale Law School; University of New Hampshire Franklin Pierce School of Law; Leiden Law School, Center for Law and Digital Technologies) has posted “Automating FDA Regulation” (Duke Law Journal, Forthcoming) on SSRN. Here is the abstract:

In the twentieth century, the Food and Drug Administration (“FDA”) rose to prominence as a respected scientific agency. By the middle of the century, it transformed the U.S. medical marketplace from an unregulated haven for dangerous products and false claims to a respected exemplar of public health. More recently, the FDA’s objectivity has increasingly been questioned. Critics argue the agency has become overly political and too accommodating to industry while lowering its standards for safety and efficacy. The FDA’s accelerated pathways for product testing and approval are partly to blame. They require lower quality evidence, such as surrogate endpoints, and shift the FDA’s focus from premarket clinical trials toward postmarket surveillance, requiring less evidence up front while promising enhanced scrutiny on the back end. To further streamline product testing and approval, the FDA is adopting algorithmic predictions, from computer models and simulations enhanced by artificial intelligence (“AI”), as surrogates for direct evidence of safety and efficacy.

This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product that is regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce the exposure of volunteers to risks and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Bias in computer models and simulations may prioritize efficiency and speed over other values such as maximizing safety, equity, and public health. By analyzing FDA guidance documents and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation. Specifically, the agency should incorporate principles of AI ethics into simulation guidelines. Until better tools for evaluating models are available, and robust standards are in place to ensure their safe and equitable use, computer models should be limited to academic research, and FDA decisions should rely on them only when there are no suitable alternatives.
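The abstract describes policy rather than any specific algorithm, but to make the idea of a "digital clinical trial" concrete, here is a heavily simplified, hypothetical sketch: virtual patients are drawn from an assumed dose-response model and a treatment arm is compared against a control arm. Every number and the model itself are illustrative assumptions, not anything the FDA or the Article specifies.

```python
import random

random.seed(0)

# Toy "in silico" trial: simulate virtual patients whose biomarker response
# follows an assumed model, then compare treatment vs. control arms.
# Purely illustrative; this does not reflect actual FDA modeling practice.

def simulate_patient(treated: bool) -> float:
    baseline = random.gauss(100.0, 15.0)      # patient-level variability
    effect = -12.0 if treated else 0.0        # assumed average treatment effect
    return baseline + effect + random.gauss(0.0, 10.0)  # measurement noise

n = 1000  # virtual cohort size per arm
control = [simulate_patient(False) for _ in range(n)]
treated = [simulate_patient(True) for _ in range(n)]

mean_c = sum(control) / n
mean_t = sum(treated) / n
print(f"control mean: {mean_c:.1f}, treated mean: {mean_t:.1f}, "
      f"estimated effect: {mean_t - mean_c:.1f}")
```

Note that the "finding" such a simulation produces is entirely a function of the assumed effect and noise parameters, which is precisely the bias and standards concern the Article raises about substituting simulated evidence for premarket clinical trials.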

Restrepo-Amariles et al. on Computational Indicators in the Legal Profession: Can Artificial Intelligence Measure Lawyers’ Performance?

David Restrepo-Amariles (HEC Paris) et al. have posted “Computational Indicators in the Legal Profession: Can Artificial Intelligence Measure Lawyers’ Performance?” (Journal of Law, Technology and Policy, Vol. 2021, No. 2, 2021) on SSRN. Here is the abstract:

The assessment of legal professionals’ performance is increasingly important in the market for legal services, providing both consumers and law firms with relevant information about the quality of those services. In this article, we explore how computational indicators are produced to assess lawyers’ performance in courtroom litigation, analyzing the specific types of information they can generate. We capitalize on artificial intelligence (AI) methods to analyze a sample of 8,045 cases from the French Courts of Appeal, explore different associations involving lawyers, courts, and cases, and assess the strengths and flaws of the resulting metrics for evaluating the performance of legal professionals. The methods we use include natural language processing, machine learning, graph mining, and advanced visualization. Based on the examination of the resulting analytics, we uncover both the advantages and the challenges of assessing performance in the legal profession through AI methods. We argue that computational indicators need to address deficiencies in their methodology and in their diffusion to users before they can become effective sources of information in the market for legal services. We conclude by proposing adjustments to computational indicators and existing regulatory tools to achieve this purpose, seeking to pave the way for further research on this topic.
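The paper does not publish its code, but as a rough illustration of what a "computational indicator" of courtroom performance might look like, here is a minimal sketch with hypothetical data and field names (not the authors' method or dataset): a per-lawyer appeal success rate, shrunk toward the overall baseline to guard against small-sample artifacts.

```python
from collections import defaultdict

# Hypothetical case records: (lawyer, won_appeal). Illustrative only;
# not the authors' data or methodology.
cases = [
    ("Dupont", True), ("Dupont", True), ("Dupont", False),
    ("Martin", True), ("Martin", False),
    ("Bernard", False), ("Bernard", True), ("Bernard", True), ("Bernard", False),
]

wins, totals = defaultdict(int), defaultdict(int)
for lawyer, won in cases:
    totals[lawyer] += 1
    wins[lawyer] += int(won)

# Overall baseline success rate across all cases.
baseline = sum(wins.values()) / sum(totals.values())

# Shrink each raw win rate toward the baseline; k controls how many cases
# are needed before a lawyer's own record dominates the prior. This guards
# against a lawyer with one lucky win topping the ranking -- one of the
# methodological deficiencies the article warns such metrics must address.
k = 5
ranking = sorted(totals, key=lambda l: -(wins[l] + k * baseline) / (totals[l] + k))
for lawyer in ranking:
    raw = wins[lawyer] / totals[lawyer]
    adjusted = (wins[lawyer] + k * baseline) / (totals[lawyer] + k)
    print(f"{lawyer}: raw={raw:.2f}, adjusted={adjusted:.2f} (n={totals[lawyer]})")
```

Even this toy version surfaces the article's core caution: raw outcome counts ignore case difficulty, court composition, and selection effects, so any published indicator needs methodological transparency before it can usefully inform consumers of legal services.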