Alarie & Cockfield on Machine-Authored Texts and the Future of Scholarship

Benjamin Alarie (University of Toronto – Faculty of Law) and Arthur J. Cockfield (Queen’s University – Faculty of Law) have posted “Will Machines Replace Us? Machine-Authored Texts and the Future of Scholarship” (Law, Technology and Humans, Volume 3(2), 2021, Forthcoming) on SSRN. Here is the abstract:

We present here the first machine-generated law review article. Our self-interest motivates us to believe that knowledge workers who write complex articles drawing upon years of research and effort are safe from AI developments. However, how reasonable is it to persist in this belief given recent advances in AI research? With that topic in mind, we caused GPT-3, a state-of-the-art AI, to generate a paper that explains “why humans will always be better lawyers, drivers, CEOs, presidents, and law professors than artificial intelligence and robots can ever hope to be.” The resulting paper, with no edits apart from giving it a title and bolding the headings generated by GPT-3, is reproduced below. It is imperfect in a humorous way. Ironically, it is publishable “as-is” only because it is machine-generated. Nevertheless, the resulting paper is good enough to give us some pause for thought. Although GPT-3 is not currently up to the task of replacing law review authors, we are far less confident that GPT-5 or GPT-100 might not be up to the task in the future.

Gutierrez on Trends in the Enforcement of Soft Law for the Governance of Artificial Intelligence

Carlos Ignacio Gutierrez (ASU Law) has posted “Transitioning from Ideas to Action: Trends in the Enforcement of Soft Law for the Governance of Artificial Intelligence” on SSRN. Here is the abstract:

As a governance tool, the advantages of soft law (e.g., lack of jurisdiction, minimal barriers to entry, and a disposition for experimentation) make it a viable alternative for managing emerging technologies that are continuously evolving. A barrier to soft law’s utilization is its most-cited weakness: a reliance on the alignment of incentives for its enforcement. Nevertheless, organizations throughout the globe have created mechanisms to ensure that the ideas within these programs are transformed into action. This article explores the trends in the use of such mechanisms within soft law programs to govern methods and applications of artificial intelligence (AI). Using a database of over 600 AI soft law programs, this piece identifies the diverse array of options available to organizations in their efforts to implement and enforce their programs.

Kingsman on the UK’s Public Sector AI Transparency Standard

Nigel Kingsman (Holistic AI) et al. have posted “Public Sector AI Transparency Standard” on SSRN. Here is the abstract:

In releasing the Algorithmic Transparency Standard, the UK government has reiterated its commitment to greater algorithmic transparency in the public sector. The Standard signals that the UK government is both pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical, practitioner-led experience, enabling coherent, widespread adoption. The two-tier approach of the Algorithmic Transparency Standard encourages transparency inclusivity across distinct audiences, facilitating trust across algorithm stakeholders. Moreover, implementation of the Standard within the UK’s public sector can be expected to inform standards more widely, influencing best practice in the private sector. This article provides a summary and commentary of the text.

Marks on Automating FDA Regulation

Mason Marks (Harvard Law School; Yale Law School; University of New Hampshire Franklin Pierce School of Law; Leiden Law School, Center for Law and Digital Technologies) has posted “Automating FDA Regulation” (Duke Law Journal, Forthcoming) on SSRN. Here is the abstract:

In the twentieth century, the Food and Drug Administration (“FDA”) rose to prominence as a respected scientific agency. By the middle of the century, it transformed the U.S. medical marketplace from an unregulated haven for dangerous products and false claims to a respected exemplar of public health. More recently, the FDA’s objectivity has increasingly been questioned. Critics argue the agency has become overly political and too accommodating to industry while lowering its standards for safety and efficacy. The FDA’s accelerated pathways for product testing and approval are partly to blame. They require lower quality evidence, such as surrogate endpoints, and shift the FDA’s focus from premarket clinical trials toward postmarket surveillance, requiring less evidence up front while promising enhanced scrutiny on the back end. To further streamline product testing and approval, the FDA is adopting algorithmic predictions, from computer models and simulations enhanced by artificial intelligence (“AI”), as surrogates for direct evidence of safety and efficacy.

This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product that is regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce the exposure of volunteers to risks and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Bias in computer models and simulations may prioritize efficiency and speed over other values such as maximizing safety, equity, and public health. By analyzing FDA guidance documents, and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation. Specifically, the agency should incorporate principles of AI ethics into simulation guidelines. Until better tools for evaluating models are available, and robust standards are implemented to ensure their safe and equitable implementation, computer models should be limited to academic research, and FDA decisions should rely on them only when there are no suitable alternatives.

Restrepo-Amariles et al. on Computational Indicators in the Legal Profession: Can Artificial Intelligence Measure Lawyers’ Performance?

David Restrepo-Amariles (HEC Paris) et al. have posted “Computational Indicators in the Legal Profession: Can Artificial Intelligence Measure Lawyers’ Performance?” (Journal of Law, Technology and Policy, Vol. 2021, No. 2, 2021) on SSRN. Here is the abstract:

The assessment of legal professionals’ performance is increasingly important in the market for legal services, providing relevant information both to consumers and to law firms regarding the quality of legal services. In this article, we explore how computational indicators are produced to assess lawyers’ performance in courtroom litigation, analyzing the specific types of information they can generate. We capitalize on artificial intelligence (AI) methods to analyze a sample of 8,045 cases from the French Courts of Appeal, explore different associations involving lawyers, courts, and cases, and assess the strengths and flaws of the resulting metrics for evaluating the performance of legal professionals. The methods we use include natural language processing, machine learning, graph mining, and advanced visualization. Based on the examination of the resulting analytics, we uncover both the advantages and challenges of assessing performance in the legal profession through AI methods. We argue that computational indicators need to address deficiencies regarding their methodology and diffusion to users to become effective means of information in the market for legal services. We conclude by proposing adjustments to computational indicators and existing regulatory tools to achieve this purpose, seeking to pave the way for further research on this topic.

Deng & Hernandez on Algorithmic Pricing in Horizontal Merger Review

Ai Deng (Johns Hopkins University; Charles River Associates) and Cristián Hernández (NERA Economic Consulting) have posted “Algorithmic Pricing in Horizontal Merger Review: An Initial Assessment” on SSRN. Here is the abstract:

While the possibility of algorithmic price discrimination and algorithmic collusion has been extensively discussed in the global antitrust community in recent years, there has been much more limited discussion in the context of mergers. In this article, we aim to fill this gap by discussing some potential implications of algorithmic pricing on market definition, unilateral effects, coordinated effects, and remedies. Specifically, we discuss the following topics and related questions:

– Market definition. How to deal with algorithm-enhanced market/customer segmentation and how to identify relevant antitrust markets when prices are set by a “black-box” algorithm.

– Unilateral effects. How to use merging parties’ pricing algorithms to conduct merger simulations and why there are important antitrust issues related to integrating merging parties’ pricing algorithms and their data.

– Coordinated effects. What some of the recent scholarship tells us about potentially coordinated effects in a merger context.

– Remedies. Why data compatibility and collusion risk are important considerations when “divesting” merging parties’ pricing algorithms.

Almada & Dymitruk on Data Protection and Judicial Automation

Marco Almada (EUI) and Maria Dymitruk (University of Wroclaw) have posted “Data Protection and Judicial Automation” (Eleni Kosta and Ronald E. Leenes (eds), Research Handbook on EU Data Protection (Edward Elgar)) on SSRN. Here is the abstract:

The words “judicial automation” invoke a broad range of images, ranging from time-saving tools to decision-aiding tools or even quixotic ideas of robot judges. As the development of artificial intelligence technologies expands the range of possible automation, it also raises questions about the extent to which automation is admissible in judicial contexts and the safeguards required for the safe use of AI in those contexts. This chapter argues that these applications raise specific challenges for data protection law, as the use of personal data for judicial automation requires the adoption of safeguards against risks to the right to a fair trial. The chapter discusses current and proposed uses of judicial automation, identifying how they use personal data in their operation and the issues that arise from this use, such as algorithmic biases and system opacity. By connecting these issues to the safeguards required for automated decision-making and data protection by design, the chapter shows how data protection law may contribute to a fair trial in contexts of judicial automation and highlights open research questions at the interface between procedural rights and data protection.

Kershaw et al. on An Initial Examination of Computer Programs as Creative Works

Trina Kershaw (UMass), Ralph D. Clifford (UMass), Firas Khatib (UMass), and Adnan El-Nasan (UMass) have posted “An Initial Examination of Computer Programs as Creative Works” on SSRN. Here is the abstract:

Products from many domains (art, music, engineering design, literature, etc.) are considered to be creative works, but there is a misconception that computer programs are limited by set expressions and thus have no room for creativity. To determine whether computer programs are creative works, we collected programs from 23 advanced graduate students that were written to solve simple and complex bioinformatics problems. These programs were assessed for their variability of expression using a new measurement that we designed. They were also evaluated on several elements of their creativity using a version of Cropley and Kaufman’s (2012) Creative Solution Diagnosis Scale that was modified to refer to programming. We found a high degree of variation in the programs that were produced, with 11 unique solutions for the simple problem and 20 unique solutions for the complex problem. We also found higher ratings of propulsion-genesis and problematization for the complex problem than for the simple problem. This combination of variation in expression and differences in level of creativity based on program complexity suggests that computer programs, like many other products, count as creative works. Implications for the creativity literature, computer science education, and intellectual property law, particularly copyright, are discussed.