Stuart Hargreaves (The Chinese University of Hong Kong (CUHK) – Faculty of Law) has posted “‘Words Are Flowing Out Like Endless Rain Into a Paper Cup’: ChatGPT & Law School Assessments” on SSRN. Here is the abstract:
ChatGPT is a sophisticated large language model able to answer high-level questions in a way that is undetectable by conventional plagiarism detectors. Concerns have been raised that it poses a significant risk of academic dishonesty in ‘take-home’ assessments in higher education. To evaluate this risk in the context of legal education, this project had ChatGPT generate answers to twenty-four different exams from an English-language law school based in a common law jurisdiction. It found that the system performed best on exams that were essay-based and asked students to discuss international legal instruments or general legal principles not necessarily specific to any jurisdiction. It performed worst on exams that featured problem-style or “issue spotting” questions asking students to apply local legislation or jurisprudence to an invented factual scenario. While the project suggests that for the most part conventional law school assessments are for the time being relatively immune from the threat ChatGPT brings, this is unlikely to remain the case as the technology advances. However, rather than attempt to block students from using AI as part of learning and assessment, this paper instead proposes three ways students may be taught to use it in appropriate and ethical ways. While it is clear that ChatGPT and similar AI technologies will change how universities teach and assess (across disciplines), a solution of prevention or denial is no solution at all.
Rebecca Kunkel (Rutgers Law) has posted “Artificial Intelligence, Automation, and Proletarianization of the Legal Profession” (Creighton Law Review, Vol. 56, 2022) on SSRN. Here is the abstract:
Recent advances in computer programming, broadly categorized as “artificial intelligence” (“AI”), have renewed debates over machines as viable replacements for human lawyers. Some prominent lawyers and legal scholars now adhere to a vision of the future heavily seasoned with Silicon Valley-style techno-utopianism: the legal profession may endure, but only in a form that would be almost unrecognizable today, and legal innovators will need to immerse themselves in the possibilities opened up by artificial intelligence in order to survive. For others, the view of artificial intelligence and its potential application to law is more limited, as they argue for the impossibility of automating many essential aspects of legal service. These views share key assumptions about the nature of AI technology: that technological development follows its own course and that the widespread adoption of technologies is primarily determined by objective measures of efficacy. This essay offers an alternate Marxian account of legal AI which places it in the larger history of automation and proletarianization.
Ilias Chalkidis (University of Copenhagen) has posted “ChatGPT May Pass the Bar Exam Soon, but Has a Long Way to Go for the LexGLUE Benchmark” on SSRN. Here is the abstract:
Following the hype around OpenAI’s ChatGPT conversational agent, the latest in the recent line of Large Language Models (LLMs) that demonstrate unprecedented emergent zero-shot capabilities, we audit OpenAI’s latest GPT-3.5 model, ‘gpt-3.5-turbo’, the first available ChatGPT model, on the LexGLUE benchmark in a zero-shot fashion, providing examples in a templated instruction-following format. The results indicate that ChatGPT achieves an average micro-F1 score of 49.0% across LexGLUE tasks, surpassing the baseline guessing rates. Notably, the model performs exceptionally well on some datasets, achieving micro-F1 scores of 62.8% and 70.1% on the ECtHR B and LEDGAR datasets, respectively. The code base and model predictions are available at https://github.com/coastalcph/zeroshot_lexglue.
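For readers unfamiliar with the metric, micro-F1 pools true positives, false positives, and false negatives across all labels and documents before computing F1, so frequent labels dominate the score. The sketch below is illustrative only and is not drawn from the paper's code base; the label names and predictions are invented, and labels are assumed to be set-valued per document (LexGLUE's multi-label setting).

```python
# Illustrative sketch (not from the paper): micro-averaged F1 for
# multi-label classification, the metric reported on LexGLUE.
def micro_f1(gold, pred):
    """gold, pred: parallel lists of label sets, one set per document."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # labels correctly predicted
        fp += len(p - g)   # labels predicted but not in gold
        fn += len(g - p)   # gold labels the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical ECtHR-style article labels for three documents.
gold = [{"art3"}, {"art5", "art6"}, {"art8"}]
pred = [{"art3"}, {"art6"}, {"art8", "art10"}]
print(round(micro_f1(gold, pred), 3))  # → 0.75
```

Pooling counts globally (rather than averaging per-label F1, i.e. macro-F1) is the standard choice when label frequencies are highly skewed, as they are in legal datasets.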