Cortez & Sage on The Disembodied First Amendment

Nathan Cortez (SMU – Dedman School of Law) and William M. Sage (Texas A&M University School of Law) have posted “The Disembodied First Amendment” (100 Washington University Law Review 707 (2023)) on SSRN. Here is the abstract:

First Amendment doctrine is becoming disembodied: increasingly detached from human speakers and listeners. Corporations claim that their speech rights limit government regulation of everything from product labeling to marketing to ordinary business licensing. Courts extend to commercial speech protections that were ordinarily reserved for core political and religious speech. And now, we are told, automated information generated for cryptocurrencies, robocalling, and social media bots is also protected speech under the Constitution. Where does it end? It begins, no doubt, with corporate and commercial speech. We show, however, that heightened protection for corporate and commercial speech is built on several “artifices”: dubious precedents, doctrines, assumptions, and theoretical grounds that have elevated corporate and commercial speech rights over the last century. This Article offers several ways to deconstruct these artifices, re-tether the First Amendment to natural speakers and listeners, and thus reclaim the individual, political, and social objectives of the First Amendment.

Dothan on Facing Up To Internet Giants

Shai Dothan (University of Copenhagen – iCourts – Centre of Excellence for International Courts) has posted “Facing Up To Internet Giants” (Duke Journal of Comparative & International Law, Forthcoming) on SSRN. Here is the abstract:

Mancur Olson claimed that concentrated interests win against diffuse interests even in advanced democracies. Multinational companies, for example, work well in unison to suit their interests. The rest of the public is not motivated or informed enough to resist them. In contrast, other scholars argued that diffuse interests may be able to fight back, but only when certain conditions prevail. One of the conditions for the success of diffuse interests is the intervention of national and international courts. Courts are able to fix problems affecting diffuse interests. Courts can also initiate deliberation that can indirectly empower diffuse interests by getting them informed. This paper investigates the jurisprudence of the European Court of Human Rights (ECHR) and the Court of Justice of the European Union (CJEU). It argues that these international courts help consumers, a diffuse interest group, to succeed in their struggle against internet companies, a concentrated interest group.

Hartzog, Selinger & Gunawan on Privacy Nicks: How the Law Normalizes Surveillance

Woodrow Hartzog (Boston University School of Law; Stanford Law School Center for Internet and Society), Evan Selinger (Rochester Institute of Technology – Department of Philosophy), and Johanna Gunawan (Northeastern University Khoury College of Computer Sciences) have posted “Privacy Nicks: How the Law Normalizes Surveillance” (101 Washington University Law Review, Forthcoming) on SSRN. Here is the abstract:

Privacy law is failing to protect individuals from being watched and exposed, despite stronger surveillance and data protection rules. The problem is that our rules look to social norms to set thresholds for privacy violations, but people can get used to being observed. In this article, we argue that by ignoring de minimis privacy encroachments, the law is complicit in normalizing surveillance. Privacy law helps acclimate people to being watched by ignoring smaller, more frequent, and more mundane privacy diminutions. We call these reductions “privacy nicks,” like the proverbial “thousand cuts” that lead to death.

Privacy nicks come from the proliferation of cameras and biometric sensors on doorbells, glasses, and watches, and the drift of surveillance and data analytics into new areas of our lives like travel, exercise, and social gatherings. Under our theory of privacy nicks as the Achilles heel of surveillance law, invasive practices become routine through repeated exposures that acclimate us to being vulnerable and watched in increasingly intimate ways. With acclimation comes resignation, and this shift in attitude biases how citizens and lawmakers view reasonable measures and fair tradeoffs.

Because the law looks to norms and people’s expectations to set thresholds for what counts as a privacy violation, the normalization of these nicks results in a constant re-negotiation of privacy standards to society’s disadvantage. When this happens, the legal and social threshold for rejecting invasive new practices keeps getting redrawn, excusing ever more aggressive intrusions. In effect, the test of what privacy law allows is whatever people will tolerate. There is no rule to stop us from tolerating everything. This article provides a new theory and terminology to understand where privacy law falls short and suggests a way to escape the current surveillance spiral.

Macey-Dare on How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession

Rupert Macey-Dare (St Cross College – University of Oxford; Middle Temple; Minerva Chambers) has posted “How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession” on SSRN. Here is the abstract:

In this paper, ChatGPT is asked to provide more than 150 paragraphs of detailed predictions and insights on the following overlapping questions concerning the potential impact of ChatGPT and successor generative AI systems on the evolving practice of law and the legal professions as we know them:

• In which individual legal business areas could ChatGPT make a significant or transformative impact, reducing costs and increasing efficiency?
• Where can ChatGPT use its special NLP abilities to assist in legal analysis and advice?
• In which specific areas can generative AI systems like ChatGPT revolutionize and improve the legal profession?
• How can systems like ChatGPT help ordinary people with legal questions and legal problems?
• What is the likely timeframe for ChatGPT and other generative AI systems to transform legal services and the legal profession?
• What are the potential implications for new and intending law students?
• How will ChatGPT and similar systems impact professional lawyers in future?

Some of ChatGPT’s key insights and predictions (see full paper attached for detailed responses and analysis) are as follows:

ChatGPT identifies the following key individual legal business areas where it could make a significant or transformative impact, reducing costs and increasing efficiency: Alternative dispute resolution, Automated billing, Case analysis, Case management, Compliance monitoring, Contract management, Contract review, Document automation, Document review, Discovery and E-discovery, Drafting legal documents, Due diligence, Expertise matching, Intellectual Property and IP management, Legal advice, Legal analytics, Legal chatbots, Legal drafting, Legal document review, Legal education, Legal marketing, Legal research, Litigation support, Natural language processing (NLP), Patent analysis, Predictive analytics, Regulatory compliance, Research, Risk assessment, Training and education, Translation and Virtual assistants.

ChatGPT flags up its special NLP abilities to assist in legal analysis and advice, particularly in the following key areas: Contract analysis, Document classification, Document summarization, Due diligence, Legal chatbots, Legal document review, Legal document summarization, Legal drafting, Legal language translation, Legal research, Named entity recognition, Predictive analytics, Regulatory compliance, Sentiment analysis and Topic modelling.

On the question of the specific areas in which generative AI systems like ChatGPT can revolutionize and improve the legal profession, ChatGPT identifies: Accessibility, Accuracy, Collaboration, Cost reduction, Customization, Decision-making, Efficiency, Error-reduction, New business and innovation, Job displacement potential, Legal research, Risk management and Scalability.

On the question of how systems like ChatGPT can help ordinary people with legal questions and legal problems, ChatGPT identifies the following areas: 24/7 availability, Automated legal services, Consistency of advice, Contract review, Cost-effectiveness, Court filings, Customization, Document preparation, Education, Empowerment, Faster response times, Language translation, Legal advice, Legal chatbots, Legal education, Legal research, Mediation and dispute resolution, Privacy, Scalability and Simplified language.

On the question of the likely timeframe for ChatGPT and other generative AI systems to transform legal services and the legal profession, ChatGPT comments that “It is difficult to predict with certainty, as it will depend on a variety of factors such as technological advancements, regulatory changes, and market demand. [However] there are several reasons to believe that the transformation of legal services through generative AI systems like ChatGPT will happen relatively quickly…the transformation of legal services through generative AI systems is likely to happen relatively quickly, potentially within the next 5-10 years.”

On the potential implications for new and intending law students, ChatGPT comments that: “It is essential for students contemplating studying law to be aware of the ability of ChatGPT and generative AI systems to perform legal jobs instead of humans in the future. These technologies are already transforming the legal industry, and their impact is likely to continue to grow in the years to come. Understanding how these systems work and how they can be used in legal practice will be crucial for anyone seeking a career in law, and may also provide a competitive advantage in the job market. By understanding the potential impact of these technologies on the legal industry, students can better prepare themselves for the changing nature of legal work and take advantage of the new opportunities that are emerging as a result of these developments.”

On the all-important question for lawyers of how ChatGPT and similar systems will impact professional lawyers in future, ChatGPT comments that: “Generative AI systems like ChatGPT have the potential to significantly impact the demand and payment for professional lawyers in the future. As these systems become more advanced and capable of performing a wider range of legal tasks, it is likely that they will begin to replace some of the work that is currently performed by human lawyers.”

“One area where this is already happening is in document review. AI systems can review and analyze large volumes of documents much more quickly and accurately than human lawyers, which can save significant time and cost for law firms and their clients. As these systems become more sophisticated, they may also be able to perform other tasks such as legal research and analysis, drafting legal documents, and even providing legal advice.”

“The impact of generative AI systems on the demand for lawyers will depend on the specific tasks and areas of law that they are able to perform. It is likely that some areas of law will be more heavily impacted than others. For example, areas such as contract law and intellectual property may be more susceptible to automation, while litigation and dispute resolution may be less so.”

“In terms of payment for professional lawyers, the impact of generative AI systems is also likely to vary depending on the specific tasks and areas of law. In some cases, these systems may allow lawyers to perform their work more quickly and efficiently, which could potentially lead to higher billable hours and increased income. However, if these systems are able to replace some of the work that is currently performed by human lawyers, it could also lead to a reduction in demand for these services and a decrease in fees.”

“One potential impact of systems like ChatGPT on the legal industry is a reduction in the demand for certain types of legal work that can be automated or performed more efficiently by AI systems. For example, tasks like document review, contract drafting, and legal research may be performed more accurately and quickly by AI systems than by humans, leading to a decrease in the number of lawyers needed to perform these tasks.”

“It is also possible that the development of AI systems like ChatGPT will lead to changes in the way that legal services are priced and delivered. As these technologies become more common, it is likely that clients will begin to expect lower costs and faster turnaround times for certain types of legal work. This could lead to increased competition among legal service providers, which in turn could put pressure on lawyers to lower their rates or find ways to deliver legal services more efficiently….it is clear that these technologies have the potential to significantly change the legal industry, and that lawyers will need to adapt in order to remain competitive and relevant in a rapidly changing market. This may involve developing new skills and knowledge related to working alongside AI systems, or focusing on areas of law that are less susceptible to automation.”

Interestingly, although ChatGPT does discuss practical contract management, IP, and evidence, it does not seem to predict inroads into academic legal analysis, statutory construction, complex case analysis, or the development of new legal thinking and principles; that is, it leaves untouched the theoretical domain of law professors, senior lawyers, and judges (although there are additional reasons to expect knock-on reductions in demand for these specialist lawyers too).

But for the vast majority of procedural, turn-the-handle practitioner law and practice, ChatGPT seems to be predicting a seismic sectoral shock: a reduction in human-centric legal work, an increase in legal self-help for clients and the public, and a technological transformation of the legal sector, with fundamental repricing and manpower shocks, within a timeframe of 5-10 years.

N.B. This is only one set of predictions, which could prove right or wrong, and it comes from an unconscious chatbot. It carries some credibility, however, in that it draws on a huge body of knowledge data, on the consistent rules programmed into ChatGPT itself, and on apparently coherently reasoned responses. Time will soon tell, of course…

Pettinato Oltz on ChatGPT as a Law Professor

Tammy Pettinato Oltz (University of North Dakota School of Law) has posted “ChatGPT, Professor of Law” on SSRN. Here is the abstract:

Although ChatGPT was just released by OpenAI in November 2022, legal scholars have already been delving into the implications of the new tool for legal education and the legal profession. Several scholars have recently written fascinating pieces examining ChatGPT’s ability to pass the bar, write a law review article, create legal documents, or pass a law school exam. In the spirit of those experiments, I decided to see whether ChatGPT had potential for lightening the service and teaching loads of law school professors.

To conduct my experiment, I created an imaginary law school professor with a tough but typical week of teaching- and service-related tasks ahead of her. I chose seven common tasks: creating a practice exam question, designing a handout for a class, writing a letter of recommendation, submitting a biography for a speaking engagement, writing opening remarks for a symposium, developing a document for a law school committee, and designing a syllabus for a new course. I then ran prompts for each task through ChatGPT to see how well the system performed the tasks.
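
As a rough illustration of what “running prompts for each task through ChatGPT” could look like if scripted, the short Python sketch below batch-submits one prompt per task to the gpt-3.5-turbo API and collects the drafts. It is purely hypothetical: the abstract does not say whether the web interface or the API was used, and the task prompts below are invented stand-ins for the ones described in the paper.

# Hypothetical sketch: batch-run one prompt per faculty task through the
# ChatGPT API (gpt-3.5-turbo) and collect the resulting drafts.
# The task list and prompts are invented for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

tasks = {
    "practice_exam_question": "Draft a practice essay question for a first-year Torts course on negligence.",
    "recommendation_letter": "Draft a letter of recommendation for a student applying for a judicial clerkship.",
    "symposium_remarks": "Draft five minutes of opening remarks for a law and technology symposium.",
}

drafts = {}
for name, prompt in tasks.items():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    drafts[name] = response["choices"][0]["message"]["content"]

for name, text in drafts.items():
    print(f"--- {name} ---")
    print(text[:300])  # preview the opening of each draft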

Remarkably, ChatGPT was able to provide usable first drafts for six of the seven assigned tasks in only 23 minutes. Overall and unsurprisingly, ChatGPT proved to be best at those tasks that are most routine. Tasks that require more sophistication, particularly those related to teaching, were harder for ChatGPT, but still showed potential for time savings.

In this paper, I describe a typical work scenario for a hypothetical law professor, show how she might use ChatGPT, and analyze the results. I conclude that ChatGPT can drastically reduce the service-related workload of law school faculty and can also shave off time on back-end teaching tasks. This freed-up time could be used to either enhance scholarly productivity or further develop more sophisticated teaching skills.

Sarel on Restraining ChatGPT

Roee Sarel (Institute of Law and Economics, University of Hamburg) has posted “Restraining ChatGPT” (UC Law SF Journal (formerly Hastings Law Journal), Forthcoming) on SSRN. Here is the abstract:

ChatGPT is a prominent example of how Artificial Intelligence (AI) has stormed into our lives. Within a matter of weeks, this new AI—which produces coherent and human-like textual answers to questions—has managed to become an object of both admiration and anxiety. Can we trust generative AI systems, such as ChatGPT, without regulatory oversight?

Designing an effective legal framework for AI requires answering three main questions: (i) is there a market failure that requires legal intervention? (ii) should AI be governed through public regulation, tort liability, or a mixture of both? and (iii) should liability be based on strict liability or a fault-based regime such as negligence? The law and economics literature offers clear considerations for these choices, focusing on the incentives of injurers and victims to take precautions, engage in efficient activity levels, and acquire information.

This Article is the first to comprehensively apply these considerations to ChatGPT as a leading test case. As the United States is lagging behind in its response to the AI revolution, I focus on the recent proposals in the European Union to restrain AI systems, which apply a risk-based approach and combine regulation and liability. The analysis reveals that this approach does not map neatly onto the relevant distinctions in law and economics, such as market failures, unilateral versus bilateral care, and known versus unknown risks. Hence, the existing proposals may lead to various incentive distortions and inefficiencies. The Article, therefore, calls upon regulators to place a stronger emphasis on law and economics concepts in their design of AI policy.

Hargreaves on ChatGPT, Law School Exams, and Cheating

Stuart Hargreaves (The Chinese University of Hong Kong (CUHK) – Faculty of Law) has posted “‘Words Are Flowing Out Like Endless Rain Into a Paper Cup’: ChatGPT & Law School Assessments” on SSRN. Here is the abstract:

ChatGPT is a sophisticated large-language model able to answer high-level questions in a way that is undetectable by conventional plagiarism detectors. Concerns have been raised that it poses a significant risk of academic dishonesty in ‘take-home’ assessments in higher education. To evaluate this risk in the context of legal education, this project had ChatGPT generate answers to twenty-four different exams from an English-language law school based in a common law jurisdiction. It found that the system performed best on exams that were essay-based and asked students to discuss international legal instruments or general legal principles not necessarily specific to any jurisdiction. It performed worst on exams that featured problem-style or “issue spotting” questions asking students to apply an invented factual scenario to local legislation or jurisprudence. While the project suggests that conventional law school assessments are, for the time being, relatively immune from the threat ChatGPT poses, this is unlikely to remain the case as the technology advances. However, rather than attempt to block students from using AI as part of learning and assessment, this paper instead proposes three ways students may be taught to use it in appropriate and ethical ways. While it is clear that ChatGPT and similar AI technologies will change how universities teach and assess (across disciplines), a solution of prevention or denial is no solution at all.

Kunkel on Artificial Intelligence, Automation, and Proletarianization of the Legal Profession

Rebecca Kunkel (Rutgers Law) has posted “Artificial Intelligence, Automation, and Proletarianization of the Legal Profession” (Creighton Law Review, Vol. 56, 2022) on SSRN. Here is the abstract:

Recent advances in computer programming, broadly categorized as “artificial intelligence” (“AI”), have renewed debates over machines as viable replacements for human lawyers. Some prominent lawyers and legal scholars now adhere to a vision of the future heavily seasoned with Silicon Valley-style techno-utopianism: the legal profession may endure, but only in a form that would be almost unrecognizable today, while legal innovators will need to immerse themselves in the possibilities opened up by artificial intelligence in order to survive. For others, the view of artificial intelligence and its potential application to law is more limited, as they argue for the impossibility of automating many essential aspects of legal service. These views share key assumptions about the nature of AI technology: that technological development follows its own course and that the widespread adoption of technologies is primarily determined by objective measures of efficacy. This essay offers an alternate Marxian account of legal AI that places it in the larger history of automation and proletarianization.

Chalkidis: ChatGPT Cannot (Yet) Pass the LexGLUE Benchmark

Ilias Chalkidis (University of Copenhagen) has posted “ChatGPT May Pass the Bar Exam Soon, but Has a Long Way to Go for the LexGLUE Benchmark” on SSRN. Here is the abstract:

Following the hype around OpenAI’s ChatGPT conversational agent, the latest in a recent line of Large Language Models (LLMs) demonstrating unprecedented emergent zero-shot capabilities, we audit OpenAI’s latest GPT-3.5 model, ‘gpt-3.5-turbo’, the first available ChatGPT model, on the LexGLUE benchmark in a zero-shot fashion, providing examples in a templated instruction-following format. The results indicate that ChatGPT achieves an average micro-F1 score of 49.0% across LexGLUE tasks, surpassing the baseline guessing rates. Notably, the model performs exceptionally well on some datasets, achieving micro-F1 scores of 62.8% and 70.1% on the ECtHR B and LEDGAR datasets, respectively. The code base and model predictions are available at https://github.com/coastalcph/zeroshot_lexglue.
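
For readers curious what such an audit involves mechanically, here is a minimal Python sketch of a zero-shot, templated classification query to ‘gpt-3.5-turbo’ scored with micro-F1. It is an illustration only, assuming the pre-v1 openai client and scikit-learn; the prompt template, label set, and examples are hypothetical stand-ins, and the authors’ actual pipeline is in the linked repository.

# Minimal illustration (not the authors' pipeline): query gpt-3.5-turbo with a
# templated, zero-shot classification prompt and score predictions with micro-F1.
# The label set, template, and evaluation examples are hypothetical stand-ins.
import openai
from sklearn.metrics import f1_score

openai.api_key = "YOUR_API_KEY"  # placeholder

LABELS = ["Article 3", "Article 6", "Article 8"]  # hypothetical label set

TEMPLATE = (
    "Given the following case facts, answer with exactly one of these labels: "
    "{labels}.\n\nFacts: {text}\n\nLabel:"
)

def classify(text):
    """Return the model's predicted label for one example, zero-shot."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": TEMPLATE.format(labels=", ".join(LABELS), text=text)}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()

# Hypothetical (facts, gold label) pairs standing in for a LexGLUE-style test set.
eval_set = [
    ("The applicant was held in pre-trial detention for four years.", "Article 6"),
    ("The applicant's home was searched without a warrant.", "Article 8"),
]

gold = [label for _, label in eval_set]
pred = [classify(text) for text, _ in eval_set]

print("micro-F1:", f1_score(gold, pred, labels=LABELS, average="micro"))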

Asay on the DMCA’s Anti-Circumvention Provisions

Clark D. Asay (Brigham Young Law) has posted “An Empirical Study of the DMCA’s Anti-Circumvention Provisions” on SSRN. Here is the abstract:

The DMCA has been a flashpoint during most of its twenty-five-year existence. One of the most controversial parts of the DMCA is Section 1201. Among other things, Section 1201 prohibits third parties from circumventing certain technological controls on copyrighted content or from trafficking in tools that enable such circumvention. However, despite its nearly quarter-of-a-century lifespan, we know very little about Section 1201 empirically. While certain aspects of the broader DMCA have received empirical assessments, Section 1201 has not. Our understanding of Section 1201 is largely based on anecdotal evidence, in the form of leading opinions from historically prominent copyright circuits. But this anecdotal evidence is hardly a solid basis for ongoing discussions about how Section 1201 is performing and whether it needs revising. In this Article, we seek to address these and other issues.

To do so, we conducted a broad-based search of Westlaw to collect every issued opinion, whether reported or not, where a court purported to apply some part of Section 1201. We then reviewed these cases to glean as much useful information about Section 1201 as possible. This review led to a number of important and, in some cases, surprising results. First, Section 1201 opinions are a relative rarity. In the nearly quarter of a century since the DMCA’s enactment, we could find only a little over 200 opinions, with only about sixty of those being published. On average, courts have issued around nine Section 1201 opinions annually, which pales in comparison to other types of copyright cases.

Second, despite the Second Circuit receiving much attention in anecdotal accounts of Section 1201, courts within it issue Section 1201 opinions infrequently. The Ninth Circuit is the dominant Section 1201 court, both in citations to its opinions and in overall number of opinions, and the Sixth and Eleventh Circuits both issue more Section 1201 opinions than the Second Circuit. This result stands in contrast to other types of copyright litigation, where the Second Circuit is a behemoth.

Third, the most common subject matter in dispute in Section 1201 cases is computer software, followed distantly by audiovisual material such as movies. Music stands in last place, showing up in only a couple of issued opinions. Debates at the time of the DMCA’s enactment were informed by widespread fears of copyright infringement relating to digital music and other types of digital content, yet Section 1201 has resulted in only a few litigations involving those subject matters.

Fourth, suits and defaults against individuals happen frequently in the Section 1201 context, with courts often assessing large statutory damages against those individuals. As we discuss in the paper, this result raises important equity issues. Fifth, although Section 1201 includes a number of statutory exceptions, these exceptions basically never make their way into issued opinions. Fair use, too, only infrequently enters courts’ Section 1201 discussions. This means, effectively, that the primary way to escape Section 1201 liability is through administrative exceptions granted by the Library of Congress on a triennial basis. But as we shall see, this process has significant holes.

Finally, plaintiffs disproportionately win Section 1201 cases. This result is somewhat inflated by the frequency of defaults against individuals, but setting those aside, plaintiffs still enjoy tremendous success under Section 1201. When looking only at opinions outside the Ninth Circuit, however, win rates become mostly even.

I conclude with several calls for DMCA reform. These include bolstering statutory exceptions and more closely tying Section 1201 to copyright infringement. Pursuing these reforms, I argue, will more faithfully align Section 1201 with its purported objectives.