Yilmaz, Naumovska & Aggarwal on AI-Driven Labor Substitution: Evidence from Google Translate and ChatGPT

Erdem Dogukan Yilmaz (Erasmus University Rotterdam), Ivana Naumovska (INSEAD), and Vikas A. Aggarwal (INSEAD) have posted “AI-Driven Labor Substitution: Evidence from Google Translate and ChatGPT” on SSRN. Here is the abstract:

Although artificial intelligence (AI) has the potential to significantly disrupt businesses across a range of industries, we have limited empirical evidence for its substitution effect on human labor. We use Google’s introduction of neural network-based translation (GNNT) in 2016-2017 as a natural experiment to examine the substitution of human translators by AI in the context of a large online labor market. Using a difference-in-differences design, we show that the introduction of GNNT reduced the number of (human translation) transactions at both the overall market and individual translator levels. In addition, we show that GNNT had a stronger effect on translation tasks with analytical elements, as compared to those with cultural and emotional elements. In supplemental analyses, we document a similar pattern after the launch of ChatGPT using question and answer patterns in Stack Exchange forums. Our study thus offers robust and causal empirical evidence for a heterogeneous AI substitution effect on tasks performed by skilled knowledge workers. We discuss the relevance of our findings for research on competitive advantage, technology adoption, and strategy microfoundations.
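For readers unfamiliar with the method the abstract mentions, the core of a difference-in-differences design can be sketched in a few lines. The numbers below are purely hypothetical (the paper’s actual analysis uses translator-level transaction data around the GNNT rollout); the sketch only illustrates the 2x2 comparison logic under the parallel-trends assumption.

```python
# Minimal 2x2 difference-in-differences sketch. All figures are invented for
# illustration; they are not from the paper.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group minus change in the control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean monthly transactions per worker: translation gigs
# (exposed to GNNT) vs. an unaffected service category (control).
effect = did_estimate(treated_pre=100, treated_post=70,
                      control_pre=80, control_post=78)
print(effect)  # -28
```

The control group’s small decline (-2) is netted out, so the remaining -28 is the decline attributed to the treatment, provided both groups would otherwise have trended in parallel.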

Goldman on The United States’ Approach to ‘Platform’ Regulation

Eric Goldman (Santa Clara University – School of Law) has posted “The United States’ Approach to ‘Platform’ Regulation” on SSRN. Here is the abstract:

This paper summarizes the United States’ legal framework governing Internet “platforms” that publish third-party content. It highlights three key features of U.S. law: the constitutional protections for free speech and press, the statutory immunity provided by 47 U.S.C. § 230 (“Section 230”), and the limits on state regulation of the Internet. It also discusses U.S. efforts to impose mandatory transparency obligations on Internet “platforms.”

G’sell on The Digital Services Act

Florence G’sell (Sciences Po; University of Lorraine) has posted “The Digital Services Act (DSA): A General Assessment” (in Antje von Ungern-Sternberg (ed.), Content Regulation in the European Union – The Digital Services Act (Trier 2023)) on SSRN. Here is the abstract:

Effective since November 16, 2022, the Digital Services Act (DSA) introduces an innovative and pragmatic regulatory approach, utilizing novel and ingenious mechanisms to update and complement the current rules governing online platforms while adapting to their present characteristics. This article presents and comments on the main features of the DSA, while highlighting the potential challenges that could arise during its implementation. The first section outlines the five key aspects of the DSA, including the asymmetric nature of the Regulation, which adjusts rules and obligations to suit the size and activities of regulated entities; the preservation of the exemption from liability established by the E-Commerce Directive, along with the inclusion of a new Good Samaritan clause; the creation of new obligations in content moderation to ensure the effective combating of objectionable content and the protection of users’ rights; the establishment of specific obligations to protect users and consumers and respond to crisis situations; and finally, the original provisions concerning the enforcement of the DSA. The second part of the article concentrates on identifying the potential challenges of implementing the DSA, focusing specifically on obstacles that could hinder the text’s effective application, potential difficulties arising from provisions related to managing systemic risks, and the complex adaptation of the DSA to emerging technologies. Ultimately, while the DSA is undoubtedly an innovative, necessary, and commendable initiative, its ability to address the most pressing issues of the contemporary internet will only become clear upon its practical implementation.

Cortez & Sage on The Disembodied First Amendment

Nathan Cortez (SMU – Dedman School of Law) and William M. Sage (Texas A&M University School of Law) have posted “The Disembodied First Amendment” (100 Washington University Law Review 707 (2023)) on SSRN. Here is the abstract:

First Amendment doctrine is becoming disembodied—increasingly detached from human speakers and listeners. Corporations claim that their speech rights limit government regulation of everything from product labeling to marketing to ordinary business licensing. Courts extend to commercial speech protections that were ordinarily reserved for core political and religious speech. And now, we are told, automated information generated for cryptocurrencies, robocalling, and social media bots is also protected speech under the Constitution. Where does it end? It begins, no doubt, with corporate and commercial speech. We show, however, that heightened protection for corporate and commercial speech is built on several “artifices”: dubious precedents, doctrines, assumptions, and theoretical grounds that have elevated corporate and commercial speech rights over the last century. This Article offers several ways to deconstruct these artifices, re-tether the First Amendment to natural speakers and listeners, and thus reclaim the individual, political, and social objectives of the First Amendment.

Dothan on Facing Up To Internet Giants

Shai Dothan (University of Copenhagen – iCourts – Centre of Excellence for International Courts) has posted “Facing Up To Internet Giants” (Duke Journal of Comparative & International Law, Forthcoming) on SSRN. Here is the abstract:

Mancur Olson claimed that concentrated interests win against diffuse interests even in advanced democracies. Multinational companies, for example, work well in unison to suit their interests. The rest of the public is not motivated or informed enough to resist them. In contrast, other scholars argued that diffuse interests may be able to fight back, but only when certain conditions prevail. One of the conditions for the success of diffuse interests is the intervention of national and international courts. Courts are able to fix problems affecting diffuse interests. Courts can also initiate deliberation that can indirectly empower diffuse interests by getting them informed. This paper investigates the jurisprudence of the European Court of Human Rights (ECHR) and the Court of Justice of the European Union (CJEU). It argues that these international courts help consumers, a diffuse interest group, to succeed in their struggle against internet companies, a concentrated interest group.

Hartzog, Selinger & Gunawan on Privacy Nicks: How the Law Normalizes Surveillance

Woodrow Hartzog (Boston University School of Law; Stanford Law School Center for Internet and Society), Evan Selinger (Rochester Institute of Technology – Department of Philosophy), and Johanna Gunawan (Northeastern University Khoury College of Computer Sciences) have posted “Privacy Nicks: How the Law Normalizes Surveillance” (101 Washington University Law Review, Forthcoming) on SSRN. Here is the abstract:

Privacy law is failing to protect individuals from being watched and exposed, despite stronger surveillance and data protection rules. The problem is that our rules look to social norms to set thresholds for privacy violations, but people can get used to being observed. In this article, we argue that by ignoring de minimis privacy encroachments, the law is complicit in normalizing surveillance. Privacy law helps acclimate people to being watched by ignoring smaller, more frequent, and more mundane privacy diminutions. We call these reductions “privacy nicks,” like the proverbial “thousand cuts” that lead to death.

Privacy nicks come from the proliferation of cameras and biometric sensors on doorbells, glasses, and watches, and the drift of surveillance and data analytics into new areas of our lives like travel, exercise, and social gatherings. Under our theory of privacy nicks as the Achilles heel of surveillance law, invasive practices become routine through repeated exposures that acclimate us to being vulnerable and watched in increasingly intimate ways. With acclimation comes resignation, and this shift in attitude biases how citizens and lawmakers view reasonable measures and fair tradeoffs.

Because the law looks to norms and people’s expectations to set thresholds for what counts as a privacy violation, the normalization of these nicks results in a constant re-negotiation of privacy standards to society’s disadvantage. When this happens, the legal and social threshold for rejecting invasive new practices keeps getting redrawn, excusing ever more aggressive intrusions. In effect, the test of what privacy law allows is whatever people will tolerate. There is no rule to stop us from tolerating everything. This article provides a new theory and terminology to understand where privacy law falls short and suggests a way to escape the current surveillance spiral.

Macey-Dare on How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession

Rupert Macey-Dare (St Cross College – University of Oxford; Middle Temple; Minerva Chambers) has posted “How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession” on SSRN. Here is the abstract:

In this paper, ChatGPT is asked to provide c.150+ paragraphs of detailed prediction and insight into the following overlapping questions concerning the potential impact of ChatGPT and successor generative AI systems on the evolving practice of law and the legal professions as we know them:

• Which are the individual legal business areas where ChatGPT could make a significant/transformative impact and reduce costs and increase efficiencies?
• Where can ChatGPT use its special NLP abilities to assist in legal analysis and advice?
• Which are the specific areas where generative AI systems like ChatGPT can revolutionize and improve the legal profession?
• How can systems like ChatGPT help ordinary people with legal questions and legal problems?
• What is the likely timeframe for ChatGPT and other generative AI systems to transform legal services and the legal profession?
• What are the potential implications for new and intending law students?
• How will ChatGPT and similar systems impact professional lawyers in future?

Some of ChatGPT’s key insights and predictions (see full paper attached for detailed responses and analysis) are as follows:

ChatGPT identifies the following key individual legal business areas where it could make a significant/transformative impact and reduce costs and increase efficiencies: Alternative dispute resolution, Automated billing, Case analysis, Case management, Compliance monitoring, Contract management, Contract review, Document automation, Document review, Discovery and E-discovery, Drafting legal documents, Due diligence, Expertise matching, Intellectual Property and IP management, Legal advice, Legal analytics, Legal chatbots, Legal drafting, Legal document review, Legal education, Legal marketing, Legal research, Litigation support, Natural language processing (NLP), Patent analysis, Predictive analytics, Regulatory compliance, Research, Risk assessment, Training and education, Translation and Virtual assistants.

ChatGPT flags up its special NLP abilities to assist in legal analysis and advice, particularly in the following key areas: Contract analysis, Document classification, Document summarization, Due diligence, Legal chatbots, Legal document review, Legal document summarization, Legal drafting, Legal language translation, Legal research, Named entity recognition, Predictive analytics, Regulatory compliance, Sentiment analysis and Topic modelling.

On the question of the specific areas in which generative AI systems like ChatGPT can revolutionize and improve the legal profession, ChatGPT identifies: Accessibility, Accuracy, Collaboration, Cost reduction, Customization, Decision-making, Efficiency, Error-reduction, New business and innovation, Job displacement potential, Legal research, Risk management and Scalability.

On the question of how systems like ChatGPT can help ordinary people with legal questions and legal problems, ChatGPT identifies the following areas: 24/7 availability, Automated legal services, Consistency of advice, Contract review, Cost-effectiveness, Court filings, Customization, Document preparation, Education, Empowerment, Faster response times, Language translation, Legal advice, Legal chatbots, Legal education, Legal research, Mediation and dispute resolution, Privacy, Scalability and Simplified language.

On the question of the likely timeframe for ChatGPT and other generative AI systems to transform legal services and the legal profession, ChatGPT comments that “It is difficult to predict with certainty, as it will depend on a variety of factors such as technological advancements, regulatory changes, and market demand. [However] There are several reasons to believe that the transformation of legal services through generative AI systems like ChatGPT will happen relatively quickly…the transformation of legal services through generative AI systems is likely to happen relatively quickly, potentially within the next 5-10 years.”

On the potential implications for new and intending law students, ChatGPT comments that: “It is essential for students contemplating studying law to be aware of the ability of ChatGPT and generative AI systems to perform legal jobs instead of humans in the future. These technologies are already transforming the legal industry, and their impact is likely to continue to grow in the years to come. Understanding how these systems work and how they can be used in legal practice will be crucial for anyone seeking a career in law, and may also provide a competitive advantage in the job market. By understanding the potential impact of these technologies on the legal industry, students can better prepare themselves for the changing nature of legal work and take advantage of the new opportunities that are emerging as a result of these developments.”

On the all-important question for lawyers of how ChatGPT and similar systems will impact professional lawyers in future, ChatGPT comments that: “Generative AI systems like ChatGPT have the potential to significantly impact the demand and payment for professional lawyers in the future. As these systems become more advanced and capable of performing a wider range of legal tasks, it is likely that they will begin to replace some of the work that is currently performed by human lawyers.”

“One area where this is already happening is in document review. AI systems can review and analyze large volumes of documents much more quickly and accurately than human lawyers, which can save significant time and cost for law firms and their clients. As these systems become more sophisticated, they may also be able to perform other tasks such as legal research and analysis, drafting legal documents, and even providing legal advice.”

“The impact of generative AI systems on the demand for lawyers will depend on the specific tasks and areas of law that they are able to perform. It is likely that some areas of law will be more heavily impacted than others. For example, areas such as contract law and intellectual property may be more susceptible to automation, while litigation and dispute resolution may be less so.”

“In terms of payment for professional lawyers, the impact of generative AI systems is also likely to vary depending on the specific tasks and areas of law. In some cases, these systems may allow lawyers to perform their work more quickly and efficiently, which could potentially lead to higher billable hours and increased income. However, if these systems are able to replace some of the work that is currently performed by human lawyers, it could also lead to a reduction in demand for these services and a decrease in fees.”

“One potential impact of systems like ChatGPT on the legal industry is a reduction in the demand for certain types of legal work that can be automated or performed more efficiently by AI systems. For example, tasks like document review, contract drafting, and legal research may be performed more accurately and quickly by AI systems than by humans, leading to a decrease in the number of lawyers needed to perform these tasks.”

“It is also possible that the development of AI systems like ChatGPT will lead to changes in the way that legal services are priced and delivered. As these technologies become more common, it is likely that clients will begin to expect lower costs and faster turnaround times for certain types of legal work. This could lead to increased competition among legal service providers, which in turn could put pressure on lawyers to lower their rates or find ways to deliver legal services more efficiently….it is clear that these technologies have the potential to significantly change the legal industry, and that lawyers will need to adapt in order to remain competitive and relevant in a rapidly changing market. This may involve developing new skills and knowledge related to working alongside AI systems, or focusing on areas of law that are less susceptible to automation.”

Interestingly, although ChatGPT does discuss practical contract management, IP, and evidence, it does not seem to predict inroads into academic legal analysis, statutory construction, complex case analysis, or the development of new legal thinking and principles; that is, it leaves untouched the theoretical domain of law professors, senior lawyers, and judges (although there are additional reasons why demand for these specialist lawyers is also likely to see knock-on reductions).

But for the vast majority of procedural, turn-the-handle practitioner law and practice, ChatGPT seems to be predicting a seismic sectoral shock: a reduction in human-centric legal work, an increase in legal self-help for clients and the public, and a technological transformation of the legal sector, with fundamental repricing and a manpower shock, within a timeframe of 5-10 years.

N.B. This is only one set of predictions, which could prove right or wrong, and which comes, after all, from an unconscious chatbot machine. However, it carries some credibility: it draws on a huge body of knowledge data, on the consistent rules programmed into ChatGPT itself, and on apparently coherently reasoned responses. Time will soon tell, of course…

Pettinato Oltz on ChatGPT as a Law Professor

Tammy Pettinato Oltz (University of North Dakota School of Law) has posted “ChatGPT, Professor of Law” on SSRN. Here is the abstract:

Although ChatGPT was just released by OpenAI in November 2022, legal scholars have already been delving into the implications of the new tool for legal education and the legal profession. Several scholars have recently written fascinating pieces examining ChatGPT’s ability to pass the bar, write a law review article, create legal documents, or pass a law school exam. In the spirit of those experiments, I decided to see whether ChatGPT had potential for lightening the service and teaching loads of law school professors.

To conduct my experiment, I created an imaginary law school professor with a tough but typical week of teaching- and service-related tasks ahead of her. I chose seven common tasks: creating a practice exam question, designing a hand-out for a class, writing a letter of recommendation, submitting a biography for a speaking engagement, writing opening remarks for a symposium, developing a document for a law school committee, and designing a syllabus for a new course. I then ran prompts for each task through ChatGPT to see how well the system performed the tasks.

Remarkably, ChatGPT was able to provide usable first drafts for six out of seven of the tasks assigned in only 23 minutes. Overall and unsurprisingly, ChatGPT proved to be best at those tasks that are most routine. Tasks that require more sophistication, particularly those related to teaching, were harder for ChatGPT, but still showed potential for time savings.

In this paper, I describe a typical work scenario for a hypothetical law professor, show how she might use ChatGPT, and analyze the results. I conclude that ChatGPT can drastically reduce the service-related workload of law school faculty and can also shave off time on back-end teaching tasks. This freed-up time could be used to either enhance scholarly productivity or further develop more sophisticated teaching skills.

Sarel on Restraining ChatGPT

Roee Sarel (Institute of Law and Economics, University of Hamburg) has posted “Restraining ChatGPT” (UC Law SF Journal (formerly Hastings Law Journal), Forthcoming) on SSRN. Here is the abstract:

ChatGPT is a prominent example of how Artificial Intelligence (AI) has stormed into our lives. Within a matter of weeks, this new AI—which produces coherent and human-like textual answers to questions—has managed to become an object of both admiration and anxiety. Can we trust generative AI systems, such as ChatGPT, without regulatory oversight?

Designing an effective legal framework for AI requires answering three main questions: (i) is there a market failure that requires legal intervention? (ii) should AI be governed through public regulation, tort liability, or a mixture of both? and (iii) should liability be based on strict liability or a fault-based regime such as negligence? The law and economics literature offers clear considerations for these choices, focusing on the incentives of injurers and victims to take precautions, engage in efficient activity levels, and acquire information.

This Article is the first to comprehensively apply these considerations to ChatGPT as a leading test case. As the United States is lagging behind in its response to the AI revolution, I focus on the recent proposals in the European Union to restrain AI systems, which apply a risk-based approach and combine regulation and liability. The analysis reveals that this approach does not map neatly onto the relevant distinctions in law and economics, such as market failures, unilateral versus bilateral care, and known versus unknown risks. Hence, the existing proposals may lead to various incentive distortions and inefficiencies. The Article, therefore, calls upon regulators to place a stronger emphasis on law and economics concepts in their design of AI policy.

Hargreaves on ChatGPT, Law School Exams, and Cheating

Stuart Hargreaves (The Chinese University of Hong Kong (CUHK) – Faculty of Law) has posted “‘Words Are Flowing Out Like Endless Rain Into a Paper Cup’: ChatGPT & Law School Assessments” on SSRN. Here is the abstract:

ChatGPT is a sophisticated large-language model able to answer high-level questions in a way that is undetectable by conventional plagiarism detectors. Concerns have been raised that it poses a significant risk of academic dishonesty in ‘take-home’ assessments in higher education. To evaluate this risk in the context of legal education, this project had ChatGPT generate answers to twenty-four different exams from an English-language law school based in a common law jurisdiction. It found that the system performed best on exams that were essay-based and asked students to discuss international legal instruments or general legal principles not necessarily specific to any jurisdiction. It performed worst on exams that featured problem-style or “issue spotting” questions asking students to apply an invented factual scenario to local legislation or jurisprudence. While the project suggests that for the most part conventional law school assessments are for the time being relatively immune from the threat ChatGPT brings, this is unlikely to remain the case as the technology advances. However, rather than attempt to block students from using AI as part of learning and assessment, this paper instead proposes three ways students may be taught to use it in appropriate and ethical ways. While it is clear that ChatGPT and similar AI technologies will change how universities teach and assess (across disciplines), a solution of prevention or denial is no solution at all.