Trautman on International Business, Terrorism, and the Impact of Rapid Technological Change

Lawrence J. Trautman (Prairie View A&M U College of Business) has posted “International Business, Terrorism, and the Impact of Rapid Technological Change” on SSRN. Here is the abstract:

As global conflict flourishes, technological advances have dramatically changed the economics of geopolitical conflict. During recent years, U.S. government agencies have invested heavily in facial recognition, fingerprint databases, investigative tools that provide for searching through gigabytes of text messages, email data, and similar files, and the unlocking of phones. Other significant technological developments are now on the horizon and promise additional disruption. Many of these technologies fall into the hands of multinational criminal organizations and are deployed against entities conducting international business. These are the issues and topics of this paper.

Conklin & Houston on Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship

Michael Conklin (Angelo State U Business Law) and Christopher Houston (Angelo State U) have posted “Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship” on SSRN. Here is the abstract:

The rapid advancement of artificial intelligence (AI) has had a profound impact on nearly every industry, including legal academia. As AI-driven tools like ChatGPT become more prevalent, they raise critical questions about authorship, academic integrity, and the evolving nature of legal writing. While AI offers promising benefits—such as improved efficiency in research, drafting, and analysis—it also presents ethical dilemmas related to originality, bias, and the potential homogenization of legal discourse.

One of the challenges in assessing AI’s influence on legal scholarship is the difficulty of identifying AI-generated content. Traditional plagiarism-detection methods are often inadequate, as AI does not merely copy existing text but generates novel outputs based on probabilistic language modeling. This first-of-its-kind study uses the existence of an AI idiosyncrasy to measure the use of AI in legal scholarship. This provides the first-ever empirical evidence of a sharp increase in the use of AI in legal scholarship, thus raising pressing questions about the proper role of AI in shaping legal scholarship and the practice of law. By applying a novel framework to highlight the rapidly evolving challenges at the intersection of AI and legal academia, this Essay will hopefully spark future debate on the careful balance in this area.

Chang & Lu on Balancing Mission and Market: OpenAI’s Struggle With Profit vs. Purpose

Cheng-Chi Chang (Emory U Law) and Yilin “Jenny” Lu (U Florida Levin College of Law) have posted “Balancing Mission and Market: OpenAI’s Struggle With Profit vs. Purpose” (6 Corporate and Business Law Journal (Arizona State University) 1 (2025)) on SSRN. Here is the abstract:

This article examines OpenAI’s unique organizational structure, which juxtaposes its non-profit mission with for-profit business practices, and the legal and ethical implications arising from this hybrid model. Initially established under Section 501(c)(3) of the Internal Revenue Code to advance artificial general intelligence (AGI) for the collective benefit of humanity, OpenAI has increasingly integrated commercial interests, leading to the creation of a for-profit subsidiary, OpenAI LP. This evolution has sparked scrutiny regarding the alignment of OpenAI’s activities with its original charitable objectives.

The article is structured in four parts. Part I outlines the legal framework governing 501(c)(3) organizations, emphasizing the conditions under which they may establish for-profit subsidiaries while maintaining tax-exempt status. Part II explores the shifts in OpenAI’s mission statements, the consolidation of CEO Sam Altman’s influence, and the deepening relationship with Microsoft, which has invested heavily in OpenAI and gained significant strategic influence. This section also addresses the controversies surrounding increased secrecy in OpenAI’s operations, particularly concerning AGI safety and ethical considerations. Part III discusses the potential ramifications of OpenAI losing its non-profit status, including the legal requirements for distributing its charitable assets and the precedent set by the conversion of charitable healthcare organizations into for-profit entities. Part IV explores future implications and recommendations, proposing innovative governance structures, tailored regulatory approaches, and global frameworks for overseeing AGI development.

By analyzing OpenAI’s trajectory, this article contributes to the broader discourse on the governance of non-profit entities engaged in high-stakes technological development. It underscores the importance of balancing innovation with ethical responsibilities, ensuring that the pursuit of AGI does not compromise the foundational mission of benefiting humanity. The article concludes by emphasizing the need for comprehensive legal, ethical, and governance frameworks to address the unique challenges posed by organizations operating at the intersection of cutting-edge AI technology and public benefit.

Wilf-Townsend on Artificial Intelligence and Aggregate Litigation

Daniel Wilf-Townsend (Georgetown U Law Center) has posted “Artificial Intelligence and Aggregate Litigation” (103 Wash. U. L. Rev. __ (forthcoming 2026)) on SSRN. Here is the abstract:

The era of AI litigation has begun, and it is already clear that the class action will have a distinctive role to play. AI-powered tools are often valuable because they can be deployed at scale. And the harms they cause often exist at scale as well, pointing to the class action as a key device for resolving the correspondingly numerous potential legal claims. This article presents the first general account of the complex interplay between aggregation and artificial intelligence. 

First, the article identifies a pair of effects that the use of AI tools is likely to have on the availability of class actions to pursue legal claims. While the use of increased automation by defendants will tend to militate in favor of class certification, the increased individualization enabled by AI tools will cut against it. These effects, in turn, will be strongly influenced by the substantive laws governing AI tools—especially by whether liability attaches “upstream” or “downstream” in a given course of conduct, and by the kinds of causal showings that must be made to establish liability. 

After identifying these influences, the article flips the usual script and describes how, rather than merely being a vehicle for enforcing substantive law, aggregation could actually enable new types of liability regimes. AI tools can create harms that are only demonstrable at the level of an affected group, which is likely to frustrate traditional individual claims. Aggregation creates opportunities to prove harm and assign remedies at the group level, providing a path to address this difficult problem. Policymakers hoping for fair and effective regulations should therefore attend to procedure, and aggregation in particular, as they write the substantive laws governing AI use.

Falletti on Using Predictive and Generative Algorithms in Family Law: A Comparative Perspective

Elena Falletti (Carlo Cattaneo LIUC U) has posted “Using Predictive and Generative Algorithms in Family Law: A Comparative Perspective” on SSRN. Here is the abstract:

The article discusses the use of algorithms, both predictive and generative artificial intelligence systems, in the context of fighting family abuse and child maltreatment. The research approach is based on comparative case law analysis, examining the real-world impact of these algorithms on individuals, potential biases in predictive software, and the perceived authority of GenAI in judicial decisions.

Bar-Gill & Sunstein on Algorithmic Harm: Protecting People in the Age of Artificial Intelligence

Oren Bar-Gill (Harvard Law) and Cass R. Sunstein (Harvard Law) have posted “Algorithmic Harm: Protecting People in the Age of Artificial Intelligence” on SSRN. Here is the abstract:

Will algorithms help people or hurt them? What about artificial intelligence in general? If consumers know what they need to know and do not suffer from behavioral biases, algorithms and AI are likely to be helpful. Consumers will be more likely to get what they want and need. But if consumers lack information, algorithms in particular will be able to convince them to make harmful or foolish choices. And if consumers suffer from behavioral biases, such as unrealistic optimism or a focus on the short term, algorithms will be able to produce serious harms.

This is the Introductory chapter of Algorithmic Harm: Protecting People in the Age of Artificial Intelligence, in which Oren Bar-Gill and Cass Sunstein consider the harms and benefits of AI and algorithms and catalog the different ways in which algorithms are being or may be used in consumer and other markets. The authors identify the market conditions under which these uses injure consumers and consider policy and regulatory responses that could reduce the risks consumers, investors, workers, and voters face now—and in the future. Democracy and self-government are at risk; there is a great deal that can be done to reduce that risk.

Schwarcz et al. on AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice

Daniel Schwarcz (U Minnesota Law) et al. have posted “AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice” on SSRN. Here is the abstract:

Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same number of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.

Perlman on Generative AI and the Future of Legal Scholarship

Andrew M. Perlman (Suffolk U Law) has posted “Generative AI and the Future of Legal Scholarship” on SSRN. Here is the abstract:

Since ChatGPT’s release in November 2022, legal scholars have grappled with generative AI’s implications for the law, lawyers, and legal education. Articles have examined the technology’s potential to transform the delivery of legal services, explored the attendant legal ethics concerns, identified legal and regulatory issues arising from generative AI’s widespread use, and discussed the impact of the technology on teaching and learning in law school.

By late 2024, generative AI has become so sophisticated that legal scholars now need to consider a new set of issues that relate to a core feature of the law professor’s work: the production of legal scholarship itself.

To demonstrate the growing ability of generative AI to yield new insights and draft sophisticated scholarly text, the rest of this piece contains a new theory of legal scholarship drafted exclusively by ChatGPT. In other words, the article simultaneously articulates the way in which legal scholarship will change due to AI and uses the technology itself to demonstrate the point.

The entire piece, except for the epilogue, was created by ChatGPT (OpenAI o1) in December 2024. The full transcript of the prompts and outputs is available here: https://chatgpt.com/share/676cc449-af50-8002-9145-efbfdf8ebb02, but every word of the article was drafted by generative AI. Moreover, there was no effort to generate multiple responses and then publish the best ones, though ChatGPT had to be prompted in one instance to rewrite a section in narrative form rather than as an outline.

The methodology for generating the piece was intentionally simple and started with the following prompt:

“Develop a novel conception of the future of legal scholarship that rivals some of the leading conceptions of legal scholarship. The new conception should integrate developments in generative AI and explain how scholars might use it. It should end with a series of questions that legal scholars and law schools will need to address in light of this new conception.”

After ChatGPT provided an extensive overview of its response, it was asked to generate each section of the piece using text “suitable for submission to a highly selective law review.” The first such prompt asked only for a draft of the introduction. The introduction identified four parts to the article, so ChatGPT was then asked to draft Parts I, II, III and IV in separate prompts until the entire piece was completed. Because of output limits that restrict how much content can be generated in response to a single prompt, each section of the article is relatively brief. A much more thorough version of the article could have been generated if ChatGPT had been prompted to create each sub-part of the article separately rather than to produce entire parts all at once.

The epilogue offers my own reflections on the resulting draft, which (in my view) demonstrates the creativity and linguistic sophistication of a competent legal scholar. Of course, as with any competent piece of scholarship, the article has gaps and flaws. In other words, it is far from perfect. But then again, very few pieces of legal scholarship are otherwise. Rather than focusing on these flaws, scholars should consider the profound implications of these new tools for the scholarly enterprise. I discuss some of those implications in the epilogue, but apropos of the theme of the piece, generative AI has some useful ideas for us to consider in this regard.

Gutowski & Hurley on Forging Ahead or Proceeding with Caution: Developing Policy for Generative Artificial Intelligence in Legal Education

Nachman N. Gutowski (U Nevada) and Jeremy Hurley (Appalachian Law) have posted “Forging Ahead or Proceeding with Caution: Developing Policy for Generative Artificial Intelligence in Legal Education” (Forthcoming, University of Louisville Law Review Spring 2025) on SSRN. Here is the abstract:

Generative Artificial Intelligence is rapidly being integrated into every facet of society, including a growing impact on law schools. It has become abundantly clear that there is a need to develop clear governing policies for its use and adoption in legal education. This article offers an introductory analysis of related approaches currently taken in various law schools, exploring the factors influencing these policies and their ethical implications. A comparative review of institutional policies reveals both similarities and unique approaches. Common themes include the need for balance between limited use and outright reliance, as well as the need for transparency and the promotion of academic integrity. Similarly, additional recurring concerns and considerations are explored, such as the potential impact on curricular integration and academic rigor.

Ethical and professional implications of using these tools and platforms in legal education set the stage; delving into the importance of understanding their limitations and risks, the article presents a discussion of educating students about the appropriate contexts for using AI as a learning tool. Additionally, the unique role of law school faculty governance in shaping these policies is explored, emphasizing the critical decision-making processes involved in establishing enforceable and implementable guardrails and guidelines. By examining the focus behind policies across multiple institutions, best practices and approaches begin to emerge. Takeaways include future implications and recommendations for law schools and faculty in effectively governing the emerging use of generative artificial intelligence in legal education. The implications extend beyond the walls of academia and significantly affect practicing attorneys. To prepare for this reality, law schools must think carefully about, and generate, policy approaches in line with universal goals and considerations. This article aims to provide valuable insights and recommendations for prudent governance, ultimately contributing to the ongoing discourse on the responsible and effective use of generative AI within the legal academic sphere.

Emerson on Assessing Information Literacy in the Age of Generative AI: A Call to the National Conference of Bar Examiners

Amy Emerson (Villanova U Charles Widger Law) has posted “Assessing Information Literacy in the Age of Generative AI: A Call to the National Conference of Bar Examiners” on SSRN. Here is the abstract:

Information literacy is crucial to satisfying a lawyer’s duty of technology competence by virtue of its inherent role in conducting legal research, a skill now recognized by the National Conference of Bar Examiners (NCBE) as a priority as it prepares for the NextGen Bar Exam. In light of the rapid rise in the number of attorneys facing disciplinary issues across the country, it is the NCBE’s responsibility to draw upon its rich history to address information literacy as a technological competency on the Multistate Professional Responsibility Exam to protect the public from newly licensed lawyers’ incompetent use of generative artificial intelligence.