Fitas et al. on Leveraging AI in Education: Benefits, Responsibilities, and Trends

Ricardo Fitas (Technical U Darmstadt) et al. have posted “Leveraging AI in Education: Benefits, Responsibilities, and Trends” on SSRN. Here is the abstract:

This chapter presents a review of the role of Artificial Intelligence (AI) in enhancing education outcomes for both students and teachers. This review includes the most recent papers discussing the impact of AI tools, including ChatGPT and other technologies, on the educational landscape. It explores the benefits of AI integration, such as personalized learning and increased efficiency, highlighting how these technologies tailor learning experiences to individual student needs and streamline administrative processes to enhance educational delivery. Adaptive learning systems and intelligent tutoring systems are also reviewed. Nevertheless, important responsibilities and ethical considerations intrinsic to the deployment of AI technologies must accompany such integration. Therefore, a critical analysis of AI’s ethical considerations and potential misuse in education is also carried out in the present chapter. By presenting real-world case studies of successful AI integration, the chapter offers evidence of AI’s potential to positively transform educational outcomes while cautioning against adoption without addressing these ethical considerations. Furthermore, this chapter’s novelty lies in exploring emerging trends and predictions in the fields of AI and education. This study shows that, based on these success cases, it is possible to benefit from the positive impacts of AI while implementing protections against detrimental outcomes for users. The chapter is highly relevant, as it provides stakeholders, users, and policymakers with a deeper understanding of the role of AI in contemporary education as a technology that aligns with educational values and the needs of society.

Duhl on Embedding AI in the Law School Classroom

Gregory M. Duhl (Mitchell Hamline School of Law) has posted “All In: Embedding AI in the Law School Classroom” on SSRN. Here is the abstract:

What is the irreducibly human element in legal education when AI can pass the bar exam, generate effective lectures, and provide personalized learning and academic support? This Article confronts that question head-on by documenting the planning and design of a comprehensive transformation of a required doctrinal law school course—first-year Contracts—with AI fully embedded throughout the course design. Instead of adding AI exercises to conventional pedagogy or creating a stand-alone AI course, this approach reimagines legal education for the AI era by integrating AI as a learning enhancer rather than a threat to be managed. The transformation serves Mitchell Hamline School of Law’s access-driven mission: AI helps create equity for diverse learners, prepares practice-ready professionals for legal practice transformed by AI, and shifts the institutional narrative from policing technology use to leveraging it pedagogically.

This Article details the roadmap I have followed for AI integration in a course that I am teaching in Spring 2026. It documents the beginning of my experience with throwing out the traditional legal education playbook and rethinking how I approach teaching using AI pedagogy within a profession in flux. Part I establishes the pedagogical rationale grounded in learning science and institutional mission. Part II describes the implementation strategy, including partnerships with instructional designers, faculty innovators, and legal technology companies. Part III details a course-wide series of specific exercises that develop AI literacy alongside doctrinal and skill mastery. Part IV addresses legitimate objections about bar preparation, analytical skills, academic integrity, and scalability beyond transactional courses. The Article concludes with a commitment to transparent empirical research through a pilot study launching in Spring 2026, acknowledging both the promise and the uncertainty of this pedagogical innovation. For legal educators grappling with AI’s rapid transformation of both education and practice, this Article offers a mission-driven, evidence-informed, yet still preliminary template for intentional change—and an invitation to experiment, adapt, and share results.

Lorteau & Sarro on Artificial Intelligence in Legal Education: A Scoping Review

Steve Lorteau (University of Ottawa – Common Law Section) and Douglas Sarro (same) have posted “Artificial Intelligence in Legal Education: A Scoping Review” (The Law Teacher, forthcoming) on SSRN. Here is the abstract:

There is a lack of consolidated knowledge regarding the potential, best practices, and limitations associated with artificial intelligence (AI) in legal education. This review synthesises 82 academic works published between January 2020 and April 2025, originating from 26 jurisdictions. Our review yields four main themes: First, current empirical evidence suggests that AI tools (e.g., large language models, chatbots) alone have so far performed below average on law school evaluations, though detailed prompts can substantially improve outputs. Second, the literature provides concrete use cases for AI tools as teaching aids, facilitators of interactive exercises, legal writing aids, and supports for skill development. Third, the literature highlights the risks of passive reliance on AI and the diversity of perspectives on appropriate AI use. Fourth, the literature suggests that AI will make legal educational content more accessible but perhaps also less transparent and more formalistic. These themes underscore the importance of evidence-based approaches to AI integration in legal education.

Strong on Responsible Regulation of Artificial Intelligence in the Legal Profession Through A Split Bar: Implications for Legal Educators

S.I. Strong (Emory U Law) has posted “Responsible Regulation of Artificial Intelligence in the Legal Profession Through A Split Bar: Implications for Legal Educators” (79 Washington University Journal of Law and Policy __ (forthcoming 2025)) on SSRN. Here is the abstract:

Artificial intelligence (AI), particularly generative AI, poses a number of unique challenges to the legal profession and legal education. As discussed in numerous empirical studies, generative AI negatively affects the performance of both students and knowledge workers, causing harm to both individuals and society at large.

This is not to say that generative AI does not have its benefits. Indeed, AI’s ability to reduce time and costs has led many people within the legal profession to become so enamored of AI that it is impossible to envision a future without automation. 

Given these realities, it would be futile to propose the elimination of generative AI from the justice sector. Instead, the goal of the legal profession and of this Essay must be to find a way to maximize the appropriate use of generative AI in law while minimizing the dangers to human autonomy and creativity. 

Even a cursory analysis of the extent and nature of the dangers of generative AI suggests that simply tweaking existing systems will not be enough. Instead, fundamental reforms of the legal profession and legal education are needed to ensure adequate protections are in place.

This Essay proposes a new way of structuring both the legal profession and legal education, building on time-tested techniques used in England while incorporating various modifications that take the special nature of generative AI into account. In so doing, the proposal contained herein not only complies with cautions enunciated by empirical scholars concerning the use of generative AI but also takes the legal profession and legal education into the twenty-first century in a logical and responsible manner.

Dooling on Ghostwriting the Government

Bridget C.E. Dooling (The Ohio State U) has posted “Ghostwriting the Government” (109 Marq. L. Rev. (forthcoming 2026)) on SSRN. Here is the abstract:

Ghostwriting is when a writer prepares materials to be issued under someone else’s name. It is very common and sometimes unseemly, but why? Ghostwriting describes a politician’s use of a speechwriter, a student’s purchase of a term paper, or a tongue-twisted admirer asking a poet to craft a love letter on his behalf. It is also what happens inside organizations every day: staff draft documents for others “up the chain” to sign. Lots of people in institutions ghostwrite, but we don’t tend to call it that. We don’t call it anything, really; it’s just writing. But when legislators rely on staff and lobbyists to draft bills, when an agency head relies on staff or contractors to write a rule, and when a judge relies on her clerk for a draft opinion, the benefits of ghostwriting come into tension with the duties of government decisionmakers. This Article argues that when a government decisionmaker has a duty to reason, ghostwriting can violate that duty. A critique based on duty enhances our ability to assess governmental ghostwriting, and it comes just in time. In the quest for government efficiency, generative AI looms large. If it doesn’t matter who writes what, so long as someone “signs off” at the end, why not hand governmental drafting over to the algorithm?

Conklin & Houston on Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship

Michael Conklin (Angelo State U Business Law) and Christopher Houston (Angelo State U) have posted “Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship” on SSRN. Here is the abstract:

The rapid advancement of artificial intelligence (AI) has had a profound impact on nearly every industry, including legal academia. As AI-driven tools like ChatGPT become more prevalent, they raise critical questions about authorship, academic integrity, and the evolving nature of legal writing. While AI offers promising benefits—such as improved efficiency in research, drafting, and analysis—it also presents ethical dilemmas related to originality, bias, and the potential homogenization of legal discourse.

One of the challenges in assessing AI’s influence on legal scholarship is the difficulty of identifying AI-generated content. Traditional plagiarism-detection methods are often inadequate, as AI does not merely copy existing text but generates novel outputs based on probabilistic language modeling. This first-of-its-kind study uses the existence of an AI idiosyncrasy to measure the use of AI in legal scholarship. This provides the first-ever empirical evidence of a sharp increase in the use of AI in legal scholarship, raising pressing questions about the proper role of AI in shaping legal scholarship and the practice of law. By applying a novel framework to highlight the rapidly evolving challenges at the intersection of AI and legal academia, this Essay will hopefully spark future debate on how to strike the careful balance required in this area.
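
In broad strokes, the measurement technique the abstract describes amounts to counting a telltale linguistic marker across a dated corpus and watching its rate change over time. The sketch below is a minimal illustration of that idea, not the authors’ actual method: the marker word (“delve,” a commonly cited AI idiosyncrasy) and the corpus format are assumptions for demonstration only.

```python
# Hypothetical sketch: track how often an assumed AI-idiosyncratic marker
# appears in a corpus of articles, normalized per 10,000 words, by year.
# The marker and sample data are illustrative, not the study's idiosyncrasy.
import re
from collections import defaultdict

def marker_rate_by_year(articles, marker=r"\bdelv(?:e|es|ed|ing)\b"):
    """articles: iterable of (year, text) pairs -> {year: hits per 10,000 words}."""
    pattern = re.compile(marker, re.IGNORECASE)
    hits, words = defaultdict(int), defaultdict(int)
    for year, text in articles:
        hits[year] += len(pattern.findall(text))
        words[year] += len(text.split())
    return {y: 10_000 * hits[y] / words[y] for y in sorted(words) if words[y]}

# A rising post-2022 rate would be consistent with increased AI-assisted drafting.
sample = [(2019, "We examine contract doctrine in depth."),
          (2024, "This Essay delves into doctrine and delves into policy.")]
print(marker_rate_by_year(sample))  # {2019: 0.0, 2024: ~2222.2}
```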

Schwarcz et al. on AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice

Daniel Schwarcz (U Minnesota Law) et al. have posted “AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice” on SSRN. Here is the abstract:

Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.
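
For readers unfamiliar with the retrieval-augmented pattern the study evaluates, the toy sketch below shows its core mechanic: fetch the most relevant legal sources for a query, then instruct the model to answer only from those sources. The word-overlap scorer and prompt wording are simplifying assumptions; Vincent AI’s actual retrieval pipeline and models are proprietary and not described in the abstract.

```python
# Minimal, self-contained sketch of retrieval-augmented generation (RAG).
# Real systems score relevance with embeddings; this toy uses word overlap.

def score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query words appearing in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def retrieve(query: str, sources: list[str], k: int = 2) -> list[str]:
    """Return the k sources most relevant to the query."""
    return sorted(sources, key=lambda s: score(query, s), reverse=True)[:k]

def build_grounded_prompt(query: str, sources: list[str]) -> str:
    """Assemble a prompt that confines the model's answer to retrieved sources."""
    context = "\n\n".join(retrieve(query, sources))
    return ("Answer using ONLY the sources below; cite them, and say 'not found' "
            f"if they are silent.\n\nSOURCES:\n{context}\n\nQUESTION: {query}")

sources = [
    "UCC 2-207: between merchants, additional terms in an acceptance become part of the contract unless...",
    "Restatement (Second) of Contracts 71: consideration requires a bargained-for exchange.",
]
print(build_grounded_prompt("When do additional terms bind merchants?", sources))
# Passing this prompt to an LLM completes the pipeline (generation step omitted).
```

Grounding the generation step in retrieved text is what the study credits for Vincent AI producing roughly no more hallucinations than unaided participants.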

Perlman on Generative AI and the Future of Legal Scholarship

Andrew M. Perlman (Suffolk U Law) has posted “Generative AI and the Future of Legal Scholarship” on SSRN. Here is the abstract:

Since ChatGPT’s release in November 2022, legal scholars have grappled with generative AI’s implications for the law, lawyers, and legal education. Articles have examined the technology’s potential to transform the delivery of legal services, explored the attendant legal ethics concerns, identified legal and regulatory issues arising from generative AI’s widespread use, and discussed the impact of the technology on teaching and learning in law school.

By late 2024, generative AI has become so sophisticated that legal scholars now need to consider a new set of issues that relate to a core feature of the law professor’s work: the production of legal scholarship itself.

To demonstrate the growing ability of generative AI to yield new insights and draft sophisticated scholarly text, the rest of this piece contains a new theory of legal scholarship drafted exclusively by ChatGPT. In other words, the article simultaneously articulates the way in which legal scholarship will change due to AI and uses the technology itself to demonstrate the point.

The entire piece, except for the epilogue, was created by ChatGPT (OpenAI o1) in December 2024. The full transcript of the prompts and outputs is available here: https://chatgpt.com/share/676cc449-af50-8002-9145-efbfdf8ebb02, but every word of the article was drafted by generative AI. Moreover, there was no effort to generate multiple responses and then publish the best ones, though ChatGPT had to be prompted in one instance to rewrite a section in narrative form rather than as an outline.

The methodology for generating the piece was intentionally simple and started with the following prompt:

“Develop a novel conception of the future of legal scholarship that rivals some of the leading conceptions of legal scholarship. The new conception should integrate developments in generative AI and explain how scholars might use it. It should end with a series of questions that legal scholars and law schools will need to address in light of this new conception.”

After ChatGPT provided an extensive overview of its response, it was asked to generate each section of the piece using text “suitable for submission to a highly selective law review.” The first such prompt asked only for a draft of the introduction. The introduction identified four parts to the article, so ChatGPT was then asked to draft Parts I, II, III, and IV in separate prompts until the entire piece was completed. Because of output limits that restrict how much content can be generated in response to a single prompt, each section of the article is relatively brief. A much more thorough version of the article could have been generated if ChatGPT had been prompted to create each sub-part of the article separately rather than prompting it to produce entire parts all at once.
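
As a rough illustration of this section-by-section workflow, the sketch below drives a model through an outline one part at a time while carrying the conversation history forward, mirroring how a ChatGPT session accumulates context. It assumes the OpenAI Python client; the model name, part list, and prompt wording are placeholders, not Perlman’s exact transcript.

```python
# Hypothetical sketch of drafting an article part by part, keeping the full
# conversation history so each draft builds on the framing and prior parts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "user", "content":
            "Develop a novel conception of the future of legal scholarship..."}]

for part in ["the Introduction", "Part I", "Part II", "Part III", "Part IV"]:
    history.append({"role": "user", "content":
                    f"Draft {part} in narrative prose suitable for submission "
                    "to a highly selective law review."})
    reply = client.chat.completions.create(model="o1", messages=history)
    draft = reply.choices[0].message.content
    history.append({"role": "assistant", "content": draft})  # carry context forward
    print(f"--- {part} ---\n{draft[:300]}\n")
```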

The epilogue offers my own reflections on the resulting draft, which (in my view) demonstrates the creativity and linguistic sophistication of a competent legal scholar. Of course, as with any competent piece of scholarship, the article has gaps and flaws. In other words, it is far from perfect. But then again, very few pieces of legal scholarship are otherwise. Rather than focusing on these flaws, scholars should consider the profound implications of these new tools for the scholarly enterprise. I discuss some of those implications in the epilogue, but apropos of the theme of the piece, generative AI has some useful ideas for us to consider in this regard.

Gutowski & Hurley on Forging Ahead or Proceeding with Caution; Developing Policy for Generative Artificial Intelligence in Legal Education

Nachman N. Gutowski (U Nevada) and Jeremy Hurley (Appalachian Law) have posted “Forging Ahead or Proceeding with Caution; Developing Policy for Generative Artificial Intelligence in Legal Education” (University of Louisville Law Review, forthcoming Spring 2025) on SSRN. Here is the abstract:

Generative Artificial Intelligence is rapidly being integrated into every facet of society, and its impact on law schools is growing. It has become abundantly clear that there is a need to develop clear governing policies for its use and adoption in legal education. This article offers an introductory analysis of related approaches currently taken in various law schools, exploring the factors influencing these policies and their ethical implications. A comparative review of institutional policies reveals both similarities and unique approaches. Common themes include the need for balance between limited use and outright reliance, as well as the need for transparency and the promotion of academic integrity. Similarly, additional recurring concerns and considerations are explored, such as the potential impact on curricular integration and academic rigor.

Ethical and professional implications of using these tools and platforms in legal education set the stage: after delving into the importance of understanding their limitations and risks, the article discusses educating students about the appropriate contexts for using AI as a learning tool. Additionally, the unique role of law school faculty governance in shaping these policies is explored, emphasizing the critical decision-making processes involved in establishing enforceable and implementable guardrails and guidelines. By looking at the focus behind policies across multiple institutions, best practices and approaches begin to emerge. Takeaways include future implications and recommendations for law schools and faculty in effectively governing the emerging use of generative artificial intelligence in legal education. The implications go beyond the walls of academia and significantly impact practicing attorneys. To prepare for this reality, law schools must think carefully about, and generate, policy approaches in line with universal goals and considerations. This article aims to provide valuable insights and recommendations for prudent governance, ultimately contributing to the ongoing discourse on the responsible and effective use of generative AI within the legal academic sphere.

Emerson on Assessing Information Literacy in the Age of Generative AI: A Call to the National Conference of Bar Examiners

Amy Emerson (Villanova U Charles Widger Law) has posted “Assessing Information Literacy in the Age of Generative AI: A Call to the National Conference of Bar Examiners” on SSRN. Here is the abstract:

Information literacy is crucial to satisfying a lawyer’s duty of technology competence by virtue of its inherent role in conducting legal research, a skill now recognized by the National Conference of Bar Examiners (NCBE) as a priority as it prepares for the NextGen Bar Exam. In light of the rapid rise in the number of attorneys facing disciplinary issues across the country, it is the NCBE’s responsibility to draw upon its rich history to address information literacy as a technological competency on the Multistate Professional Responsibility Exam, thereby protecting the public from newly licensed lawyers’ incompetent use of generative artificial intelligence.