Lorteau & Sarro on Artificial Intelligence in Legal Education: A Scoping Review

Steve Lorteau (University of Ottawa – Common Law Section) and Douglas Sarro (same) have posted “Artificial Intelligence in Legal Education: A Scoping Review” (The Law Teacher, forthcoming) on SSRN. Here is the abstract:

There is a lack of consolidated knowledge regarding the potential, best practices, and limitations associated with artificial intelligence (AI) in legal education. This review synthesises 82 academic works published between January 2020 and April 2025 originating from 26 jurisdictions. Our review yields four main themes: First, current empirical evidence suggests that AI tools (e.g., large language models, chatbots) alone have so far performed below average on law school evaluations, though detailed prompts can substantially improve outputs. Second, the literature provides concrete use cases for AI tools as teaching aids, facilitators of interactive exercises, legal writing aids, and skill development. Third, the literature highlights the risks of passive reliance on AI and diverse perspectives on appropriate AI use. Fourth, the literature suggests that AI will make legal educational content more accessible but perhaps also less transparent and more formalistic. These themes underscore the importance of evidence-based approaches to AI integration in legal education.

Huntington on AI Companions and the Lessons of Family Law

Clare Huntington (Columbia Law) has posted “AI Companions and the Lessons of Family Law” (110 Minn. L. Rev. (Forthcoming 2025)) on SSRN. Here is the abstract:

Virtual friends and lovers powered by artificial intelligence are rapidly moving to the center of our emotional and social lives. Millions of people turn to AI companions every day for conversation, romance, sexual intimacy, therapy, and education. AI companionship holds promise, potentially reducing loneliness, supporting people without access to mental health treatment, helping students learn, and offering a judgment-free space for sensitive conversations. But AI companionship also raises significant concerns. The technology’s addictiveness can undermine human relationships. Therapy bots may prove more harmful than helpful. AI companions can be emotionally abusive. And their access to the most intimate aspects of users’ lives poses distinct privacy challenges.

As lawmakers and policy experts reckon with the benefits and serious risks of AI companionship, they must account for the distinctive aspects of AI companionship. Unlike interacting with other forms of AI—being driven in an autonomous vehicle, say, or getting help with coding—people are in a relationship with their AI companion. Any regulatory approach must address this relationality, especially the human drive to attach to others and the vulnerability that comes with that attachment.

Legal scholars have long argued that the regulation of technology must account for relationality, and this Article demonstrates that family law—the law of relationships—is a ready means to do so. As a foundational matter, any effort to regulate AI companionship must explain why the legal system should act. Family law helps answer this question by debunking the widespread belief that relationships are purely a private matter. Family law establishes the strong state interest in nurturing positive relationships and addressing harm in abusive and neglectful relationships. These state interests apply not only to human relationships but also to human-AI relationships.

Family law also helps answer the question of how to regulate AI companionship. Family law recognizes, for example, that legal intervention is often necessary to shift the power imbalance that facilitates harmful relationships—a lesson that should be applied to the power imbalance between technology companies and people using AI companions. And family law teaches that expertise and licensing are necessary for mental health experts to work with a person at any age, although AI companions marketed for therapeutic purposes have not been subject to similar gatekeeping. Finally, family law holds lessons for advocacy, showing that it is possible to advance reasonable regulation notwithstanding the polarized political climate and considerable antipathy to regulating the technology industry, at least at the federal level. Family law points, for example, towards state-level interventions rather than action by Congress or federal agencies, and it demonstrates that regulations targeting minors enjoy broader acceptance than those targeting adults.

In short, AI companionship is a new kind of relationship, bringing profound and unrecognized change to the landscape of our intimate lives. Legal scholars and policymakers must start grappling with this new world now. Family law holds great promise to accelerate that reckoning. 

Martínez, Mollica & Gibson on How Poor Writing, not Specialized Concepts, Drives Processing Difficulty in Legal Language

Eric Martínez (MIT), Frank Mollica (Edinburgh), and Edward Gibson (MIT) have posted “Poor Writing, not Specialized Concepts, Drives Processing Difficulty in Legal Language” (Cognition 2022) on SSRN. Here is the abstract:

Despite their ever-increasing presence in everyday life, contracts remain notoriously inaccessible to laypeople. Why? Here, a corpus analysis (n≈225 million words) revealed that contracts contain startlingly high proportions of certain difficult-to-process features, including low-frequency jargon, center-embedded clauses (leading to long-distance syntactic dependencies), passive voice structures, and non-standard capitalization, relative to nine other baseline genres of written and spoken English. An experiment (N=184) further revealed that excerpts containing these features were recalled and comprehended at lower rates than excerpts without these features, even for experienced readers, and that center-embedded clauses inhibited recall more so than other features. These findings (a) undermine the specialized concepts account of legal theory, according to which law is a system built upon expert knowledge of technical concepts; (b) suggest such processing difficulties result largely from working-memory limitations imposed by long-distance syntactic dependencies (i.e., poor writing) as opposed to a mere lack of specialized legal knowledge; and (c) suggest editing out problematic features of legal texts would be tractable and beneficial for society at large.

Tucker on Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder

Emily Tucker (Center on Privacy & Technology at Georgetown Law) has posted “Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder” (New York University Review of Law & Social Change, Vol. 46, No. 1, 2022) on SSRN. Here is the abstract:

In the many debates about whether and how algorithmic technologies should be used in law enforcement, all sides seem to share one assumption: that, in the struggle for justice and equity in our systems of governance, the subjectivity of human judgment is something to be overcome. While there is significant disagreement about the extent to which, for example, a machine-generated risk assessment might ever be unpolluted by the problematic biases of its human creators and users, no one in the scholarly literature has so far suggested that if such a thing were achievable, it would be undesirable.

This essay argues that it only becomes possible for policing to be something other than mere brutality when the activities of policing are themselves a way of deliberating about what policing is and should be, and that algorithms are definitionally opposed to such deliberation. An algorithmic process, whether carried out by a human brain or by a computer, can only operate at all if the terms that govern its operations have fixed definitions. Fixed definitions may be useful or necessary for human endeavors—like getting bread to rise or designing a sturdy foundation for a building—which can be reduced to techniques of measurement and calculation. But the fixed definitions that underlie policing algorithms (what counts as transgression, which transgressions warrant state intervention, etc) relate to an ancient, fundamental, and enduring political question, one that cannot be expressed by equation or recipe: the question of justice. The question of justice is not one to which we can ever give a final answer, but one that must be the subject of ongoing ethical deliberation within human communities.

Recommended.

Lee on Investor Protection on Crowdfunding Platforms

Joseph Lee (School of Law, University of Manchester) has posted “Investor Protection on Crowdfunding Platforms” (The EU Crowdfunding Regulation, OUP) on SSRN. Here is the abstract:

This paper discusses the protection of investors on crowdfunding platforms under the Crowdfunding Regulation. Although there are many provisions in the regulation that protect investors, this paper concentrates specifically on those included under the heading of Chapter IV ‘Investor protection’ of the Crowdfunding Regulation.

This paper focuses on how investor protection can contribute to the objectives of crowdfunding and, in particular, how the provisions of the Crowdfunding Regulation serve this purpose. To this end, Section 2 discusses the investor-focused objectives of crowdfunding, and the role that technology can play in realising these objectives. Section 3 considers the meaning of investor protection within the scope of the Crowdfunding Regulation, and identifies areas where the current regime might be extended in the future. Section 4 discusses the categorisation of investors and its relevance for investor protection. Major provisions pertinent to investor protection are subsequently discussed in Sections 5 to 9, including the information to be provided to clients, default rate disclosure, the entry knowledge test and the simulation of ability to bear loss, the pre-contractual reflection period, and the key investment information sheet. These sections also contain reflections on the different topics discussed in order to place them in a broader context. Section 10 concludes.

Alarie & Griffin on Using Machine Learning to Crack the Tax Code

Benjamin Alarie (University of Toronto – Faculty of Law) and Bettina Xue Griffin (Blue J Legal) have posted “Using Machine Learning to Crack the Tax Code” (Tax Notes Federal, January 31, 2022, p. 661) on SSRN. Here is the abstract:

In this article, we provide general observations about how tax practitioners are beginning to learn how to leverage the insights of machine learning to “crack the tax code.” We also examine how tax practitioners are using machine learning to quantify risks for their clients and ensure that tax advice can properly withstand scrutiny from the IRS and the courts. The goal is to guide tax experts in their tax planning and to help them devise the most effective ways to resolve tax disputes, leveraging new tools and technologies.

Tiamiyu on The Impending Battle for the Soul of Online Dispute Resolution

Oladeji Tiamiyu (Harvard Law School) has posted “The Impending Battle for the Soul of Online Dispute Resolution” (Cardozo J. Conflict Resol. 21) on SSRN. Here is the abstract:

Legal professionals and disputants are increasingly recognizing the value of online dispute resolution (“ODR”). While the coronavirus pandemic forced many to resolve disputes exclusively online, potentially resulting in long-term changed preferences for different stakeholders, the pre-pandemic trend has involved a dramatic increase in technological tools that can be used for resolving disputes, particularly with facilitative technologies, artificial intelligence, and blockchains. Though this has the added benefit of increasing optionality in the dispute resolution process, these novel technologies come with their own limitations and also raise challenging ethical considerations for how ODR should be designed and implemented. In considering whether the pandemic’s tectonic shifts will have a permanent impact, this piece has important implications for the future of the legal profession, as greater reliance on ODR technologies may change what it means to be a judge, lawyer, and disputant. The impending battle for the soul of ODR raises important considerations for fairness, access to justice, and effective dispute resolution—principles that will continue to be ever-present in the field.

Moerland & Kafrouni on Online Shopping with Artificial Intelligence: What Role to play for Trade Marks?

Anke Moerland (Maastricht University – Department of International and European Law) and Christie Kafrouni have posted “Online Shopping with Artificial Intelligence: What Role to play for Trade Marks?” on SSRN. Here is the abstract:

The debate on how artificial intelligence (AI) influences intellectual property protection has so far mainly focussed on its effects for patent and copyright protection. Not much attention has been paid to the effects of artificial intelligence technology for trade mark law. In particular, what has not yet been sufficiently investigated is the question as to whether trade marks still fulfil their role in a world in which consumers are assisted by AI technology when purchasing in the online marketplace. To what extent do we still need trade marks to avoid consumer confusion? Or do the other functions of trade marks justify their continued protection? In view of the fact that intellectual property rights have a market-distorting effect, it is in society’s interest to question whether trade mark protection is still justified.

Roberts et al. on Governing Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical Outcomes

Huw Roberts (University of Oxford – Oxford Internet Institute) et al. have posted “Governing Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical Outcomes” on SSRN. Here is the abstract:

In this article, we compare the artificial intelligence (AI) strategies of China and the European Union (EU), assessing the key similarities and differences regarding what the high-level aims of each government’s strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterise China’s strategy by its current primary focus on fostering innovation and the EU’s on promoting ethical outcomes. Building on this comparative analysis, we consider where China’s AI strategy could learn from, and improve upon, the EU’s ethics-first approach to AI governance. We outline three recommendations which are to i) agree within government as to where responsibility for the ethical governance of AI should lie, ii) explicate high-level principles in an ethical manner, and iii) define and regulate high-risk applications of AI. Adopting these recommendations would enable the Chinese government better to fulfil its stated aim of governing AI ethically.