Johnson & Shen on Teaching Law and AI

Brendan Johnson (University of Minnesota Law School) and Francis X. Shen (University of Minnesota Law School) have posted “Teaching Law and Artificial Intelligence” (22 Minnesota Journal of Law, Science & Technology) on SSRN. Here is the abstract:

In this Essay we present the first detailed analysis of how U.S. law schools are beginning to offer more courses in Law and Artificial Intelligence. Based on a review of 197 law school course catalogs available online, we find that 26% of law schools offer at least one course with significant coverage of Law & AI, and that 13% of schools offer more than one such course. Analysis of the data suggests that Law & AI courses are more likely to be offered at higher-ranked law schools.

Based on this analysis, and in light of the growing importance of AI in legal domains, we offer four recommendations. First, for those schools that do not currently offer a course, we advocate for the creation of at least one introductory course that directly engages AI issues. Second, for those schools that already have an introductory course, we suggest that AI issues be engaged more broadly throughout the curriculum. Third, to facilitate these two goals, we argue that law schools must continue to improve interdisciplinary partnerships with other university departments and local institutions that can provide expertise in AI and machine learning. Finally, to catalyze law school investment in this area, we suggest that U.S. News and World Report create a new ranking category: Best Law & AI Programs.

Recommended.

Aguirre et al. on AI Loyalty by Design

Anthony Aguirre (UC Santa Cruz), Peter Bart Reiner (University of British Columbia), Harry Surden (University of Colorado Law School), and Gaia Dempsey have posted “AI Loyalty by Design: A Framework for Governance of AI” (Oxford Handbook on AI Governance, Forthcoming 2022) on SSRN. Here is the abstract:

Personal and professional relationships between people take a wide variety of forms, many including both socially and legally enforced powers, responsibilities, and protections. Artificial intelligence (AI) systems are increasingly supplementing or even replacing people in such roles, including as advisors, assistants, and (soon) doctors, lawyers, and therapists. Yet it can be quite unclear to what degree they are bound by the same sorts of responsibilities. Much has been written about fairness, accountability, and transparency in the context of AI use and trust. But largely missing from this conversation is the concept of “AI loyalty”: for whom does an AI system work? AI systems are often created by corporations or other organizations and may be operated by an intermediary party such as a government agency or business, but the end users are often distinct individuals. This leads to potential conflict between the interests of the users and those of the creators or intermediaries, and, problematically, to AI systems that appear to act purely in users’ interests even when they do not. Here, we investigate the concept of “loyalty” in both human and AI systems, and advocate its central consideration in AI design. Systems for which high loyalty is appropriate should be designed, from the outset, to primarily and transparently benefit their end users, or at minimum to transparently communicate unavoidable conflict-of-interest tradeoffs. We discuss both the market and social advantages of high-loyalty AI systems, and potential governance frameworks in which AI loyalty can be encouraged and – in appropriate contexts – required.

Seng on Artificial Intelligence and Information Intermediaries

Daniel Kiat Boon Seng (National University of Singapore Centre for Technology, Robotics, AI and the Law) has posted “Artificial Intelligence and Information Intermediaries” (The Cambridge Handbook of Private Law and Artificial Intelligence, Ernest Lim and Phillip Morgan (eds)) on SSRN. Here is the abstract:

The explosive growth of the Internet was supported by the Communications Decency Act (CDA) and the Digital Millennium Copyright Act (DMCA). Together, these pieces of legislation have been credited with shielding Internet intermediaries from onerous liabilities and, in doing so, enabling the Internet to flourish. However, the use of machine learning systems by Internet intermediaries in their businesses threatens to upend this delicate legal balance. Would this affect the intermediaries’ CDA and DMCA immunities, or expose them to greater liability for their actions? Drawing on both substantive and empirical research, this paper concludes that automation used by intermediaries largely reinforces their immunities. The consequence is that intermediaries are left with little incentive to exercise their discretion to filter out illicit, harmful, and invalid content. These developments brought about by AI are worrisome and require a careful recalibration of the immunity rules in both the CDA and DMCA to ensure the continued relevance of these rules.