Wasil et al. on Governing Dual-use Technologies: Case Studies of International Security Agreements and Lessons for AI Governance

Akash Wasil (Georgetown U) et al. have posted “Governing Dual-use Technologies: Case Studies of International Security Agreements and Lessons for AI Governance” on SSRN. Here is the abstract:

International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on arrangements governing dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.

Murray on Artificial Intelligence for Learning the Law: Generative AI for Academic Support in Law Schools and Universities – Report of Experiments

Michael D. Murray (U Kentucky) has posted “Artificial Intelligence for Learning the Law: Generative AI for Academic Support in Law Schools and Universities – Report of Experiments” on SSRN. Here is the abstract:

This document reports research conducted from December 2022 to August 2024, and in particular, Part I experiments conducted from May 20 to July 12, 2024, and Part II from August 15-27, 2024, on the use of generative AI in legal education and academic support. This study was a cross-sectional, latitudinal, qualitative evaluation of generative AI systems at a certain point in time and at the level of development of each system at that point in time. Although the topic of this study is learning the law, the results and overall approach to using an AI as a personalized learning tutor can be applied to many graduate and undergraduate programs in universities and other levels of education. This paper reports the Part I experiments and their qualitative and comparative findings comparing the performance of public-facing general purpose LLMs—Claude 3.5 Sonnet, Copilot, Gemini 1.5 Pro, and GPT-4o Omni—and a law-specific LLM with a curated legal dataset, Lexis+ AI, and it will reveal which systems performed the best as personalized, self-guided, one-on-one law tutors. It also reports the Part II experiments on using a generative AI system, Claude 3.5 Sonnet, as a personalized one-on-one tutor to improve a novice learner’s performance on objective examinations in subjects the learner has never studied.

The advancements in tutoring represented by generative AI systems have increased the pace of adoption of AI technologies to the point that GAI tools can play a significant role in academic support in law schools and universities. Generative AI tools can help a student learn and understand material better, more deeply, and notably faster than traditional means of reading, rereading, notetaking, and outlining. GAI tools, particularly Intelligent Tutoring Systems (ITS), adaptive learning platforms, and AI-augmented tutoring solutions, have shown promise in enhancing student engagement, improving learning outcomes, and providing tailored academic support. AI can explain, elaborate on, and summarize course material. It can write and administer formative assessments, and, if desired, it can write self-guided summative evaluations and grade them. AI can translate material into and from foreign languages with a fidelity to context, usage, and nuances of meaning not previously seen in machine learning or neural network translation services. AI also can visualize material using the tools of visual generative AI that literally paint pictures of the subjects and situations in the material that can overcome students’ literacy issues both in the native language of the communication and in the students’ own native languages.

Blaszczyk on Impossibility of Artificial Inventors

Matt Blaszczyk (U Michigan Law) has posted “Impossibility of Artificial Inventors” (16 Hastings Sci. & Tech. L.J. (forthcoming Dec. 2024)) on SSRN. Here is the abstract:

Recently, the United Kingdom Supreme Court decided that only natural persons can be considered inventors. A year before, the United States Court of Appeals for the Federal Circuit issued a similar decision. In fact, so have many courts around the world. This Article analyses these decisions, argues that the courts got it right, and finds that artificial inventorship is at odds with patent law doctrine, theory, and philosophy. The Article challenges the intellectual property (IP) post-humanists, exposing the analytical and normative perils of their argumentation, and recommends against getting rid of the nominally central place of humans in the law. This response to IP post-humanism rests in equal measure on patent doctrine, legal causation, and the mythology which creates and justifies the law.

Sun on The Right to Know Social Media Algorithms

Haochen Sun (The U Hong Kong Law) has posted “The Right to Know Social Media Algorithms” (18 Harvard Law & Policy Review 1 (2023)) on SSRN. Here is the abstract:

One of the most important legal issues in the age of social media is how to tackle algorithmic secrecy. Social media algorithms permeate society, yet most are developed and applied in a black-box manner with a range of serious social consequences. For example, the amplification of fake news by social media algorithms has caused tremendous harm to democratic governance and undermined pandemic relief measures. 

In addressing the problems of algorithmic secrecy, the legal protection of social media algorithms as trade secrets is a major obstacle. This article explores the possibility of recognizing a right to know algorithms as the legal basis for requiring proportionate disclosure of trade secrets pertaining to social media algorithms. This new legal right would promote algorithmic transparency in the public interest. 

The right to know, a civil liberty that enables citizens to obtain information held by the government and certain private entities, lends strong policy support to recognition of the right to know social media algorithms. As the article shows, this new right would function to protect democratic participation, public safety, and social equality, the three kinds of public interest that are of crucial importance in the algorithmic society. 

The article then discusses how this new legal right could prevail over the trade secret protection of social media algorithms, paving the way to a multi-stakeholder approach to regulating algorithmic secrecy. This new approach would empower the legislature, administration, and judiciary to determine how social media companies should effect proportionate disclosure of information on their algorithms. Its primary aim is to promote transparency of social media algorithms, to make them more intelligible, and to hold social media companies accountable should they fail to fulfil their disclosure responsibility.

Sag & Yu on The Globalization of Copyright Exceptions for AI Training

Matthew Sag (Emory U Law) and Peter K. Yu (Texas A&M U Law) have posted “The Globalization of Copyright Exceptions for AI Training” (Emory Law Journal, Vol. 74, 2025, Forthcoming) on SSRN. Here is the abstract:

Generative AI, machine learning and other computational uses of copyrighted works pose profound questions for copyright law. This article conducts a global survey of multiple countries with different legal traditions and local conditions to explore how they have attempted to answer these questions in relation to the unauthorized use of copyrighted works for AI training.

Although the world has yet to achieve international consensus on this issue, an international equilibrium is emerging. Jurisdictions with common law and civil law traditions, and with varying economic conditions, technological capabilities, political systems and cultural backgrounds, have found ways to reconcile copyright law and AI training. In this equilibrium, countries recognize that text and data mining, computational data analysis and AI training can be socially valuable and may not inherently prejudice the copyright holders’ legitimate interests. Such uses should therefore be allowed without express authorization in some, but not all, circumstances.

We identify three forces driving toward this equilibrium: (1) the centrality of the idea-expression distinction; (2) global competition in AI; and (3) the race to the middle. However, we also address factors that may upset this emerging equilibrium, including ongoing copyright litigation, partnerships and licensing deals in the United States as well as legislative and regulatory efforts in both the United States and the European Union, including the EU AI Act.

A key lesson of our cross-country survey is that globally, the binary policy debate that assumes that text and data mining and AI training must be categorically condemned or applauded has been eclipsed by a more granular debate about the specific circumstances in which the unauthorized use of copyrighted works for AI training should be allowed or prohibited. Countries that have hesitated until now to modernize their copyright laws in the area of AI training have several templates open to them and little reason for hesitation.

Orbach & Orbach on The US Is Not Prepared for the AI Electricity Demand Shock

Barak Orbach (U Arizona) and Eli Orbach (Phillips Exeter Academy) have posted “The US Is Not Prepared for the AI Electricity Demand Shock” on SSRN. Here is the abstract:

The United States power grid is increasingly strained by the surging electricity demand driven by the AI boom. Efforts to modernize the power infrastructure are unlikely to keep pace with the rising demand in the coming years. We explore why competition in AI markets may create an electricity demand shock, examine the associated social costs, and offer several policy recommendations.

Mittelsteadt on Artificial Intelligence: An Introduction for Policymakers (revised edition)

Matthew Mittelsteadt (George Mason U Mercatus Center) has posted “Artificial Intelligence: An Introduction for Policymakers (revised edition)” on SSRN. Here is the abstract:

This introduction seeks to equip a diversity of policymakers with the core concepts needed to identify, understand, and solve artificial intelligence (AI) policy challenges. AI is best conceived as an often ill-defined goal, not a monolithic general-purpose technology, driven by a diverse and ever-evolving constellation of input technologies. The document first introduces a sample of AI-related challenges to ground the importance of understanding this technology, the diversity of issues it will create, and its potential to transform law and policy. Next, it introduces AI, key terms such as machine learning, and ways that AI progress can be assessed. Finally, it introduces and explains how three key input technologies (data, microchips, and algorithms) work and make AI possible. These core technologies are known as the AI triad. Intended to serve a variety of audiences, these explanations are presented with multiple levels of depth. Technical concepts are tied to relevant policy questions, thereby guiding the application of this knowledge while illustrating the value of understanding this emerging technology beyond a surface level. This introduction to AI appears both in written form and as an ever-evolving website supported by the Mercatus Center: https://www.mercatus.org/ai-policy-guide.

Krupiy on How the Electric Toothbrush, Search Engine, Smartphone, Social Media and Artificial Intelligence Decision-Making Processes Amplify the Exercise of Power at State and Global Levels: a Media Ecology Analysis

Tetyana Krupiy (Newcastle U) has posted “How the Electric Toothbrush, Search Engine, Smartphone, Social Media and Artificial Intelligence Decision-Making Processes Amplify the Exercise of Power at State and Global Levels: a Media Ecology Analysis” (New Explorations: Studies in Culture and Communication) on SSRN. Here is the abstract:

Scholars disagree over whether the employment of artificial intelligence technologies entails an inevitable exercise of power over people or whether such technologies can be configured in such a way as to allow a plurality of possible ways to engage in governance. This article uses the media ecology approach to demonstrate the validity of the concern that seemingly mundane artificial intelligence technologies are in fact involved in the exercise of power over people. It contributes to the existing literature by showing that numerous applications of artificial intelligence that people use on an everyday basis interact to amplify one another’s effects. These technologies are the electric toothbrush, internet search engine, smartphone, social media and the use of artificial intelligence as part of the decision-making process. These effects occur at the individual, city, state and interstate levels, and they are cascading and interconnected rather than occurring on distinct planes. The exercise of power over the individual by the state and the corporations becomes difficult to disentangle. Therefore, states need to cooperate in governing artificial intelligence and technology companies if they are to meaningfully protect people from harmful effects.

Singh on GenAI and Religion: Creation, Agency, and Meaning

Dr Preet Deep Singh (Invest India) has posted “GenAI and Religion: Creation, Agency, and Meaning” on SSRN. Here is the abstract:

This paper explores the parallels between Generative Artificial Intelligence (GenAI) and religious systems in three domains: creation, agency, and meaning-making. Both offer frameworks for human engagement but differ in intent, autonomy, and moral accountability. Despite these differences, GenAI and religion share roles as creators, influencers, and meaning facilitators. We address and counter rebuttals to these parallels, highlighting GenAI’s co-constructed outputs and its impact on modern meaning-making. The paper concludes with the societal implications of these parallels in shaping future thought and action.

Cheong on Transparency and Accountability in AI Systems: Safeguarding Wellbeing in the Age of Algorithmic Decision-Making

Ben Chester Cheong (Singapore U Social Sciences) has posted “Transparency and Accountability in AI Systems: Safeguarding Wellbeing in the Age of Algorithmic Decision-Making” (Frontiers in Human Dynamics, vol. 6, 2024, DOI: 10.3389/fhumd.2024.1421273) on SSRN. Here is the abstract:

The rapid integration of artificial intelligence (AI) systems into various domains has raised concerns about their impact on individual and societal wellbeing, particularly due to the lack of transparency and accountability in their decision-making processes. This review aims to provide an overview of the key legal and ethical challenges associated with implementing transparency and accountability in AI systems. The review identifies four main thematic areas: technical approaches, legal and regulatory frameworks, ethical and societal considerations, and interdisciplinary and multi-stakeholder approaches. By synthesizing the current state of research and proposing key strategies for policymakers, this review contributes to the ongoing discourse on responsible AI governance and lays the foundation for future research in this critical area. Ultimately, the goal is to promote individual and societal wellbeing by ensuring that AI systems are developed and deployed in a transparent, accountable, and ethical manner.