Williams on Layered Alignment

Spencer Williams (California Western Law) has posted “Layered Alignment” on SSRN. Here is the abstract:

Most artificial intelligence (AI) researchers now believe that AI represents an existential threat to humanity. The most dangerous threat posed by AI is an issue known as the alignment problem: the risk that a sufficiently intelligent and capable AI system could become misaligned with the goals and values of its human creators and instead pursue its own objectives to the detriment of humanity, including the possibility of extinction.

The tension at the heart of the alignment problem is familiar to scholars of agency, contracts, and corporate law, though it goes by a different name: the principal-agent problem. In the traditional principal-agent problem, an agent has an incentive to act in a way that advances their own personal interests rather than the interests of the principal. This divergence of interests gives rise to agency costs that decrease the value of the agency relationship. To reduce agency costs, the principal and the agent use a variety of alignment mechanisms to realign their interests, such as contracts, control rights, and fiduciary duties. Many of these alignment mechanisms have analogs in the AI context. For example, AI agents respond to incentives built into their reward functions similarly to how human agents respond to performance-based compensation.

One of the most important lessons from the literature on the principal-agent problem is that no one alignment mechanism can completely align the interests of the principal and the agent. Instead, parties in an agency relationship use a variety of alignment mechanisms to respond to different types of agency costs. For example, corporations use a mix of contracts, shareholder voting, board oversight, and fiduciary duties to align the interests of managers and shareholders. The same is true of the alignment problem: no single alignment mechanism can prevent AI misalignment. Yet despite the growing literature on AI safety, little attention has been given to the complex, interconnected nature of the alignment problem and the need for a multifaceted solution. This Article aims to fill this gap in the literature. Drawing on complexity theory, the Article argues for a “layered” approach to AI alignment in which a variety of alignment mechanisms are layered together to respond to different aspects of the alignment problem.

Layered alignment has significant implications for the governance and regulation of AI. Implementing a layered approach to AI alignment will require a high level of coordination and cooperation between public and private AI stakeholders. This need for coordination and cooperation comes at a time when there is an escalating AI arms race between leading AI companies as well as between nations. To facilitate coordination between AI stakeholders, the Article calls for the creation of an international AI regulatory agency. It is time for us all to start working together, before it is too late.

Simon on Bespoke Regulation of Artificial Intelligence

Brenda M. Simon (California Western Law) has posted “Bespoke Regulation of Artificial Intelligence” (Loyola of Los Angeles Law Review (forthcoming)) on SSRN. Here is the abstract:

The decision to regulate artificial intelligence (AI) has far-reaching consequences. Determining how to address budding applications of AI technology should depend on their effects. This article describes how regulation should be carefully tailored to avoid harm while maximizing social welfare, building on Orly Lobel’s taxonomy of regulatory tools. Part I examines the foundational difficulties in governing AI, including industry influence in regulation and deficiencies in enforcement. Part II elaborates on Lobel’s framework, detailing the benefits and limitations of a variety of tools, such as voluntary standards, soft law mechanisms, and public-private partnerships. It describes how bringing in diverse stakeholders can achieve a more practical approach to AI governance but cautions against an evaluation of AI that overlooks its effects on areas such as access, autonomy, privacy, and the environment. Part III introduces the legislative carve-out as a potential instrument in AI governance. Using the 21st Century Cures Act’s exclusion of certain low-risk Clinical Decision Support (CDS) software from FDA oversight as a case study, it evaluates the carve-out’s implications for innovation, safety, and physician liability. The article concludes by advocating for a nuanced approach to AI governance that furthers innovation while mitigating risks, underscoring the importance of tailoring regulation based on the degree of likely harm.

Selbst on Artificial Intelligence and the Discrimination Injury

Andrew D. Selbst (UCLA Law) has posted “Artificial Intelligence and the Discrimination Injury” (78 Florida Law Review __ (forthcoming 2026)) on SSRN. Here is the abstract:

For a decade, scholars have debated whether discrimination involving artificial intelligence (AI) can be captured by existing discrimination laws. This article argues that the challenge that artificial intelligence poses for discrimination law stems not from the specifics of any statute, but from the very conceptual framework of discrimination law. Discrimination today is a species of tort, concerned with rectifying individual injuries, rather than a law aimed at broadly improving social or economic equality. As a result, the doctrine centers blameworthiness and individualized notions of injury. But it is also a strange sort of tort that does not clearly define its injury. Defining the discrimination harm is difficult and contested. As a result, the doctrine skips over the injury question and treats a discrimination claim as a process question about whether a defendant acted properly in a single decisionmaking event. This tort-with-unclear-injury formulation effectively merges the questions of injury and liability: If a defendant did not act improperly, then no liability attaches because a discrimination event did not occur. Injury is tied to the single decision event and there is no room for recognizing discrimination injury without liability.

This formulation directly affects regulation of AI discrimination for two reasons: First, AI decisionmaking is distributed; it is a combination of software development, its configuration, and its application, all of which are completed at different times and usually by different parties. This means that the mental model of a single decision and decisionmaker breaks down in this context. Second, the process-based injury is fundamentally at odds with the existence of “discriminatory” technology as a concept. While we can easily conceive of discriminatory AI as a colloquial matter, if there is legally no discrimination event until the technology is used in an improper way, then the technology cannot be considered discriminatory until it is improperly used.

The analysis leads to two ultimate conclusions. First, while the applicability of disparate impact law to AI is unknown, as no court has addressed the question head-on, liability will depend in large part on the degree to which a court is willing to hold a decisionmaker (e.g., an employer, lender, or landlord) liable for using a discriminatory technology without adequate attention to the effects, for a failure to either comparison shop or fix the AI. Given the shape of the doctrine, the fact that the typical decisionmaker is not tech savvy, and the likelihood that they purchased the technology on the promise that it was non-discriminatory, whether a court would find such liability is an open question. Second, discrimination law cannot be used to create incentives or penalties for the people best able to address the problem of discriminatory AI—the developers themselves. The Article therefore argues for supplementing discrimination law with the application of a combination of consumer protection, product safety, and products liability—all legal doctrines meant to address the distribution of harmful products on the open market, and all better suited to directly addressing the products that create discriminatory harms.

Rookard on Inherently Human Functions

Landyn Rookard (Loyola U New Orleans College of Law) has posted “Inherently Human Functions” (100 Tulane Law Review (forthcoming)) on SSRN. Here is the abstract:

Federal law prohibits agencies from outsourcing “inherently governmental functions,” defined as functions that are “so intimately related to the public interest as to require performance by Federal Government employees.” Behind that short definition are decades of evolving practices and guidance that have frequently vexed agencies attempting to comply with the directive. Despite the shortcomings of the inherently governmental functions framework, this Article argues that Congress and the Executive Branch should establish a similar designation for “inherently human functions,” one that aims to combat inappropriate governmental algorithmization and guarantees a human decisionmaker under certain circumstances. 

Despite some suggestions in the literature that the inherently governmental functions designation was inspired by the constitutional prohibition on private delegation, the historical record shows that Congress and the Executive Branch developed the designation to address policy concerns about the effect that outsourcing can have on the independence, accountability, and capacity of the federal government. Algorithmic governance poses similar threats and warrants similar safeguards.

The proposed inherently human function designation should learn from the shortcomings of the inherently governmental function designation. First, it should embrace a bottom-up process for filling in the details of the framework, one that prioritizes the voices of groups most directly impacted by algorithmization. Second, the definition should focus on protecting against algorithmization of functions that could cause lasting, difficult-to-remediate harm to individuals’ wellbeing. Finally, the inherently human functions designation should be backed by robust public and private enforcement mechanisms, such as ombuds offices, inspectors general, and a private right of action for individuals directly harmed by inappropriate algorithmization.

Ginsburg on Humanist Copyright

Jane C. Ginsburg (Columbia U Law) has posted “Humanist Copyright” (6 Journal of Free Speech Law (forthcoming 2025)) on SSRN. Here is the abstract:

This exploration of the role of authorship in copyright law proceeds in three parts: historical, doctrinal, and predictive. First, I will review the development of author-focused property rights in the pre-copyright regimes of printing privileges and in early Anglo-American copyright law through the 1909 U.S. Copyright Act. Second, I will analyze the extent to which the present U.S. copyright law does (and does not) honor human authorship. Finally, I will consider the potential responses of copyright law to the claims of proprietary rights in AI-generated outputs. I will explain why the humanist orientation of U.S. copyright law validates the position of the Copyright Office and the courts that the output of an AI system will not be a “work of authorship” unless human participation has determinatively caused the creation of the output.

Almada on Technical AI Transparency: A Legal View of the Black Box

Marco Almada (Université du Luxembourg Law) has posted “Technical AI Transparency: A Legal View of the Black Box” on SSRN. Here is the abstract:

AI systems are often described as “black boxes” that must be made scrutable for oversight. This view tends to accompany a technical framing of the problem, under which some present or future approach can make AI systems transparent. In this paper, I argue that what the law expects from “transparency” is not something that can be provided by purely technical means. After proposing a taxonomy of technical approaches to transparency, I highlight how these approaches fall short of legal requirements. Many of these shortcomings are essential to the approaches in question, but they can be mitigated by combining approaches and by using other legal provisions to address manipulation risks. Legal transparency of AI is not a solely technical problem, but adequate design requirements can provide a valuable regulatory contribution.

Rättzén on Location Is All You Need: Copyright Extraterritoriality and Where to Train Your AI

Mattias Rättzén (Independent) has posted “Location Is All You Need: Copyright Extraterritoriality and Where to Train Your AI” (26 The Columbia Science and Technology Law Review 175-289 (2024)) on SSRN. Here is the abstract:

The development of artificial intelligence (“AI”) models requires vast quantities of data, which will often include copyrighted materials. The reproduction of copyrighted materials in the course of training AI models will infringe on copyright, unless there are applicable exceptions and limitations exempting such activities. In this regard, there is so far considerable divergence between jurisdictions, including the United States, the EU, the U.K., Japan, Singapore, Australia, India, Israel, and many others. In the absence of international harmonization, there is therefore a high likelihood that the same type of training activity would be considered copyright infringement in some countries but not in others.

The AI community is not blind to that risk. If copyright law restricts the development and deployment of AI, developers may decide to relocate their operations elsewhere, where the reproduction of training data is clearly not infringing. This Article concludes that there is a loophole in the international copyright system, as it currently stands, that would permit large-scale copying of training data in one country where this activity is not infringing. Once the training is done and the model is complete, developers could then make the model available to customers in other countries, even if the same training activities would have been infringing if they had occurred there. Because copyright laws are territorial in nature, by default they can only restrict infringing conduct occurring in their respective countries. From that point of view, for AI developers, location is indeed all you need.

The EU has become the first to respond to this problem by retroactively extending its text and data mining exception extraterritorially to training activities occurring in non-EU countries, once the completed AI model is placed on the EU market. While such an extraterritorial application benefits rightholders and closes the loophole now present, it makes the situation significantly more complex for developers. If other regulators decide to follow the same path as the EU, which previously happened in the data privacy context, then developers would be facing multiple, conflicting copyright laws targeting the same underlying activity. This could significantly complicate the development process for AI and potentially undermine the AI industry. This Article critically discusses these and related issues, and whether an extraterritorial application of copyright laws is compatible with territoriality norms that are supposed to respect foreign sovereignty. It also explores, in light of these difficulties, whether we should instead shift focus from regulating the inputs (i.e., the data used to train AI models) to regulating the outputs (i.e., the AI-generated content itself). Indeed, to the extent that the transnational data loophole cannot be closed without infringing upon foreign sovereignty, we may need to look at other regulatory means instead.

This Article urgently calls for a coordinated international effort in copyright law, which balances the interests of rightholders with the technical, regulatory, and economic realities faced by developers. How we resolve these issues could make or break the future of AI. If we cannot find a way to reconcile the interests of rightholders and AI stakeholders, the world may be left with a segregated and fragmented AI landscape, one in which there can only be losers and no winners.

Citation: Mattias Rättzén, Location Is All You Need: Copyright Extraterritoriality and Where To Train Your AI, 26 Colum. Sci. & Tech. L. Rev. 175 (2024)

Official source: https://journals.library.columbia.edu/index.php/stlr/article/view/13338/6542

Jayakar et al. on Same Goal, Different Paths: Contrasting Approaches to AI Regulation in China and India

Krishna Jayakar (Pennsylvania State U) et al. have posted “Same Goal, Different Paths: Contrasting Approaches to AI Regulation in China and India” on SSRN. Here is the abstract:

This paper is a comparative analysis of how two leading developing nations, China and India, are proposing to regulate artificial intelligence (AI) systems. Despite their similar circumstances as large developing economies aiming to upgrade their technology sectors and create jobs, the two countries have taken significantly different approaches to AI regulation. We discuss the reasons why, based on a review of agency reports and documents from the two countries.

Noguer I Alonso on AGENTS: A Historical Perspective 1948-2024

Miquel Noguer I Alonso (Artificial Intelligence Finance Institute) has posted “AGENTS: A Historical Perspective 1948-2024” on SSRN. Here is the abstract:

This paper provides an in-depth analysis of three fundamental types of agents in computational systems: Agent-Based Models (ABM), Reinforcement Learning (RL) agents, and Large Language Model (LLM) agents. We explore their theoretical foundations, mathematical formulations, and practical applications while examining their historical development. Through detailed mathematical analysis and case studies, we demonstrate how these agent paradigms can be integrated to create hybrid systems capable of addressing complex real-world challenges. Special attention is given to recent developments in multi-agent systems, emergence phenomena, and the convergence of different agent architectures. This work contributes to the growing body of research on intelligent agents by providing a unified framework for understanding and comparing different agent types, highlighting their strengths and limitations.

Cooper et al. on Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy, Research, and Practice

A. Feder Cooper (Microsoft Research) et al. have posted “Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy, Research, and Practice” on SSRN. Here is the abstract:

We articulate fundamental mismatches between technical methods for machine unlearning in Generative AI, and documented aspirations for broader impact that these methods could have for law and policy. These aspirations are both numerous and varied, motivated by issues that pertain to privacy, copyright, safety, and more. For example, unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model’s parameters, e.g., a particular individual’s personal data or in-copyright expression of Spiderman that was included in the model’s training data. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs, e.g., generations that closely resemble a particular individual’s data or reflect the concept of “Spiderman.” Both of these goals, the targeted removal of information from a model and the targeted suppression of information from a model’s outputs, present various technical and substantive challenges. We provide a framework for thinking rigorously about these challenges, which enables us to be clear about why unlearning is not a general-purpose solution for circumscribing generative-AI model behavior in service of broader positive impact. We aim for conceptual clarity and to encourage more thoughtful communication among machine learning (ML), law, and policy experts who seek to develop and apply technical methods for compliance with policy objectives.