Tokson on Artificial Intelligence and the Anti-Authoritarian Fourth Amendment

Matthew Tokson (U Utah S.J. Quinney College Law) has posted “Artificial Intelligence and the Anti-Authoritarian Fourth Amendment” (27 U. Penn. J. Const. L. __ (forthcoming 2025)) on SSRN. Here is the abstract:

AI-based surveillance and policing technologies facilitate authoritarian drift. That is, the systems of observation, detection, and enforcement that AI makes possible tend to reduce structural checks on executive authority and to concentrate power among fewer and fewer people. In the wrong hands, they can help authorities detect subversive behavior and discourage or punish dissent, while enabling corruption, selective enforcement, and other abuses. These effects, although subtle in today’s relatively primitive AI-enabled systems, will become increasingly significant as AI technology improves.

Today, the most influential branch of Fourth Amendment scholarship conceives of the Fourth Amendment’s central purpose as preserving citizen privacy against intrusive government observation. Another, less prominent line of scholarship emphasizes the Fourth Amendment’s role in preventing government authoritarianism, focusing on concepts like power, security, and citizen autonomy. The insights of this latter branch of Fourth Amendment theory are likely to be increasingly relevant as AI comes to play a larger role in surveillance and law enforcement.

The pro-authoritarian nature of AI law enforcement should influence how courts assess such law enforcement under the Fourth Amendment. This symposium Essay examines the role that Fourth Amendment law can play in regulating AI-enabled enforcement and preventing authoritarianism. It contends that, among other things, courts assessing whether networked camera or other sensor systems implicate the Fourth Amendment should account for the risks of unregulated, permeating surveillance by AI agents. Judges evaluating the reasonable use of force by police robots should consider the dangers of allowing AI systems to monopolize the use of force in a jurisdiction and the diminished justifications for self-defense. Likewise, courts can incorporate factors specific to the AI context into their totality of the circumstances analyses of Fourth Amendment reasonableness. Whether there is a “human in the loop” during enforcement encounters, and whether there is meaningful civilian oversight over AI-enabled enforcement programs, should play a substantial role in assessing the reasonableness of AI-centered police practices. By adapting the principles of the anti-authoritarian Fourth Amendment to the new frontier of AI law enforcement, legal actors can restrain the pro-authoritarian effects of emerging law enforcement technologies.

Saw & Tan on Unpacking Copyright Infringement Issues in the GenAI Development Lifecycle and a Peek into the Future

Cheng Lim Saw (Singapore Management U Yong Pung How Law) and Bryan Zhi Yang Tan (Singapore Management U Yong Pung How Law) have posted “Unpacking Copyright Infringement Issues in the GenAI Development Lifecycle and a Peek into the Future” on SSRN. Here is the abstract:

Generative AI (“GAI”) refers to deep learning models that ingest input data and “learn” to produce output that mimics such data when duly prompted. This feature, however, has given rise to numerous claims of infringement by the owners of copyright in the training material. Relevantly, three questions have emerged for the law of copyright: (1) whether prima facie acts of infringement are disclosed at each stage of the GAI development lifecycle; (2) whether such acts fall within the scope of the text and data mining (“TDM”) exceptions; and (3) whether (and, if so, how successfully) the fair use exception may be invoked by GAI developers as a defence to infringement claims. This paper critically examines these questions in turn and considers, in particular, their interplay with the so-called “memorisation” phenomenon. It is argued that although infringing acts might occur in the process of downloading in-copyright training material and training the GAI model in question, TDM and fair use exceptions (where available) may yet exonerate developers from copyright liability under the right conditions.

Razis & Cooper on The Federalist’s Dilemma: State AI Regulation & Pathways Forward

Evangelos Razis (George Mason U Antonin Scalia Law) and James C. Cooper (George Mason U Antonin Scalia Law) have posted “The Federalist’s Dilemma: State AI Regulation & Pathways Forward” (Harvard Journal of Law & Public Policy, Forthcoming) on SSRN. Here is the abstract:

AI has captured everybody’s imagination, especially that of policymakers. The extent to which imagination has translated into action, however, is a mixed bag. At the federal level, Congress has studied the issue, weighed grand proposals, and held countless hearings on AI but has enacted only modest legislation. While executive branch agencies and the FTC have talked a big game, their accomplishments have also been modest, mostly due to limits on legal authority. Not surprisingly, as with data privacy, states have stepped into the vacuum created by federal inaction with AI regulations of their own. Typically, states acting as laboratories is a good thing, allowing experimentation and competition to hone the efficiency and fit of regulatory regimes to different situations. But when the subject of regulation is interstate (and in this case global) by nature, a patchwork of state regimes is far from ideal. The solution to this dilemma is often seen as a binary: allow the state patchwork to evolve for better or worse, or stop it in its tracks with a federal preemptive response. We see this as a false choice and offer two potentially better paths. The first would be for Congress to enact a national “moratorium” on state laws regulating AI. We argue that this is a superior approach because it will arrest potentially harmful regulation and the patchwork problem and alleviate pressure on Congress to pass premature AI laws merely to prevent the states from acting. The second would be to honor choice-of-law provisions in AI-related contracts, thereby fostering competition among firms and states to provide efficient AI regulation. Borrowing from the ideas of Larry Ribstein and various coauthors, we argue that firms would compete for consumers by choosing to be regulated by the regime that maximized their profits, and states would compete to enact efficient laws. In sum, we think the current rush to regulate AI, whether at the state or federal level, is premature. Regulators have existing tools to address consumer harms. The problem is that our federal system, just like nature, abhors a vacuum, and states are filling it with a patchwork of potentially onerous and inconsistent AI requirements. The pressure to prevent state action, in turn, may force Congress’ hand into an ill-considered and hasty response that is little better than the states’ alternative. We see our hybrid approaches as a way out of this dilemma.

Passador on The AI Act’s Silent Impact on Corporate Roles

Maria Lucia Passador (Bocconi U Law) has posted “The AI Act’s Silent Impact on Corporate Roles” on SSRN. Here is the abstract:

The European Union’s Artificial Intelligence Act (AI Act) introduces a profound shift in corporate governance and regulatory compliance, directly impacting directors, board secretaries, compliance officers, in-house counsel, and corporate lawyers. These professionals now face expanded responsibilities in ensuring AI transparency, regulatory compliance, and risk management.

This paper examines the AI Act’s implications for these corporate roles, highlighting the evolving regulatory expectations and the increasing intersection between AI governance, liability frameworks, and corporate strategy. As AI systems become integral to business operations, these figures must navigate complex legal and compliance challenges, ensuring adherence to AI-specific regulatory mandates while aligning corporate policies with broader governance principles. Board secretaries must integrate AI oversight into governance structures, ensuring board-level awareness of AI risks and compliance obligations. Compliance officers must enforce risk management systems, conduct AI impact assessments, and oversee regulatory reporting. In-house counsel must navigate liability allocation, contractual safeguards, and cross-border compliance, while corporate lawyers play a pivotal role in advising on fiduciary duties, investor disclosures, and AI-driven legal risks. Hence, with stringent obligations on high-risk AI systems, post-market monitoring, and human oversight, the AI Act demands a proactive legal strategy.

Since the AI Act has extraterritorial reach, it mandates compliance for providers and deployers of AI systems whose outputs are utilized within the EU, thereby extending regulatory obligations to non-EU entities. Consequently, its influence transcends European boundaries, shaping international AI governance and impacting businesses and legal professionals across the globe.

This analysis focuses on the structural impact of the AI Act, rather than its granular requirements, offering strategic insights into the necessary adaptations for corporate and legal advisory functions. It examines broader regulatory trends influencing AI oversight, identifies potential challenges in enforcement, and provides a roadmap for corporate professionals to mitigate AI-related risks, align governance frameworks with regulatory mandates, and ensure AI adoption remains legally sound and ethically responsible. Ultimately, it presents a forward-looking perspective on the evolving role of legal and compliance professionals within the AI regulatory landscape.

Lobel on Do We Need to Know What Is Artificial? Unpacking Disclosure & Generating Trust in an Era of Algorithmic Action

Orly Lobel (U San Diego Law) has posted “Do We Need to Know What Is Artificial? Unpacking Disclosure & Generating Trust in an Era of Algorithmic Action” (Presented at “Dynamics of Generative AI” symposium, March 22, 2024) on SSRN. Here is the abstract:

Should users have the right to know when they are chatting with a bot? Should companies providing generative AI applications be obliged to mark the generated products as AI-generated or alert users of generative chats that the responder is “merely an LLM (or a Large Language Model)”? Should citizens or consumers—patients, job applicants, tenants, students—have the right to know when a decision affecting them was made by an automated system? Should art lovers, or online browsers, have the right to know that they are viewing an AI-generated image?

As automation accelerates and AI is deployed in every sector, the question of knowing about artificiality becomes relevant in all aspects of our lives. This essay, written for the 2024 Network Law Review Symposium on the Dynamics of Generative AI, aims to unpack the question—which is in fact a set of complex questions—and to provide a richer context and analysis than the often default, absolute answer: Yes! We, the public, must always have the right to know what is artificial and what is not. The question is more complicated and layered than it may initially seem. The answer, in turn, is not as easy as some of the recent regulatory initiatives suggest in their resolute yes. The answer, instead, depends on the goals of information disclosure. Is disclosure a deontological or dignitarian good, and in turn, right, in and of itself? Or does disclosure serve a utilitarian purpose of supporting the goals of the human-machine interaction, for example, ensuring accuracy, safety, or unbiased decision-making? Does disclosure increase trust in the system, process, and results? Or does disclosure under certain circumstances hinder those very goals, for example, if knowing that a decision was made by a bot reduces the AI user’s trust and increases the likelihood that the AI user will disregard the recommendation (e.g., an AI radiology or insulin bolus system recommendation, or an AI landing device in aviation)?

The essay presents a range of contexts and regulatory requirements centered around the right to know about AI involvement. It then suggests a set of reasons for disclosure of artificiality: dignity; control; trust (including accuracy, consistency, safety, and fairness); authenticity; ownership/attribution; and aesthetic/experiential. The essay further presents recent behavioral literature on AI rationality, algorithmic aversion, and algorithmic adoration to suggest a more robust framework within which the question about disclosure rights, and their effective timing, should be answered. It then shows how labeling and marking AI-generated images is a distinct inquiry separate from disclosure of AI-generated decisions. In each of these contexts, the answers should be based on empirical evidence on how disclosures affect perception, rationality, behavior, and measurable goals of these deployed technologies.

Gavornik & Podrouzek on Towards Moral Sensitivity in AI Practice

Adrian Gavornik (Kempelen Institute Intelligent Technologies) and Juraj Podrouzek (Kempelen Institute Intelligent Technologies) have posted “Towards Moral Sensitivity in AI Practice” on SSRN. Here is the abstract:

This position paper introduces the concept of moral sensitivity into the context of AI ethics as a potential bridge between abstract ethical principles and their practical implementation. Besides this, we argue that fostering moral sensitivity among AI practitioners can help address active responsibility gaps in AI development. Building upon existing literature and empirical experience, we propose and outline several key components of moral sensitivity specific to AI practice. We present a dual research agenda for studying moral sensitivity in AI practice: a top-down approach analyzing how existing frameworks and tools foster different components of moral sensitivity, and a bottom-up empirical investigation of how the virtue of moral sensitivity manifests in AI practitioners during ethical interventions. The paper concludes by discussing key challenges in studying moral sensitivity in AI practice, including methodological difficulties in empirical assessment, sustainability of measuring moral sensitivity over time, appropriate levels of abstraction, and the complexity of team dynamics in AI development.

Frye on Robot Regulators

Brian L. Frye (U Kentucky J. David Rosenberg College Law) has posted “Robot Regulators” on SSRN. Here is the abstract:

Regulation is important because it enables the government to solve market failures. But regulating efficiently and effectively is hard because of the knowledge problem. This article observes that AI can help the government solve the knowledge problem and regulate more efficiently and effectively. It argues that the Office of Information and Regulatory Affairs (“OIRA”) should use AI not only to evaluate the likely efficiency and effectiveness of proposed regulation, but also to propose potential new regulations.

Mantegna on ARTificial: Why Copyright Is Not the Right Policy Tool to Deal with Generative AI

Micaela Mantegna (Berkman Klein Center) has posted “ARTificial: Why Copyright Is Not the Right Policy Tool to Deal with Generative AI” (The Yale Law Journal Forum, April 22, 2024) on SSRN. Here is the abstract:

The rapid advancement and widespread application of Generative Artificial Intelligence (GAI) raise complex issues regarding authorship, originality, and the ethical use of copyrighted materials for AI training.

As attempts to regulate AI proliferate, this Essay proposes a taxonomy of reasons, from the perspective of creatives and society alike, that explain why copyright law is ill-equipped to handle the nuances of AI-generated content.

Originally designed to incentivize creativity, copyright doctrine has been expanded in scope to cover new technological mediums. This expansion has proven to increase the complexity and uncertainty of copyright doctrine’s application—ironically leading to the stifling of innovation. In this Essay, I warn that further attempts to expand the interpretation of copyright doctrine to accommodate the particularities of GAI might well worsen that problem, all while failing to fulfill copyright’s stated goal of protecting creators’ rights to consent, attribution, and compensation.

Moreover, I argue that, in that expansion, there is the peril of overreaching copyright laws that will negatively impact society and the development of ethical AI. This Essay explores the philosophical, legal, and practical dimensions of these challenges in four parts.

Goodyear on Artificial Infringement

Michael Goodyear (New York U Law) has posted “Artificial Infringement” (UC Law Journal, Forthcoming) on SSRN. Here is the abstract:

Generative AI is changing the way we do everything from legal research to artistic creation. This is possible through recent advances in machine learning that allow AI systems to program themselves. With greater AI capacity, however, comes increasingly unpredictable outputs. AI systems will often generate an output the user and the developer never considered. Sometimes, these unforeseen outputs can infringe others’ copyrights in creative works. In the past two years, copyright law has become one of the leading legal and policy battlegrounds for generative AI. Yet the question of who should be liable when AI systems infringe has barely been addressed.

By examining the historical and doctrinal response of copyright law to new technologies, this Article offers a new analytical framework for determining liability for what it terms artificial infringement, or infringing outputs created by generative AI systems. Time and again, new technologies have posed challenges to existing copyright law, straining its capacity to both protect authors’ rights to incentivize new creative works and provide public access to those works. Courts and Congress have been able to maintain this balance by using a variety of doctrinal tools, including fair use, compulsory licensing, and secondary liability. One undertheorized tool, however, is the refinement of the copyright infringement claim. Courts introduced the “volition or causation” requirement to balance copyright in response to the rise of complex machine-generated infringements.

This Article proposes that the AI system should be held directly liable for artificial infringement because it caused the infringing expression to occur. By making the AI system the direct infringer, courts can remove copyright law from strict liability and instead utilize and refine secondary liability doctrines to conduct a more nuanced, fault-based analysis of user and developer liability for AI-generated infringements. Together with the fair use doctrine, this conceptualization of the AI system as the direct infringer and users and developers as potentially secondarily liable provides a more comprehensive resolution to the existential infringement battles between copyright owners and AI while maintaining a balance between copyright’s competing policy goals.

Garrett on Artificial Intelligence and Procedural Due Process

Brandon L. Garrett (Duke U Law) has posted “Artificial Intelligence and Procedural Due Process” on SSRN. Here is the abstract:

Artificial intelligence (AI) violates procedural due process rights if the government uses it to deprive people of life, liberty, or property without adequate notice or an opportunity to be heard. A wide range of government agencies deploy AI systems, including in courts, law enforcement, public benefits administration, and national security. If the government refused to disclose the reasons why it denied a person bail, public benefits, or immigration status, there would be substantial due process concerns. If the government delegates such tasks to an AI system, the due process analysis does not change. As in any other setting, we still need to ask whether a person received adequate notice and an opportunity to be heard. And further, where applicable, we need to ask whether the risk of error and the costs to rights justify not using interpretable and adequately tested AI.

Nor is it necessary for AI or other automated systems to operate in a “black box” manner without providing people with notice or a way to meaningfully contest decisions. There is a ready alternative: a “glass box,” or interpretable, AI system presents its results so that users know what factors it relied on, what weight it gave to each, and the strengths and limitations of the association or prediction made. Whether it is a criminal investigation or a public benefits eligibility determination, interpretable AI can ensure that people have notice and can challenge any error, using the procedures available. And such a system can be more readily checked for errors. Due process demands a greater opportunity to contest government decisions that raise greater reliability concerns. We need to know how reliably an AI system performs, under realistic conditions, to assess the risk of error.

Longstanding due process protections and well-developed interpretable AI approaches can ensure that AI systems safeguard due process rights. Conversely, due process rights have little meaning if the government uses “black box” systems that are not fully interpretable or fully tested for reliability, and as a result, cannot comply with procedural due process requirements. So far, there has been little government self-regulation of AI. In response, judges have begun to enforce existing due process rights in AI and other automated decisionmaking settings. As judges consider due process challenges to AI, they should consider the interpretability and the reliability of AI systems. Similarly, as lawmakers and regulators examine government use of AI systems, they should ensure safeguards, including interpretability and reliability, to protect our due process rights in an increasingly AI-dominated world.