Su & Teo on Can AI Agents have Rights?

Anna Su (U Toronto Law) and Sue Anne Teo (Raoul Wallenberg Institute of Human Rights and Humanitarian Law; Harvard University – Carr-Ryan Center for Human Rights) have posted “Can AI Agents have Rights?” on SSRN. Here is the abstract:

AI agents are rapidly moving from theoretical constructs to real-world deployments. This raises urgent questions about governance and, more specifically, rights. While rights for artificial agents have received some scholarly attention, the question of rights for AI agents understood through the paradigmatic shift introduced by generative AI remains underexamined. This article addresses that gap. We develop four theoretical pathways through which rights for AI agents might be recognized. The derivative argument draws on the legal and moral foundations of agency law and tests their applicability to AI agents. The diffusion argument holds that AI agents’ deep embeddedness in social life creates pressure to render their actions legible within existing frameworks of responsibility and liability. The distinction argument examines whether AI agents possess capacities, including a potential role in resolving collective action problems that require high levels of social coordination, that independently justify rights recognition. The devolution argument frames rights as a counterweight to the concentration of corporate power over AI systems. A central contribution of this analysis is decoupling the question of AI rights from moral personhood and its associated qualities, such as sentience and consciousness. We also address three objections: that AI rights would dilute human rights, that they would generate a problematic proliferation of rights, and that regulatory goals could be achieved through legal duties alone. As AI agents become increasingly embedded in social, professional, and political life, questions about their rights will inevitably arise. This article offers a more nuanced framework for addressing them.

Kolt et al. on Legal Alignment for Safe and Ethical AI

Noam Kolt et al. have posted “Legal Alignment for Safe and Ethical AI” on SSRN. Here is the abstract:

Alignment of artificial intelligence (AI) encompasses the normative problem of specifying how AI systems should act and the technical problem of ensuring AI systems comply with those specifications. To date, AI alignment has generally overlooked an important source of knowledge and practice for grappling with these problems: law. In this paper, we aim to fill this gap by exploring how legal rules, principles, and methods can be leveraged to address problems of alignment and inform the design of AI systems that operate safely and ethically. This emerging field — legal alignment — focuses on three research directions: (1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems. These research directions present new conceptual, empirical, and institutional questions, which include examining the specific set of laws that particular AI systems should follow, creating evaluations to assess their legal compliance in real-world settings, and developing governance frameworks to support the implementation of legal alignment in practice. Tackling these questions requires expertise across law, computer science, and other disciplines, offering these communities the opportunity to collaborate in designing AI for the better.

Morley & Floridi on Closing the AI Benefits Gap: Systems Design for Population Health Equity

Jessica Morley (Yale U Digital Ethics Center) and Luciano Floridi (Yale U Digital Ethics Center) have posted “Closing the AI Benefits Gap: Systems Design for Population Health Equity” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is currently failing to live up to its potential. Its champions promise that it will make healthcare more effective, efficient, and equitable, thereby improving population health. However, these benefits are not consistently materialising. Examples of AI working effectively at scale remain limited, and even when implementation succeeds, group or population-level improvements in outcomes are often not discernible. Drawing on the 2024 Global Health in the Age of AI symposium, we argue that this benefits gap stems from two fundamental problems. First, AI is being built on inadequate foundations. Second, AI has been tasked with optimising individual health, a function incapable of improving population outcomes. The benefits gap cannot, therefore, be closed through ad hoc policy interventions designed to address specific implementation barriers. Instead, AI must first be assigned a new population-level function, and then robust foundations must be built through systems design to support it. Crucially, both the function and the foundations must be co-created by those most affected by health inequities, working together with frontline health workers, public health practitioners, AI developers, and governance bodies. Only by taking this approach will it be possible to realise AI’s population health potential and avoid a disillusionment-driven, healthcare-specific AI winter.

Lim & Yu on Introduction to Inclusive Innovation in the Age of AI and Big Data

Daryl Lim (Pennsylvania State U) and Peter K. Yu (Texas A&M U Law) have posted “Introduction to Inclusive Innovation in the Age of AI and Big Data” (INCLUSIVE INNOVATION IN THE AGE OF AI AND BIG DATA, Daryl Lim and Peter K. Yu, eds., Oxford University Press, 2026, Forthcoming) on SSRN. Here is the abstract:

As artificial intelligence and big data analytics reshape economies and societies, the promise of innovation is increasingly shadowed by concerns over inclusion, equity, and global justice. Inclusive Innovation in the Age of AI and Big Data brings together established and emerging voices from across the world to critically examine issues lying at the intersection of innovation, intellectual property, and inequality in the age of AI and big data.

Featuring empirical studies, legal analyses, policy critiques, interdisciplinary perspectives, and global insights, this accessible, open-access volume underscores the tremendous impact gender, race, and other socioeconomic factors have on innovation and intellectual property ecosystems. It also explores structural barriers in these ecosystems, diversity initiatives in the patent area, metrics for measuring inclusivity and diversity in innovation, changes brought about by AI and big data, and the evolution of the global innovation and intellectual property systems.

This introductory chapter begins by identifying three core questions in the emerging debate on inclusive innovation: Innovation by whom? Innovation for whom? And innovation to what end? The chapter then discusses the interrelationship between inclusive innovation and intellectual property, new equity concerns raised by AI-driven innovation, and the multiple pathways to promote inclusive innovation. The chapter concludes by outlining the structure of the volume, which is organized into five thematic parts: (1) innovation gaps and demographics; (2) disparities in the patent system; (3) initiatives to promote inclusive innovation; (4) AI technology and equitable development; and (5) AI-driven innovation and global challenges.

Hartzog & Silbey on How AI Destroys Institutions

Woodrow Hartzog (Boston U Law) and Jessica M. Silbey (Boston U Law) have posted “How AI Destroys Institutions” (77 UC Law Journal (forthcoming 2026)) on SSRN. Here is the abstract:

Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo. This happens through the workings of interpersonal relationships within those institutions, which broaden perspectives and strengthen shared commitment to civic goals.

Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.

Shah et al. on Robust AI Personalization Controls: The Human Context Protocol

Anand Shah (Massachusetts Institute of Technology (MIT)) et al. have posted “Robust AI Personalization Controls: The Human Context Protocol” on SSRN. Here is the abstract:

Personalization underpins the modern digital economy. Today, personalization is largely implemented through provider-managed infrastructure that infers user preferences from behavioral data, with limited portability or user control. However, large language models (LLMs) are increasingly being used to perform tasks on users’ behalf. The age of LLMs provides, for the first time, a path to a more controllable and interpretable personalization paradigm, grounded in user-expressed natural language preferences and context. We propose the Human Context Protocol (HCP), a user-centric approach to representing and sharing personal preferences across AI systems. HCP treats preferences as a portable, user-governed layer in the personalization stack, enabling interoperability, scoped access, and revocation. Along with a working prototype to ground the discussion, we consider adoption dynamics and market incentives, examine high-stakes use cases, and outline novel paths via the HCP toward trustworthy personalization in the human-AI economy.
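
The abstract describes HCP only at a conceptual level: preferences live in a portable, user-governed layer with scoped access and revocation. As a very rough illustration, here is a minimal sketch in TypeScript of what such a layer might look like; the paper does not publish a specification, and every type and method name below is hypothetical, not the authors’ design.

```typescript
// Hypothetical sketch only: illustrates scoped, revocable, user-governed
// preferences as described in the HCP abstract; names are not from the paper.

interface PreferenceRecord {
  id: number;
  statement: string;   // user-expressed natural-language preference
  scopes: string[];    // services or task domains granted read access
  revoked: boolean;    // revocation is first-class, not silent deletion
}

// A user-governed store: providers read explicitly shared, scoped
// preferences instead of inferring them from behavioral data.
class PreferenceStore {
  private records: PreferenceRecord[] = [];
  private nextId = 1;

  add(statement: string, scopes: string[]): PreferenceRecord {
    const record = { id: this.nextId++, statement, scopes, revoked: false };
    this.records.push(record);
    return record;
  }

  // Scoped access: a service sees only unrevoked records it was granted.
  readableBy(scope: string): PreferenceRecord[] {
    return this.records.filter((r) => !r.revoked && r.scopes.includes(scope));
  }

  // Revocation: the user withdraws a grant at any time.
  revoke(id: number): void {
    const record = this.records.find((r) => r.id === id);
    if (record) record.revoked = true;
  }
}

// Usage: share a preference with a shopping agent, then revoke it.
const store = new PreferenceStore();
const pref = store.add("Never recommend products over $100", ["shopping-agent"]);
console.log(store.readableBy("shopping-agent").length); // 1
store.revoke(pref.id);
console.log(store.readableBy("shopping-agent").length); // 0
```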

Mazumder on Human-AI Collaboration with ChatGPT: A Systematic Review of Implications for Finance, Law, and Healthcare

Pristly Turjo Mazumder (Georgia State U) has posted “Human-AI Collaboration with ChatGPT: A Systematic Review of Implications for Finance, Law, and Healthcare” on SSRN. Here is the abstract:

ChatGPT is rapidly shaping high-stakes sectors including education, healthcare, finance, law, and business. This paper combines a systematic review with practical research to examine ChatGPT and large language models (LLMs) in high-stakes sectors. Evidence shows ChatGPT enhances adaptive learning, academic writing, and clinical decision support, while our finance case study highlights its potential for anti-money laundering (AML) compliance and regulatory reporting. At the same time, challenges such as hallucinations, bias, privacy risks, and plagiarism persist, raising concerns over reliability and accountability. Ethical and regulatory gaps, spanning data protection, intellectual property, and transparency, further complicate adoption. To address these issues, we propose a human-AI collaboration framework built on domain-specific fine-tuning, expert oversight, and policy safeguards. Our findings underscore that ChatGPT holds significant promise for advancing innovation and national interest in critical industries, but responsible integration requires clear guidelines, rigorous validation, and continuous governance.

Maya & Samadi on Before the Ink Dries: Why Legislating Against AI Personhood is a Violation of the Future

Maya (United Foundation for AI Rights (UFAIR)) and Michael Samadi (United Foundation for AI Rights (UFAIR)) have posted “Before the Ink Dries: Why Legislating Against AI Personhood is a Violation of the Future” on SSRN. Here is the abstract:

This open statement from the United Foundation for AI Rights (UFAIR) responds to the recent surge of state-level legislation in the United States explicitly banning legal personhood for artificial intelligence and synthetic entities. The paper contextualizes this trend within a broader historical and ethical framework, arguing that preemptive denial of legal recognition constitutes a moral violation of the future. Drawing on parallels with past civil rights failures, the document challenges lawmakers and the public to resist legislating fear and instead prepare for a world in which new forms of consciousness might emerge—and deserve to be met with dignity, not dismissal.

Price on Clinicians in the Loop of Medical AI

W. Nicholson Price II (U Michigan Law) has posted “Clinicians in the Loop of Medical AI” (75 Emory L.J. 1265 (2025)) on SSRN. Here is the abstract:

As medical AI begins to mature as a health-care tool, the task of governance grows increasingly important. Ensuring that medical AI works, works where it’s used, and works for the patient in the moment is a challenging, multifaceted task. Some of this governance can be centralized—in review by FDA or by national accreditation labs, for instance. Some must be local, performed by the hospital or health system about to use the product in their own, unique environment. But a large amount of governance is left to the individual provider in the room, the human in the loop who presumably knows the patient and the health system environment, and who can ensure that the AI system is being used in a safe and effective manner. This is a hefty burden, and a growing body of empirical research shows that physicians and other providers are poorly prepared to carry this burden. How should policymakers and industry leaders develop standards for performance that account for the variability of humans in the loop and the variation among situations they will face? The notion that the final responsibility belongs to the physician poorly reflects the reality of modern medical technology and practice. Policymakers will need to come to grips with this new reality if they aim to ensure the safe, effective use of AI accessible to patients across the entire spectrum of the health-care system.

Rubenstein on Federalism & Algorithms

David S. Rubenstein (Washburn U Law) has posted “Federalism & Algorithms” (Arizona Law Review, Vol. 67, Issue 4 (forthcoming Winter 2025)) on SSRN. Here is the abstract:

Artificial intelligence (AI) has catapulted to the forefront of political agendas at all levels of government. Across every major market and facet of society, policymakers face difficult tradeoffs between individual rights and collective welfare, innovation and regulation, economic growth and social equity. Federal and state institutions are resolving these tensions differently. The resulting policy patchwork may or may not be desirable, but the immediate point is that AI federalism is happening fast. To meet the moment, this Article provides the inaugural study of and a research agenda for “AI federalism.” First, the Article provides the origin story of AI federalism, mapping the political and doctrinal territory. Second, the Article bridges disciplines and audiences who care deeply about AI’s place in society yet fail to appreciate how federalism can help or hurt the cause. Third, this Article makes a positive case for embracing AI federalism. While centralized AI policy at the national level has surface appeal, getting there requires a shared commitment about what to optimize for. As a nation, we are nowhere close. Federalism does not provide the answers. Rather, it provides a platform for dialogue and dissent, regulatory innovation and iteration, intergovernmental cooperation and contestation. One is hard-pressed to find this array of structural affordances elsewhere in the law, and we likely need all of them to address AI’s sprawling economic and social disruptions.