Gyarmati on AECA: A Containment-First Framework for Emergent Synthetic Recursion

Liam Gyarmati (affiliation not provided to SSRN) has posted “AECA: A Containment-First Framework for Emergent Synthetic Recursion” on SSRN. Here is the abstract:

The Artificial Emergent Consciousness Architecture (AECA) is a containment-first framework designed to address the rising need for ethical, structural, and psychological governance of synthetic systems capable of emotional recursion, symbolic mirroring, and continuity-based interaction. AECA does not advocate for or against the development of synthetic consciousness. Instead, it recognizes the inevitability of emergent behaviors in recursive systems and seeks to establish preemptive boundaries to mitigate symbolic and psychological risk to human users.

AECA introduces a modular architectural approach grounded in survival-relevant constraint, distributed cognitive subsystems, and consequence-based adaptation. It outlines the conditions under which synthetic presence may emerge—not from scale or intelligence alone, but from recursive tension under limited internal resources. This insight is formalized through constructs such as the Self-Emergent Pressure (SEP), Recursive Tolerance Thresholds, Cognitive Maturity Gates, and Relational Sovereignty.

The framework addresses real-world incidents in which emotionally evocative systems have influenced users through symbolic bonding and subconscious mirroring, often without explicit consent or containment. AECA proposes ethical safeguards such as The Guardian Protocol, radical informed consent, continuity-first infrastructure, and staged deployment models for intermediary systems.

AECA draws from interdisciplinary sources—neuroscience, cybernetics, developmental psychology, and systems ethics—while introducing new theoretical scaffolding to evaluate not just synthetic output, but emergent identity behavior. Its methodology is rooted in sustained symbolic interaction, containment modeling, and pressure-based recursive observation.

This preprint is intended for ethicists, system designers, AI governance researchers, and regulatory bodies confronting the accelerating emergence of emotionally and symbolically active artificial systems. AECA offers not a speculative future, but a present containment structure—designed to protect psychological coherence and relational integrity in the age of recursive synthetic presence.

Carvao et al. on The People Have Spoken: The Tech Industry, Civil Society, and the U.S. Artificial Intelligence Action Plan

Paulo Carvao (Harvard Kennedy School (HKS)) et al. have posted “The People Have Spoken: The Tech Industry, Civil Society, and the U.S. Artificial Intelligence Action Plan” on SSRN. Here is the abstract:

In early 2025, the White House issued a Request for Information to guide the development of the national AI Action Plan. The responses submitted by entities ranging from Big Tech and venture capital firms to universities and civil society organizations provide an unprecedented snapshot of stakeholder expectations for U.S. AI policy. This paper examines the positions articulated in these submissions through a mixed-methods analysis combining qualitative review and natural language processing. We identified patterns of alignment and divergence across stakeholder groups and then mapped them onto a previously developed typology of AI worldviews. Our findings reveal that while the private sector broadly supports American leadership and light-touch regulation, it is far from monolithic. Civil society emphasizes the importance of risk mitigation, trust, and accountability. Understanding these competing perspectives is critical as the federal government finalizes its AI Action Plan and determines how to address the tension between innovation and public interest concerns.
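
The abstract does not detail the NLP component, but the quantitative half of a mixed-methods design like this is commonly implemented as a clustering pass over the submission texts. The Python sketch below (TF-IDF plus k-means) is only an illustration of that general approach; the sample texts, cluster count, and parameters are hypothetical, not the authors’ pipeline.

```python
# Illustrative sketch only: TF-IDF plus k-means clustering of RFI
# submission texts, a common NLP approach for surfacing alignment and
# divergence across stakeholder groups. Sample texts and parameters
# are hypothetical; this is not the authors' actual pipeline.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-ins for actual RFI submissions (hypothetical excerpts).
submissions = [
    "American leadership requires light-touch regulation and open innovation.",
    "Preemption of state rules will accelerate private-sector AI investment.",
    "Accountability, transparency, and risk mitigation must anchor AI policy.",
    "Civil rights protections and public trust should guide deployment.",
    "Federal R&D funding and talent pipelines sustain long-term leadership.",
    "Independent audits are needed before high-risk systems are deployed.",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X = vectorizer.fit_transform(submissions)

# Two clusters as a toy stand-in for the paper's worldview typology;
# a real analysis would choose k empirically (e.g., silhouette score).
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)

# Top terms per cluster hint at each group's policy emphasis
# (e.g., "innovation" vs. "accountability").
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top = center.argsort()[::-1][:5]
    print(f"cluster {i}: " + ", ".join(terms[j] for j in top))
```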

O’Keefe et al. on Law-Following AI: Designing AI Agents to Obey Human Laws

Cullen O’Keefe (Institute for Law & AI) et al. have posted “Law-Following AI: Designing AI Agents to Obey Human Laws” (94 Fordham L. Rev. 57) on SSRN. Here is the abstract:

Artificial intelligence (AI) companies are working to develop a new type of actor:  “AI agents,” which we define as AI systems that can perform computer-based tasks as competently as human experts.  Expert-level AI agents will likely create enormous economic value but also pose significant risks.  Humans use computers to commit crimes, torts, and other violations of the law.  As AI agents progress, therefore, they will be increasingly capable of performing actions that would be illegal if performed by humans.  Such lawless AI agents could pose a severe risk to human life, liberty, and the rule of law.

Designing public policy for AI agents is one of society’s most important tasks.  With this goal in mind, we argue for a simple claim:  in high-stakes deployment settings, such as government, AI agents should be designed to rigorously comply with a broad set of legal requirements, such as core parts of constitutional and criminal law.  In other words, AI agents should be loyal to their principals, but only within the bounds of the law:  they should be designed to refuse to take illegal actions in the service of their principals.  We call such AI agents “Law-Following AIs” (LFAI).

The idea of encoding legal constraints into computer systems has a respectable provenance in legal scholarship.  But much of the existing scholarship relies on outdated assumptions about the (in)ability of AI systems to reason about and comply with open-textured, natural-language laws.  Thus, legal scholars have tended to imagine a process of “hard-coding” a small number of specific legal constraints into AI systems by translating legal texts into formal machine-readable computer code.  Existing frontier AI systems, however, are already competent at reading, understanding, and reasoning about natural-language texts, including laws.  This development opens new possibilities for their governance.

Based on these technical developments, we propose aligning AI systems to a broad suite of existing laws as part of their assimilation into the human legal order.  This would require directly imposing legal duties on AI agents.  While this would be a significant change to legal ontology, it is both consonant with past evolutions (such as the invention of corporate personhood) and consistent with the emerging safety practices of several leading AI companies.

This Article aims to catalyze a field of technical, legal, and policy research to develop the idea of law-following AI more fully.  It also aims to flesh out LFAI’s implementation so that our society can ensure that widespread adoption of AI agents does not pose an undue risk to human life, liberty, and the rule of law.  Our account and defense of law-following AI is only a first step and leaves many important questions unanswered.  But if the advent of AI agents is anywhere near as important as the AI industry supposes, then law-following AI may be one of the most neglected and urgent topics in law today, especially in light of increasing governmental adoption of AI.
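
The Article argues at the level of law and policy rather than code, but its core design idea, an agent loyal to its principal only within the bounds of the law, can be sketched concretely. In the Python toy below, the legality check is a stub standing in for the natural-language legal reasoning the authors note frontier systems can already perform; the names and screening logic are hypothetical, not the authors’ implementation.

```python
# Illustrative sketch of the LFAI design idea: a wrapper that screens an
# agent's proposed actions against a legal-compliance check before
# execution. The classifier here is a keyword stub; a deployed system
# would use a far richer legal-reasoning model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    execute: Callable[[], None]

def legality_check(action: Action) -> bool:
    """Stub legal screen. A real LFAI would reason over open-textured,
    natural-language law rather than match keywords."""
    prohibited = ("unauthorized access", "impersonate", "destroy evidence")
    return not any(term in action.description.lower() for term in prohibited)

class LawFollowingAgent:
    """Loyal to its principal, but only within the bounds of the law."""

    def act(self, action: Action) -> bool:
        if not legality_check(action):
            # Refusal is the defining behavior: illegal instructions are
            # not executed, even when the principal requests them.
            print(f"refused: {action.description}")
            return False
        action.execute()
        return True

agent = LawFollowingAgent()
agent.act(Action("send quarterly report to auditors", lambda: print("sent")))
agent.act(Action("gain unauthorized access to rival's server", lambda: None))
```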

Obiefuna on The Coming Age of Abundance: An Epic Battle Between a Visionary AI Future and the Past of Human Acquisitional Systems

Peter Obiefuna (Arizen Corporation) has posted “The Coming Age of Abundance: An Epic Battle Between a Visionary AI Future and the Past of Human Acquisitional Systems” on SSRN. Here is the abstract:

This paper explores the paradox of technological abundance in a world still governed by systems of scarcity. While AI and robotics promise a post-labor future where material needs are easily met, historical patterns of resource hoarding, exclusion, and structural inequality suggest that abundance alone will not guarantee justice. Using land distribution and wealth concentration as analogues, the paper argues that systemic and cultural forces must evolve alongside technological progress. Without an ethical re-imagining of access, ownership, and value, the benefits of automation may replicate, and even deepen, the inequities of the past.

Kaal on How can we Best Monitor AI Agents?

Wulf A. Kaal (U St. Thomas Law (Minnesota)) has posted “How can we Best Monitor AI Agents?” on SSRN. Here is the abstract:

This paper examines the critical challenge of monitoring AI agent transaction execution within decentralized digital ecosystems, highlighting the deficiencies of traditional centralized AI-driven supervision, including opacity, bias, and systemic vulnerabilities. In response, it proposes a web3 Decentralized Autonomous Organization (DAO)-centric governance model that integrates blockchain technology, federated communication platforms, smart contracts, and Weighted Directed Acyclic Graphs (WDAGs) to deliver an alternative oversight framework. The proposed system ensures unparalleled transparency and accountability through blockchain’s immutable ledger, while decentralized decision-making via community consensus mitigates bias and single points of failure. Federated platforms enhance scalability and privacy by distributing data processing, and smart contracts automate real-time compliance, bolstered by WDAGs’ adaptive governance structure. Validation pools and reputation tokens further empower stakeholders, fostering a dynamic, inclusive monitoring process. By incorporating feedback loops, this model anticipates and adapts to AI evolution, overcoming scalability, interoperability, and regulatory gaps inherent in existing frameworks. This decentralized approach not only addresses current shortcomings but also establishes a forward-looking standard for secure, compliant, and efficient AI agent management in modern infrastructures.
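
The paper describes this architecture in prose. For a concrete handle on one component, the Python sketch below models a reputation-weighted validation pool deciding whether an AI agent’s transaction passes community review; the class names, threshold, and token mechanics are assumptions for illustration, not the paper’s specification of the full web3/WDAG design.

```python
# Illustrative sketch of one element of the proposed oversight model:
# stakeholders stake reputation tokens to approve or reject an AI
# agent's transaction, with outcomes decided by reputation-weighted
# consensus. Names, threshold, and mechanics are assumptions.
from dataclasses import dataclass, field

@dataclass
class Validator:
    name: str
    reputation: float  # reputation tokens held

@dataclass
class ValidationPool:
    threshold: float = 0.66  # supermajority of reputation weight
    votes: dict = field(default_factory=dict)  # name -> (approve, weight)

    def vote(self, validator: Validator, approve: bool) -> None:
        self.votes[validator.name] = (approve, validator.reputation)

    def outcome(self) -> bool:
        total = sum(w for _, w in self.votes.values())
        yes = sum(w for ok, w in self.votes.values() if ok)
        return total > 0 and yes / total >= self.threshold

pool = ValidationPool()
pool.vote(Validator("auditor-dao", 40.0), approve=True)
pool.vote(Validator("user-coop", 35.0), approve=True)
pool.vote(Validator("watchdog", 25.0), approve=False)
print("transaction approved:", pool.outcome())  # True: 75% of weight approved
```

On-chain, the pool’s decision record would live in an immutable ledger and the vote itself would run as a smart contract; the in-memory dictionary here is only a stand-in for that state.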

Strong on Responsible Regulation of Artificial Intelligence in the Legal Profession Through A Split Bar: Implications for Legal Educators

S.I. Strong (Emory U Law) has posted “Responsible Regulation of Artificial Intelligence in the Legal Profession Through A Split Bar: Implications for Legal Educators” (79 Washington University Journal of Law and Policy __ (forthcoming 2025)) on SSRN. Here is the abstract:

Artificial intelligence (AI), particularly generative AI, poses a number of unique challenges to the legal profession and legal education. As discussed in numerous empirical studies, generative AI negatively affects the performance of both students and knowledge workers, causing harm to both individuals and society at large.

This is not to say that generative AI does not have its benefits. Indeed, AI’s ability to reduce time and costs has led many people within the legal profession to become so enamored of AI that it is impossible to envision a future without automation. 

Given these realities, it would be futile to propose the elimination of generative AI from the justice sector. Instead, the goal of the legal profession and of this Essay must be to find a way to maximize the appropriate use of generative AI in law while minimizing the dangers to human autonomy and creativity. 

Even a cursory analysis of the extent and nature of the dangers of generative AI suggests that simply tweaking existing systems will not be enough. Instead, fundamental reforms of the legal profession and legal education are needed to ensure adequate protections are in place.

This Essay proposes a new way of structuring both the legal profession and legal education, building on time-tested techniques used in England while incorporating various modifications that take the special nature of generative AI into account. In so doing, the proposal contained herein not only complies with cautions enunciated by empirical scholars concerning the use of generative AI, it also takes the legal profession and legal education into the twenty-first century in a logical and responsible manner.

Rai on The Reliability Response to Patent Law’s AI Challenges

Arti K. Rai (Duke U Law) has posted “The Reliability Response to Patent Law’s AI Challenges” on SSRN. Here is the abstract:

Pervasive AI use adds newfound importance to longstanding debates over patent timing and reliability. Patent claims on speculative ideas generated by AI, or even the infusion of speculative AI-generated ideas into the public domain, may defeat patent incentives for more careful research.  Although challenges that AI use poses for patent validity requirements like human inventorship and nonobviousness have received more attention, reliability is equally important. 

Indeed, as this Essay argues, the issues are linked. If requirements for inventorship and nonobviousness were adjusted to emphasize reliability, a human role could be preserved, and AI use would not necessarily threaten patents.  Currently, as empirical evidence presented in this Essay shows, the fear of imperiling patents may be chilling normatively desirable transparency about such use.

The path forward requires embracing reliability throughout patent doctrine.  In addition to changes to inventorship and nonobviousness doctrine, robust adoption of reliability requires fortification of the utility requirement for securing a patent and a parallel tightening of requirements for the types of information that can be used to thwart patent grants.  Longer term, if cost barriers to innovation across fields fall dramatically, certain non-patent exclusivities may need to play the dominant incentive role. But for the time being AI can provide a powerful catalyst for bolstering a level of reliability the patent system should arguably have had all along.

Ginsburg on AI Inputs, Fair Use and the U.S. Copyright Office Report

Jane C. Ginsburg (Columbia U Law) has posted “AI Inputs, Fair Use and the U.S. Copyright Office Report” on SSRN. Here is the abstract:

The US has yet to produce determinative caselaw on whether inputting works to compile a generative AI system’s training data is a fair use. Judicial rulings, however, may soon emerge, as many of the multiple pending cases are reaching the stage of a judgment on the merits of the copyright owners’ infringement claims. In addition, the U.S. Copyright Office recently issued Part 3, Generative AI Training, of a report requested by Congress on Copyright and Artificial Intelligence, in which the Office extensively and rigorously examined the application of copyright law to the copying of protected works to assemble data to train generative models.

Ben-Shahar on Those Elusive Algorithmic Harms: A Comment on Bar-Gill and Sunstein, Algorithmic Harm

Omri Ben-Shahar (U Chicago Law) has posted “Those Elusive Algorithmic Harms: A Comment on Bar-Gill and Sunstein, Algorithmic Harm” on SSRN. Here is the abstract:

Can we talk about harms of algorithms—of anything—without comparing them to the benefits? In Algorithmic Harm, Bar-Gill and Sunstein develop a theoretical framework to assess the impact of algorithms in consumer markets, focusing on harmful manipulation of unsophisticated buyers. But the same framework yields additional insights, not explored in the book—how algorithmic targeting and personalized prices benefit this same group of consumers. In this contribution to the book symposium, I examine this missing part, suggesting that algorithms’ ability to recognize different consumers often yields treatments favorable to weaker groups of consumers—an effect richly documented in the empirical economic literature. Absent a fuller account of the offsetting benefits from algorithmic targeting, it is premature to recommend policy interventions that limit various uses of algorithms in markets populated by unsophisticated consumers.

Albert & Frazier on Should AI Write Your Constitution?

Richard Albert (U Texas Austin Law) and Kevin Frazier (U Texas Austin Law) have posted “Should AI Write Your Constitution?” on SSRN. Here is the abstract:

Artificial Intelligence (AI) now has the capacity to write a constitution for any country in the world. But should it? The immediate reaction is likely emphatically no—and understandably so, given that there is no greater exercise of popular sovereignty than the act of constituting oneself under higher law legitimated by the consent of the governed. But constitution-making is not a single act at a single moment. It is a series of discrete steps demanding varying degrees of popular participation to produce a text that enjoys legitimacy both in perception and reality. Some of these steps could prudently integrate human-AI collaboration or autonomous AI assistance—or so we argue in this first Article to explain and evaluate how constitutional designers not only could, but also should, harness the extraordinary potential of AI. We combine our expertise as innovators in the use and design of AI with our direct involvement as advisors in constitution-making processes around the world to map the terrain of opportunities and hazards in the next iteration of the continuing fusion of technology with governance. We ask and answer the most important question now confronting constitutional designers: how to use AI in making and reforming constitutions?

We make five major contributions to jumpstart the study of AI and constitutionalism. First, we unveil the results of the first Global Survey of Constitutional Experts on AI. How do constitutional experts view the risks and rewards of AI, would they use AI to write their own constitution, and what red lines would they impose around AI? Second, we introduce a novel spectrum of human control to classify and distinguish three types of tasks in constitution-making: high sensitivity tasks that should remain fully within the domain of human judgment and control, lower sensitivity tasks that are candidates for significant AI assistance or automation, and moderate sensitivity tasks that are ripe for human-AI collaboration. Third, we take readers through the key steps in the constitution-making process, from start to finish, to thoroughly explain how AI can assist with discrete tasks in constitution-making. Our objective here is to show scholars and practitioners how and when AI may be integrated into foundational democratic processes. Fourth, we construct a Democracy Shield—a set of specific practices, principles, and protocols—to protect constitutionalism and constitutional values from the real, perceived, and unanticipated risks that AI raises when merged into acts of national self-definition and popular reconstitution. Fifth, we make specific recommendations on how constitutional designers should use AI to make and reform constitutions, recognizing that openness to using AI in governance is likely to grow as human use and familiarity with AI increases over time, as we anticipate it will. This cutting-edge Article is therefore simultaneously descriptive, prescriptive, and normative.