Lévesque on Hallucinating Deregulation

Maroussia Lévesque (Harvard Law) has posted “Hallucinating Deregulation” on SSRN. Here is the abstract:

Deregulation appears to be the dominant paradigm when it comes to AI policy. The recent U.S. AI Action Plan deems the technology “far too important to smother in bureaucracy at this early stage.” Even the EU, known for robust digital regulation, is losing steam on implementing its groundbreaking AI Act. With deregulatory narratives on the rise and comprehensive AI legislation on the back burner, one could assume that a laissez-faire approach effectively prevails. Yet nothing could be further from the truth. Beneath the surface, regulators engage in significant AI regulation. Chief among them are national security policymakers reaching deep into the AI stack, with heavy-handed intervention shaping access to the fundamental building blocks of AI systems.

While we typically confine AI regulation to endpoint applications served to users – OpenAI’s ChatGPT, or Anthropic’s Claude – regulators actually intervene at each node of the AI supply chain to restrict models, their training data, and the underlying infrastructure of data centers and computer hardware. The concept of a technology stack disaggregates these elements operating in the background of user-facing applications, assessing how each is a target of regulation. This more granular view sets the record straight on the common misconception of a deregulatory zeitgeist.

The Article’s goal is both descriptive and prescriptive. As a diagnostic tool, a stack approach brings clarity, describing precisely what is being regulated – and what isn’t. Unpacking AI into its hardware, compute, model and application components, the stack anchors the analysis in the materiality of AI’s multiple components.

As a prescriptive tool, a full-stack approach considers different regulatory options along the supply chain. Building on existing practices that already intervene on several aspects of the AI stack in pursuit of a single policy objective, this Article invites regulators to systematically consider all aspects of AI systems before settling on a regulatory strategy.

Canellas on Mo AI, Skidmore Problems: Governing in our Loper Bright Era

Marc Canellas (Maryland Office of the Public Defender) has posted “Mo AI, Skidmore Problems: Governing in our Loper Bright Era” (Journal of Law and Politics, Volume 41 (forthcoming)) on SSRN. Here is the abstract:

Chevron is dead. Skidmore is dead-lettered. Long live Loper Bright. Under Loper Bright Enterprises v. Raimondo, judges have become judicial policymakers, required to determine the single, best, and only permissible interpretation of any statute, no matter how impenetrable, and no matter whether they receive or weigh any external perspectives. As the Majority believed, Congress expects courts to handle technical statutory questions because agencies have no special competency to answer them. Despite the Majority’s praise of Skidmore at the expense of Chevron, they implicitly overruled Skidmore in rejecting the possibility that there are cases where the agency’s interpretation should be decisive. Loper Bright crowned judges as judicial policymakers, posing an incredible challenge to the future of the administrative state and federal governance of all kinds, a challenge that, as Justice Kagan’s dissent showed, is best exemplified by artificial intelligence (AI). Congress already has difficulty governing and wants agencies to make policy choices, not courts with their long history of poor technical understanding. Given that ambiguity can be found in almost any statute, any court can get to the Loper Bright step of statutory interpretation and justify its single, best meaning, which will ossify and balkanize incorrect interpretations of statutes. But the federal government is not without policy choices for its response. Congress can codify Chevron deference generally or in individual legislation. Congress and agencies can embrace soft-law instruments like standards and certifications that create expectations but are not directly enforceable. Lastly, agencies can categorize their decisions as fact-bound to protect them from Court interference, or reject rulemaking altogether and embrace jawboning: informal efforts by government to persuade non-government parties to take action.

Coglianese on On the Need for Digital Regulators

Cary Coglianese (U Pennsylvania Carey Law) has posted “On the Need for Digital Regulators” (in Research Handbook on Digital Regulatory Agencies, Martha Garcia-Murillo and Ian MacInnes eds., forthcoming) on SSRN. Here is the abstract:

The growing digital economy brings increasing recognition of the need for digital regulators. This chapter considers two senses of the term “digital regulators”: one of these refers to regulators of digital technology; the other refers to how any regulatory organization can improve its operations with the use of digital technology. Today’s economy requires digital regulators of both types. The need for regulators of digital technology grows out of perennial concerns about market failures and other implicated social values, such as privacy. This chapter sketches the rationales that in the past have justified regulating digital technology, and then it explains how market-failure justifications continue to reveal a need for regulating today’s rapidly evolving digital technologies, including artificial intelligence. The chapter then shows how the need for regulators with digital technology has been evident since the advent of the internet and has grown even more compelling today with the possibilities created by artificial intelligence. One common thread from the past through to today is the need for multiple regulators both to oversee digital technologies and to use these technologies to improve their regulatory performance.

Frye on Robot Regulators

Brian L. Frye (U Kentucky J. David Rosenberg College of Law) has posted “Robot Regulators” on SSRN. Here is the abstract:

Regulation is important because it enables the government to solve market failures. But regulating efficiently and effectively is hard because of the knowledge problem. This article observes that AI can help the government solve the knowledge problem and regulate more efficiently and effectively. It argues that the Office of Information and Regulatory Affairs (“OIRA”) should use AI not only to evaluate the likely efficiency and effectiveness of proposed regulation, but also to propose potential new regulations.

Di Porto et al. on Mining EU Consultations through AI

Fabiana Di Porto (Law and Economics) et al. have posted “Mining EU Consultations through AI” (Artificial Intelligence and Law, 2024, doi:10.1007/s10506-024-09426-6) on SSRN. Here is the abstract:

Consultations are key to gathering evidence that informs rulemaking. When analysing the feedback received, it is essential for the regulator to appropriately cluster stakeholders’ opinions, as misclustering may alter the representativeness of the positions, making some of them appear majoritarian when they might not be. The European Commission (EC)’s approach to clustering opinions in consultations lacks a standardized methodology, leading to reduced procedural transparency, while making use of computational tools only sporadically. This paper explores how natural language processing (NLP) technologies may enhance the way opinion clustering is currently conducted by the EC. We examine 830 responses to three legislative proposals (the Artificial Intelligence Act, the Digital Markets Act and the Digital Services Act) using both a lexical and a semantic approach. We find that some groups (like small and medium companies) have low similarity across all datasets and methodologies despite being clustered in one opinion group by the EC. The same happens for citizens and consumer associations in the consultation run over the DSA. These results suggest that computational tools actually help reduce misclustering of stakeholders’ opinions and consequently allow greater representativeness of the different positions expressed in consultations. They further suggest that the EC could identify a convergent methodology for all its consultations, where such tools are employed in a consistent and replicable manner rather than occasionally. Ideally, it should also explain when one methodology is preferred to another. This effort should find its way into the Better Regulation toolbox (EC 2023). Our analysis also paves the way for further research to reach a transparent and consistent methodology for group clustering.
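The lexical side of the approach the abstract describes can be sketched with a toy example. Everything below is an illustrative assumption, not the authors’ actual pipeline: the Jaccard word-overlap measure, the greedy single-link grouping rule, the 0.3 threshold, and the sample responses are all placeholders chosen to show how similarity-based clustering can split a group (here, two “SMEs”) that a regulator might lump together a priori.

```python
def lexical_similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercased token sets (a simple lexical measure)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def cluster_opinions(responses: dict[str, str], threshold: float = 0.3) -> list[set[str]]:
    """Greedy single-link clustering: a response joins the first cluster
    containing a sufficiently similar response, else starts a new one."""
    clusters: list[set[str]] = []
    for name, text in responses.items():
        for cluster in clusters:
            if any(lexical_similarity(text, responses[m]) >= threshold for m in cluster):
                cluster.add(name)
                break
        else:
            clusters.append({name})
    return clusters

# Hypothetical responses: sme_1 and ngo_1 share most of their wording and end up
# clustered together, while sme_2 stands alone despite its "SME" label.
responses = {
    "sme_1": "the compliance burden on small firms is too high",
    "sme_2": "transparency obligations should apply to all providers equally",
    "ngo_1": "the compliance burden on small firms is acceptable and necessary",
}
clusters = cluster_opinions(responses)
```

A semantic variant, as in the paper, would replace the token-set measure with similarity between sentence embeddings; the clustering logic above could stay the same.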

Chin on The Black Box Presidency

Andrew Chin (U North Carolina Law) has posted “The Black Box Presidency” on SSRN. Here is the abstract:

In February 2025, as wildfires ravaged Los Angeles, President Donald Trump threatened to withhold FEMA assistance unless California adopted voter ID laws and water deregulation policies, just one example of how executive power could weaponize administrative authority for political gain. Simultaneously, Elon Musk’s Department of Government Efficiency (DOGE) deployed artificial intelligence systems across multiple agencies to evaluate federal workers’ job justifications, with the stated goal of replacing “the human workforce with machines.” This article explores how these converging developments (the politicization of administrative functions and the algorithmic replacement of civil servants) foreshadow a constitutional crisis through the Strategic AI Governance Engine (SAGE), a hypothetical yet plausible system that would automate statutory interpretation and policy implementation across federal agencies. While no unified system like SAGE currently exists, the Biden administration disclosed over 2,000 siloed AI applications across the federal government, from regulatory enforcement targeting to benefits eligibility determinations. These existing deployments, combined with DOGE’s aggressive workforce reduction (over 40,000 federal employees have already accepted resignation offers), create the foundation for algorithmic governance at unprecedented scale. When paired with the Supreme Court’s dismantling of Chevron deference in Loper Bright Enterprises v. Raimondo (2024) and its embrace of unitary executive theory in Seila Law LLC v. CFPB (2020), these developments create the perfect constitutional storm: a presidency empowered to centralize administrative authority through algorithmic systems that operate at “machine speed,” beyond meaningful congressional oversight or judicial review. The constitutional implications are profound. SAGE’s reinforcement learning algorithms could optimize for presidential priorities rather than statutory mandates across numerous domains, from environmental protection to immigration enforcement to healthcare access.

Cohen et al. on Provisioning Digital Tools and Systems for Government Use

Julie E. Cohen (Georgetown U Law Center) et al. have posted “Provisioning Digital Tools and Systems for Government Use” (Redesigning the Governance Stack Project at Georgetown Law) on SSRN. Here is the abstract:

This document is part of a larger project aimed at reinventing the administrative state for effective governance of the digital, information-driven economy. It explores how the administrative state can more effectively equip itself with digital tools and systems that align with and improve government’s ability to serve public values. Established approaches to digital provisioning fail in many important respects. Among others, they introduce thorny coordination problems while doing little to ensure design for broader public values; they cause obsolete and/or poorly conceived requirements to cascade through the development process for new tools and systems; they magnify the potential for technology-driven lock-in and vendor capture at scale; and they are unacceptably opaque to policymakers and the public. We trace some of these dysfunctions to the private-sector preference that underpins federal govtech provisioning and others to a top-down mode of development in which “solutions” are decreed at the outset rather than after consultation and conversation. The paper recommends a series of changes to the current policy landscape for govtech provisioning to correct these dysfunctions. One important recommendation involves rethinking the traditional “make vs. buy” dichotomy in public procurement and the underlying presumptions that have animated the dichotomy. Recentering public values and outcomes in govtech development also requires measures for ensuring the interoperability and transparency of govtech tools and systems. Another important recommendation involves reenvisioning processes for govtech development and implementation.

Pasquale & Malgieri on Gen AI and Administrative Law

Frank Pasquale (Cornell Law School; Cornell Tech) and Gianclaudio Malgieri (U Leiden Law; Free Uni Brussels) have posted “Generative AI, Explainability, and Score-Based Natural Language Processing in Benefits Administration” (J. Cross-Disciplinary Research in Computational Law (forthcoming 2024)) on SSRN. Here is the abstract:

Administrative agencies have developed computationally-assisted processes to speed benefits to persons with particularly urgent and obvious claims. One proposed extension of these programs would score claims based on the words that appear in them (and relationships between these words), identifying some sets of claims as particularly like known, meritorious claims, without understanding the meaning of any of these legal texts. This score-based natural language processing (SBNLP) may expand the range of claims categorized as urgent and obvious, but as its complexity advances, its practitioners may not be able to offer a narratively intelligible rationale for how or why it does so. At that point, practitioners may utilize the new textual affordances of generative AI to attempt to fill this explanatory gap, offering a rationale for decision that is a plausible imitation of past, human-written explanations of judgments in cases with similar sets of words in their claims.

This article explains why such generative AI should not be used to justify SBNLP decisions in this way. Due process and other core principles of administrative justice require humanly intelligible identification of the grounds for administrative action. Given that ‘next-token prediction’ is distinct from understanding a text, generative AI cannot perform such identification reliably. Moreover, given current opacity and potential bias in leading chatbots – which are based on large language models – as well as deep ethical concerns raised by the databases they are built on, there is a strong case for excluding these automated outputs from administrative decision-making. Nevertheless, SBNLP may legitimately be established parallel or external to justification-based legal proceedings for humanitarian purposes.
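The score-based natural language processing the abstract describes can be illustrated with a deliberately crude word-overlap scorer. The scoring rule, sample claims, and 0.7 cutoff below are hypothetical, chosen only to show how a claim can be flagged as resembling “known, meritorious claims” with no representation of meaning, which is precisely the explanatory gap the authors identify:

```python
def overlap_score(claim: str, reference: str) -> float:
    """Fraction of the reference claim's words that appear in the new claim.
    Purely lexical: no meaning is represented, only word co-occurrence."""
    ref = set(reference.lower().split())
    new = set(claim.lower().split())
    return len(ref & new) / len(ref) if ref else 0.0

def score_claim(claim: str, meritorious: list[str]) -> float:
    """Score a claim by its best word-overlap match among known meritorious claims."""
    return max((overlap_score(claim, m) for m in meritorious), default=0.0)

# Hypothetical corpus of claims previously judged meritorious.
meritorious = [
    "claimant is terminally ill and unable to work",
    "claimant lost housing after a documented disability",
]

score = score_claim("claimant is terminally ill and cannot work", meritorious)
# A claim scoring above an (illustrative) cutoff would be routed for
# expedited processing, without the system understanding either text.
expedite = score >= 0.7
```

Note that the score says nothing about why the claim is meritorious; a generative model asked to narrate a rationale for `expedite` would be imitating past explanations, not reporting the basis of the decision.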

Mazur & Thimmesch on Transforming Government with Augmented LLMs

Orly Mazur (SMU Law) and Adam B. Thimmesch (U Nebraska Law) have posted “Beyond ChatGPT: Transforming Government with Augmented LLMs” (Tennessee Law Review, Forthcoming) on SSRN. Here is the abstract:

The release of ChatGPT demonstrated the remarkable capabilities and the existing limitations of large language models (LLMs) and the natural language chatbots that they power. One area that is ripe for innovation using this new technology, but that has often been bypassed in mainstream discussions, is the public sector. This Article redirects attention towards this overlooked area, acknowledging the limitations of LLMs, while specifically exploring their potential to transform government operations.

The Article discusses the various technological advancements that allow for the development of tools far more refined than the general-use chatbots commonly available to the public. The Article then introduces a dual-category framework for proposing potential government AI applications: applications that improve external government operations and those that streamline internal operations. Using tax administration as a case study, the Article illustrates how generative AI, such as LLMs, can respond to well-known issues within the administration of law by substantially enhancing the quality of government communications, thereby improving operational efficiency and promoting equitable access to government services.

The Article makes several innovative, practical proposals. These include leveraging LLM-powered chatbots to manage interactions with non-government entities, strategically integrating LLMs into workplace training and customer service processes, and developing various AI tools to mitigate service disparities faced by marginalized communities. These recommendations underscore the promising potential that LLMs have in this area, despite their current shortcomings. Ultimately, however, the Article concludes that to fully harness the benefits of generative AI within the public sphere, a concerted, inclusive effort involving a broad spectrum of stakeholders is necessary. Such a collaborative effort holds the promise to redefine public service delivery in a manner that enhances the efficiency, effectiveness, and overall quality of government services.