Hrdy on Trade Secrets and Artificial Intelligence

Camilla Alexandra Hrdy (Rutgers) has posted “Trade Secrets and Artificial Intelligence” (Forthcoming in Elgar Concise Encyclopedia of Artificial Intelligence and the Law (Edward Elgar, Ryan Abbott & Elizabeth Rothman eds., 2026)) on SSRN. Here is the abstract:

Companies create, collect, and manage significant amounts of economically valuable information. Some of this information is deliberately kept secret and can be protected under trade secret law. Trade secret laws protect certain forms of secret and economically valuable information against improper use or disclosure by others. Artificial intelligence (AI) raises many challenging issues for trade secret law. This entry identifies some of the major issues and what commentators have said about them: (1) protecting AI as a trade secret, (2) the difference between closed-source and open-source AI and the trade secrecy implications, (3) risks posed by generative AI to existing trade secrets, (4) whether AI poses risks to companies’ trade secrets, (5) whether AI-generated outputs can be protected as trade secrets, and (6) whether trade secrecy stands in the way of transparency goals.

Friedler & Selbst on The OMB Artificial Intelligence Memoranda

Sorelle A. Friedler (Haverford College) and Andrew D. Selbst (UCLA Law) have posted “The OMB Artificial Intelligence Memoranda” (Forthcoming in Berkeley Technology Law Journal (2025)) on SSRN. Here is the abstract:

Under the Biden and Trump administrations, the Office of Management and Budget issued two memoranda on the use of artificial intelligence (AI) by the federal government. The memos set out minimum required risk management practices and associated governance structures that must be in place within federal government agencies before AI can be used. This Article first traces the history of the OMB AI memos, explaining their shared origin in a decade of advocacy within civil society, industry, and academia that led to the creation of the Blueprint for an AI Bill of Rights by the Biden Administration’s Office of Science and Technology Policy, which then fed directly into the Biden AI Memo, before it was replaced by the Trump administration’s version. 

The Article then makes two arguments about the significance of these memos. First, the lineage of the memos reveals a concern with the practical implementation of minimum practices and safeguards in order to protect civil rights. Perhaps surprisingly, while the Trump administration’s replacement reflects the updated priorities of the new administration, it keeps much of the structure and substance of the original memo, including some of the civil rights orientation and the requirement that an agency must meet the minimum practices or cease using the AI. Second, these memos serve an important but rarely recognized regulatory role within the government as what we call “intermediate instruments.” By describing requirements at a level of specificity that makes them actionable while retaining a level of generality that makes them applicable across many agencies and use cases, these memos become necessary governance tools that bridge the principles expressed in executive orders and the day-to-day practice of agencies. Such intermediate instruments are not often recognized as important in themselves, but the Article argues that they are worthy of independent recognition because they are likely widely used in oversight schemes of distributed bureaucratic structures.

Coglianese & Crum on On Leashing (and Unleashing) AI Innovation

Cary Coglianese (U Pennsylvania Carey Law) and Colton R. Crum (U Notre Dame) have posted “On Leashing (and Unleashing) AI Innovation” on SSRN. Here is the abstract:

The way that analysts and policymakers conceive of regulatory tools can color their decisions about how to govern artificial intelligence. To date, policy discourse surrounding the governance of artificial intelligence (AI) has been predominantly framed around a debate over whether and how to impose “guardrails” on the development and deployment of AI. Regulation conceived as guardrails suggests that standard-setters should be seeking fixed rules and mandates to be imposed on AI firms or placed on certain uses of AI. Such a vision, though, risks creating policies that impose undue barriers on innovation that benefits society. We propose an alternative way of conceiving regulation, one that allows AI innovation to flourish while still, importantly, assuring it remains under attentive human oversight. We urge replacing calls for “guardrails” with calls for “leashes.” Building on an analogy of AI to animal intelligence, we draw out implications for an alternative conception of AI governance that demands that AI developers and users seek to identify risks and develop robust internal practices to manage them. Just as pet owners are expected by law in many jurisdictions to keep their dogs on leashes when walking them in public spaces, we argue in favor of management-based regulations that direct firms to keep a grip on their AI tools by using effective leashing strategies. We explain how conceiving AI governance in terms of leashes holds virtues that governance as guardrails does not. Leashes, after all, are flexible and adaptable. Just as with a physical leash used to take a dog for a walk, AI leashes allow for technological innovation to meander and explore new terrain. But leashes nevertheless keep AI tethered to humans who can restrain the technology when needed to avoid unacceptable risks to others. Moreover, because AI governance conceived as leashes does not presuppose some predetermined path or roadway that innovation must stay within, it better accommodates the rapid technological changes that are emblematic of the modern era of AI innovation, including the emergence of agentic forms of AI. Paradoxically, by leashing AI through responsible human management oversight, rather than trying futilely to keep it within some fixed guardrails, AI governance can in a broader sense help unleash technological innovation so that it produces positive value for society.

Srivastava on The Philosophical Nature of Corporations: Examining Corporate Personality and Liability in the Context of Artificial Intelligence

Yashraj Srivastava (Amity U) has posted “The Philosophical Nature of Corporations: Examining Corporate Personality and Liability in the Context of Artificial Intelligence” on SSRN. Here is the abstract:

Identity plays a major role in philosophy in determining the recognition of an entity, and when we talk about identity, we need to determine in what capacity it affects the legal personhood of that entity. The corporate legal personality of an entity has been well defined over the years by courts of law; hence the aspects of philosophical identity attach to corporate bodies through their recognition as legal persons. Two major aspects determine the philosophical grounds of a corporation: first, the personality of the corporation, and second, the liability it holds with respect to that personality. With the advent of artificial intelligence (AI), major legal and social changes are on offer, and even corporations are wrapped in the garb of this change. The duality of corporations (personality and liability) is also subject to change with the arrival of AI. There are two dimensions in which the questions of personality and liability will be inquired into: first, AI-driven decisions by a corporation, and second, companies that provide services featuring AI. Philosophical inquiry along these two dimensions will illuminate the nature of corporate legal personality in this new age of AI. Once the legal personality of the corporation is determined (as it already has been by various courts of law around the world), there is a need to trace the dimensions and range of its various rights, duties, liabilities, powers, immunities, and so on through a Hohfeldian lens. This will determine the jurisprudential essence of corporations that are subject to AI. In order to dive into the philosophical depths, we need to ascertain the deontological, utilitarian, and ethical implications of AI-driven entities in the form of corporations. This will reveal the effects of AI-driven corporations from a sociological and moral angle, which will eventually give us a better hold in analyzing them as legal entities.

Grimmelmann et al. on Generative Misinterpretation

James Grimmelmann (Cornell Law) et al. have posted “Generative Misinterpretation” (Harvard Journal on Legislation, Vol. 63.1 (forthcoming)) on SSRN. Here is the abstract:

In a series of provocative experiments, a loose group of scholars, lawyers, and judges has endorsed generative interpretation: asking large language models (LLMs) like ChatGPT and Claude to resolve interpretive issues from actual cases. With varying degrees of confidence, they argue that LLMs are (or will soon be) able to assist, or even replace, judges in performing interpretive tasks like determining the meaning of a term in a contract or statute. A few go even further and argue for using LLMs to decide entire cases and to generate opinions supporting those decisions.

We respectfully dissent. In this Article, we show that LLMs are not yet fit for purpose for use in judicial chambers. Generative interpretation, like all empirical methods, must bridge two gaps to be useful and legitimate. The first is a reliability gap: are its methods consistent and reproducible enough to be trusted in high-stakes, real-world settings? Unfortunately, as we show, LLM proponents’ experimental results are brittle and frequently arbitrary. The second is an epistemic gap: do these methods measure what they purport to? Here, LLM proponents have pointed to (1) LLMs’ training processes on large datasets, (2) empirical measures of LLM outputs, (3) the rhetorical persuasiveness of those outputs, and (4) the assumed predictability of algorithmic methods. We show, however, that all of these justifications rest on unstated and faulty premises about the nature of LLMs and the nature of judging.

The superficial fluency of LLM-generated text conceals fundamental gaps between what these models are currently capable of and what legal interpretation requires to be methodologically and socially legitimate. Put simply, any human or computer can put words on a page, but it takes something more to turn those words into a legitimate act of legal interpretation. LLM proponents do not yet have a plausible story of what that “something more” comprises.

Canellas on Mo AI, Skidmore Problems: Governing in our Loper Bright Era

Marc Canellas (Maryland Office of the Public Defender) has posted “Mo AI, Skidmore Problems: Governing in our Loper Bright Era” (Journal of Law and Politics, Volume 41 (forthcoming)) on SSRN. Here is the abstract:

Chevron is dead. Skidmore is dead-lettered. Long live Loper Bright. Under Loper Bright Enterprises v. Raimondo, judges have become judicial policymakers, required to determine the single, best, and only permissible interpretation of any statute, no matter how impenetrable, and no matter whether they receive or weigh any external perspectives. As the Majority believed, Congress expects courts to handle technical statutory questions because agencies have no special competency to answer them. Despite the Majority’s praise of Skidmore at the expense of Chevron, they implicitly overruled Skidmore by rejecting the possibility that there are sometimes cases where the agency’s interpretation should be decisive. Loper Bright crowned judges as judicial policymakers, posing an incredible challenge to the future of the administrative state and federal governance of all kinds, a challenge that, as Justice Kagan’s dissent showed, is best exemplified by artificial intelligence (AI). Congress already has difficulty governing and wants agencies to make policy choices, not courts with their long history of poor technical understanding. Given that ambiguity can be found in almost any statute, any court can get to the Loper Bright step of statutory interpretation and justify its single, best meaning, which will ossify and balkanize incorrect interpretations of statutes. But the federal government is not without policy choices for its response. Congress can codify Chevron deference generally or into individual legislation. Congress and agencies can embrace soft law: instruments like standards and certifications that create expectations but are not directly enforceable. Lastly, agencies can categorize their decisions as factbound to protect them from Court interference, or reject rulemaking altogether and embrace jawboning, informal efforts by government to persuade non-government parties to take action.

Berumen on When Data Lies: Synthetic Data, AI, and the New Corporate Risk

Alfonso Berumen (Pepperdine U Graziadio Business and Management) has posted “When Data Lies: Synthetic Data, AI, and the New Corporate Risk” on SSRN. Here is the abstract:

The case of Charlie Javice and the $175 million acquisition of her startup, Frank, by JPMorgan Chase (JPMorgan) provides a critical lens through which to examine the emerging corporate risks associated with synthetic data. Javice allegedly fabricated millions of student/user/customer profiles to inflate metrics, highlighting how internally generated synthetic data can be weaponized to mislead investors and bypass due diligence efforts. This white paper explores the broader implications of the Javice case, positioning it as a cautionary example of how synthetic data, while a powerful tool for innovation, machine learning, and privacy-preserving analytics, can also be exploited for fraud. As Artificial Intelligence (AI) becomes more deeply embedded across industries, this paper offers a set of regulatory, legal, and ethical recommendations aimed at addressing the dual-use nature of synthetic data and safeguarding corporate integrity.

Soh et al. on Artificial Intelligence in the Regulatory Wonderland

Jerrold Soh (Singapore Management U Yong Pung How Law) et al. have posted “Artificial Intelligence in the Regulatory Wonderland” on SSRN. Here is the abstract:

This Chapter conducts a theoretical and practical examination of existing AI governance and regulatory models. By “regulation” we mean not only formal legal rules but also soft law, industry codes, and other regulatory modes. Using existing approaches as case studies, we comment on potential advantages and disadvantages of each model, as well as seek to extract common themes and challenges that accompany AI governance and regulation more generally. Part II sets the theoretical context by scrutinizing two key dimensions implied by the term “AI regulation” itself: (1) “AI” as an object of regulation; and (2) “regulation” itself as a subject of discussion. In Part III, which forms the bulk of this Chapter’s contribution, we visit selected checkpoints along the regulatory spectrum, specifically including self-, co-, quasi-, and direct regulation. Each model will be defined using the regulatory theory literature, exemplified via real-world legal instruments, and evaluated by applying the former to the latter. Part IV synthesizes general themes and insights emerging from Part III’s regulatory tour. It identifies how regulatory assessments of the risks and benefits of AI systems appear to differ substantively across jurisdictions. Given the challenges inherent in regulating the use of an opaque, complex, and relatively nascent technology, Part IV further identifies the potential utility of a rigorous AI testing framework.

Silva et al. on Procuring Public-Sector AI: Guidance for Local Governments

Elise Silva (U Pittsburgh) et al. have posted “Procuring Public-Sector AI: Guidance for Local Governments” on SSRN. Here is the abstract:

In this white paper written for public sector employees, we discuss the unique governance challenges posed by procured artificial intelligence (AI) systems, and provide actionable guidance on first steps that governments can take today to manage these emerging risks. Our audience for this paper is U.S. local government employees involved in procurement, IT, innovation, and related departments. Our recommendations are informed by an empirical study of local governments’ procurement practices across the United States.

Frye on Thomson Reuters v. ROSS: Brief for Amici Curiae in Support of Appellant-Defendant’s Petition for Certification Under § 1292(c)

Brian L. Frye (U Kentucky J. David Rosenberg College Law) has posted “Thomson Reuters v. ROSS: Brief for Amici Curiae in Support of Appellant-Defendant’s Petition for Certification Under § 1292(c)” on SSRN. Here is the abstract:

This is an amicus brief in support of ROSS Intelligence’s petition for interlocutory review of the district court’s order in Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., No. 1:20-CV-613-SB, 2025 WL 458520 (D. Del. Feb. 11, 2025). Plaintiffs Thomson Reuters and West allege that ROSS infringed the copyright in West’s headnotes by using them to train an AI model. The district court largely granted the plaintiffs’ motions for summary judgment, finding that at least some of West’s headnotes are protected by copyright and that ROSS’s use of West’s headnotes was not protected by the fair use doctrine. This amicus brief argues that the Third Circuit should grant interlocutory review because West’s headnotes are not copyrightable subject matter. Accordingly, this case is not an appropriate vehicle for the court to determine whether the use of copyrighted works to train an AI model is infringing or a fair use.