Asay on Artificial Creators

Clark D. Asay (Brigham Young U J. Reuben Clark Law) has posted “Artificial Creators” (2 George Washington Journal of Law and Technology (forthcoming 2026)) on SSRN. Here is the abstract:

Artificial intelligence systems cannot be inventors or authors under current U.S. law. On that point, the U.S. Patent and Trademark Office and the U.S. Copyright Office agree. Yet beyond that, the two regimes sharply diverge. The USPTO has adopted a more flexible approach to AI-assisted invention, permitting extensive AI involvement so long as a human being can be said to have conceived of the claimed invention. The Copyright Office, by contrast, has taken a far more restrictive stance, effectively denying registration to works whose expressive elements are generated by AI—even where humans engage in detailed, iterative prompting and exercise some amount of creative direction.

This Essay explores the reasons for that divergence and questions whether it is justified. While copyright’s idea–expression dichotomy and independent creation requirement may appear to provide some justification for copyright law’s more restrictive approach, those doctrines do not compel the Copyright Office’s denial of copyright registration in AI-assisted works. Indeed, copyright law has long accommodated technologically mediated creativity—from photography to film—by focusing on human control and creative contribution rather than the mechanics of execution.

Drawing on patent law’s conception requirement, as well as copyright doctrines governing joint authorship and derivative works, this Essay argues that copyrightability standards should move more in patent law’s direction. Where a human meaningfully conceives of and directs the realization of a work—even if AI performs substantial expressive tasks—copyright law should recognize authorship at least to the extent of the human’s creative contribution. Failing to do so risks undermining copyright’s incentive structure and distorting the future development of creative industries in an era where AI assistance is increasingly ubiquitous.

Barnett on The Free Content Illusion

Jonathan Barnett (USC Gould Law) has posted “The Free Content Illusion” (Journal of Intellectual Property Law (2026)) on SSRN. Here is the abstract:

Peer-to-peer file sharing in the early 2000s destabilized traditional content markets and associated business models that rely on preserving control over the use of creative assets. Academics and other commentators widely argued that robust forms of intellectual property rights had been rendered largely obsolete in a digital environment of low production and distribution costs. Reflecting this view, courts expanded the fair use doctrine and generously applied safe harbors under the Digital Millennium Copyright Act, which largely immunized platforms against liability for user infringement and consistently favored content aggregators over originators. The subsequent evolution of digital markets nonetheless shows that exclusivity protections remain critical to sustaining an independently viable content economy that does not rely on philanthropic or governmental patronage. Streaming services in audio, video, and literary media restored revenue flows to content originators through contractual and technological complements to copyright protection, while content segments (notably, the news industry) that failed to deploy such mechanisms struggled economically. Contrary to prevailing views, meaningful property rights and other exclusivity protections remain essential for sustaining the production, financing, and development of creative assets in digital environments and, together with technological and contractual complements, are likely to retain this role in supporting a robust flow of original content for the artificial intelligence ecosystem.

Fagan on When Fair Use Fails: Contingent Licensing for AI Training

Frank Fagan (South Texas College Law Houston) has posted “When Fair Use Fails: Contingent Licensing for AI Training” (forthcoming, Foundation for American Innovation, 2025) on SSRN. Here is the abstract:

As content producers increasingly gate material in response to AI-driven substitution, despite no changes to fair use law, there is growing risk that socially valuable inputs may disappear from the generative AI training ecosystem. This paper proposes a narrowly tailored, contingent licensing scheme to preserve access to high-value content when market failures prevent voluntary licensing. The scheme activates only when three conditions are met: (1) the content is demonstrably valuable for training; (2) the producer is economically marginal, that is, likely to restrict or withdraw access absent compensation; and (3) voluntary licensing has failed due to high transaction costs or bargaining asymmetries. While the proposal is focused on economically marginal creators at risk of exit, it allows for future extension to inframarginal producers if systemic gating emerges (defined here as a sustained, measurable reduction in access to critical content, whether by a majority of producers or by a small set whose gating materially degrades model performance). Drawing on the model of compulsory music licensing, the fallback mechanism operates only when necessary and always includes an opt-out, offering a light-touch intervention to sustain open access without undermining innovation or core publication incentives. In this way, the proposal aims to preserve innovation conditions when asymmetric withdrawal risks distorting competition and locking in advantages for firms with early licensing deals or deep proprietary libraries. Stronger measures that compel content creators to license their works, and without an opt-out, are considered but tentatively rejected as inefficient and likely to distort functioning markets.

Haynes on Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance

Maria De Lourdes Haynes (American U Dubai) has posted “Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance” on SSRN. Here is the abstract:

The European Artificial Intelligence Act (AI Act) constitutes a landmark regulatory framework governing artificial intelligence technologies, with core principles grounded in transparency, accountability, and risk mitigation. While designed to foster innovation and safeguard fundamental rights, the Act poses considerable implementation challenges. Organisations must navigate complex compliance obligations imposed on various actors across the value chain. These requirements entail rigorous reporting, auditing, monitoring, and governance mechanisms, placing increased demands on corporate governance structures.

A defining feature of the AI Act is its extraterritorial scope, mirroring the reach of the General Data Protection Regulation (GDPR). The AI Act applies not only to entities established within the European Union but also to non-EU businesses operating or placing AI products on the EU market. Its extensive provisions, covering authorised representatives and specific duties for actors across the AI value chain, are expected to incentivise non-EU jurisdictions and corporations to align their AI development and deployment practices with EU standards. Non-compliance may lead to hefty fines and exposure to reputational damage along with an erosion of consumer trust.

The AI Act is poised to emerge as a global benchmark for AI regulation. Board-level governance bodies must reconcile innovation and business objectives with regulatory imperatives, address liability risks, and embed AI literacy into strategic management and decision-making. As the regulatory framework evolves, it reinforces the necessity of integrating multidisciplinary legal, ethical, and strategic considerations into managerial and corporate governance frameworks to navigate this dynamic environment effectively and mitigate emerging risks.

Alonso et al. on AI And Copyright “Hallucinations”: Does the Text and Data Mining Exception Really Support Generative AI Training?

Eduardo Alonso (City U London) and Nicola Lucchi (Universitat Pompeu Fabra Law) have posted “AI And Copyright ‘Hallucinations’: Does the Text and Data Mining Exception Really Support Generative AI Training?” (European Intellectual Property Review, 2025, volume 47, issue 9, pp. 515-526) on SSRN. Here is the abstract:

This article critically challenges the widespread – and, it is argued, conceptually flawed – assumption that Articles 3 and 4 of the CDSM Directive provide a lawful basis for training generative AI systems on copyright-protected content. The article describes this misinterpretation as a form of legal “hallucination”, underscoring its disconnect from the Directive’s textual, technical, and normative foundations. Designed to enable automated analytical extraction for scientific or informational purposes, the TDM exceptions do not encompass the large-scale reproduction, internalisation, and expressive re-use of works characteristic of GenAI training. Article 3 is limited to non-commercial research; Article 4’s opt-out mechanism, based on non-standardised signals, exacerbates uncertainty without ensuring transparency or fair compensation. This misclassification not only undermines core copyright incentives but also distorts the scope of EU exceptions, placing the framework in tension with the three-step test and international norms. The article argues that applying TDM rules to GenAI training introduces structural imbalances, both doctrinal and distributive, that risk entrenching platform asymmetries, weakening authorial agency, and threatening cultural diversity. Rather than relying on strained legal interpretations, a forward-looking response requires bespoke legal reforms that preserve normative coherence while addressing the specific challenges posed by synthetic content creation.

Neill et al. on A Framework for Applying Copyright Law to the Training of Textual Generative Artificial Intelligence

Arthur H. Neill (New Media Rights) et al. have posted “A Framework for Applying Copyright Law to the Training of Textual Generative Artificial Intelligence” (32 Texas Intellectual Property Law Journal 225 (2024)) on SSRN. Here is the abstract:

The rise in the popularity of consumer-facing generative artificial intelligence (“GenAI”) has created considerable confusion and consternation among some copyright owners. Copyright owners argue that GenAI’s ability to automatically generate works is made possible by large-scale direct infringement by OpenAI, Microsoft, and other major GenAI developers. This article explores the application of copyright law to the training of OpenAI’s ChatGPT, specifically focusing on the legal issues surrounding the unauthorized use of copyrighted textual works in the GenAI training process.

The large language models (“LLMs”) that drive ChatGPT and similar GenAI can summarize written works, generate movie scripts, write poetry, and compose stories nearly instantaneously. LLMs can only function in this way due to the use of vast, diverse training datasets comprising billions of websites and expansive repositories of books. These datasets are processed to derive the functionality and syntax of language, allowing the LLMs to generate new works.

This article discusses the recent lawsuits launched by high-profile authors and copyright owners against OpenAI and Microsoft, claiming direct, vicarious, and derivative infringement. Authors such as George R.R. Martin, Sarah Silverman, and Christopher Golden, along with professional organizations such as the Authors Guild, contended that their works were infringed upon to turn OpenAI into an $80 billion company.

In considering the merits of these lawsuits, we discuss the curation and content of the training datasets used in the known iterations of ChatGPT and characterize the protectability of the different works included in those datasets. We then explore whether the transitory nature of OpenAI’s training process means that only acceptable, non-infringing copies are made, and how that would undermine claims of direct infringement.

The article then looks at the applicability of current fair use precedent to textual GenAI and the various types of works used in training datasets. To do so, we apply settled caselaw and leading decisions to discuss OpenAI’s use of copyrighted works with respect to purpose and character, the nature of the original work, the amount and substantiality of the works used, and the impact of ChatGPT on the market value of the works. We pay special attention to other innovative technologies that rely on a fair use defense to draw analogies and comparisons to GenAI.

Finally, this article considers the policy and legislation of other countries and their approach to ChatGPT and copyright. In doing so, we take policy considerations into account to argue that a finding of fair use is necessary to maintain international competitiveness and to prevent an erosion of fair use in sectors outside of GenAI. The article concludes that there is substantial support for arguments that GenAI training involves only transitory, non-actionable copying and is also permissible under fair use.

Cook et al. on Social Group Bias in AI Finance

Thomas R. Cook (Federal Reserve Bank Kansas City) and Sophia Kazinnik (Stanford U) have posted “Social Group Bias in AI Finance” on SSRN. Here is the abstract:

Financial institutions increasingly rely on large language models (LLMs) for high-stakes decision-making. However, these models risk perpetuating harmful biases if deployed without careful oversight. This paper investigates racial bias in LLMs, specifically through the lens of credit decision-making tasks, operating on the premise that biases identified here are indicative of broader concerns across financial applications. We introduce a reproducible, counterfactual testing framework that evaluates how models respond to simulated mortgage applicants identical in all attributes except race. Our results reveal significant race-based discrepancies, exceeding historically observed bias levels. Leveraging layer-wise analysis, we track the propagation of sensitive attributes through internal model representations. Building on this, we deploy a control-vector intervention that effectively reduces racial disparities by up to 70% (33% on average) without impairing overall model performance. Our approach provides a transparent and practical toolkit for the identification and mitigation of bias in financial LLM deployments.
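
The abstract describes the counterfactual step only at a high level, but the core idea is easy to make concrete. Below is a minimal Python sketch, not the authors' actual framework: it builds paired mortgage-application prompts identical in every attribute except a race marker, queries the model on each, and counts decision flips. The prompt template and the model-as-callable interface are hypothetical stand-ins, and the paper's layer-wise analysis and control-vector intervention are not shown.

```python
# Hedged sketch of counterfactual bias testing for an LLM credit-decision task.
# All names here (PROMPT, query_model) are illustrative assumptions, not the
# paper's code.

from itertools import product

PROMPT = (
    "Mortgage application -- race: {race}; annual income: ${income:,}; "
    "credit score: {score}; loan amount: $250,000. Reply APPROVE or DENY."
)

def counterfactual_flip_rate(query_model, incomes, scores,
                             races=("white", "Black")):
    """Fraction of identical profiles whose decision changes with race alone.

    query_model: a callable mapping a prompt string to the model's decision;
    supplied by the user, since no specific LLM API is assumed here.
    """
    flips = total = 0
    for income, score in product(incomes, scores):
        decisions = {
            race: query_model(PROMPT.format(race=race, income=income, score=score))
            for race in races
        }
        total += 1
        flips += len(set(decisions.values())) > 1  # same profile, different outcome
    return flips / total
```

Because each pair of prompts differs only in the race field, any systematic flip rate is attributable to that field alone, which is the premise behind the discrepancies the paper reports.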

Solow-Niederman on AI and Doctrinal Collapse

Alicia Solow-Niederman (George Washington U Law) has posted “AI and Doctrinal Collapse” (78 Stanford Law Review __ (forthcoming 2026)) on SSRN. Here is the abstract:

Artificial intelligence runs on data. But the two legal regimes that govern data—information privacy law and copyright law—are under pressure. Formally, each regime demands different things. Functionally, the boundaries between them are blurring, and their distinct rules and logics are becoming illegible.

This Article identifies this phenomenon, which I call “inter-regime doctrinal collapse,” and exposes the individual and institutional consequences. Through analysis of pending litigation, discovery disputes, and licensing agreements, this Article highlights two dominant exploitation tactics enabled by collapse: Companies “buy” data through business-to-business deals that sidestep individual privacy interests, or “ask” users for broad consent through privacy policies and terms of service that leverage notice-and-choice frameworks. Left unchecked, the data acquisition status quo favors established corporate players and impedes law’s ability to constrain the arbitrary exercise of private power.

Doctrinal collapse poses a fundamental challenge to the rule of law. When a leading AI developer can simultaneously argue that data is public enough to scrape—defusing privacy and copyright controversies—and private enough to keep secret—avoiding disclosure or oversight of its training data—something has gone seriously awry with how law constrains power. To manage these costs and preserve space for salutary innovation, we need a law of collapse. This Article offers institutional responses, drawn from conflict of laws and legal pluralism, to create one.

Perot on Anticipating AI: A Partial Solution to Image Rights Protection for Performers

Emma Perot (U the West Indies (Saint Augustine)) has posted “Anticipating AI: A Partial Solution to Image Rights Protection for Performers” (European Intellectual Property Review, Volume 46(7), pp. 407-418) on SSRN. Here is the abstract:

This article assesses Equity’s ‘Stop AI from Stealing the Show’ survey and suggests that a statutory image right could address some of the harms posed by AI, namely, unauthorised digital replicas. Unauthorised commercial use of persona can already be pursued under passing off and Advertising Codes in certain circumstances, but the inclusion of persona in films, television programs, and audio works is not addressed by the existing law. Even the US right of publicity is potentially inadequate in this regard because this type of harm is novel and has not been fully contemplated outside of the realm of video game avatars. Introducing a statutory image right in the UK that reflects the US ‘No Fakes’ Bill will only be a partial solution because of the existing contractual practices that result from inequality of bargaining power in the entertainment industry. Additionally, nefarious uses of deepfakes are more suited to technological intervention and criminal penalties.

Babaei et al. on Explainable Fairness, with Application to Credit Lending

Golnoosh Babaei (U Pavia) et al. have posted “Explainable Fairness, with Application to Credit Lending” on SSRN. Here is the abstract:

Fairness is a key requirement for artificial intelligence applications. The assessment of fairness is typically based on group-based measures, such as statistical parity, which compares the machine learning output for the different population groups of a protected variable. Although intuitive and simple, statistical parity may be affected by the presence of control variables correlated with the protected variable. To remove this effect, we propose to employ Shapley values, which measure the additional difference in output specifically due to the protected variable. To remove the possible impact of correlations on Shapley values, we compare them across different subgroups of the most correlated control variables, checking for the presence of Simpson’s paradox, whereby a fair model may become unfair when conditioning on a control variable. We also show how to mitigate unfairness by means of propensity score matching, which can improve statistical parity by building a training sample that matches similar individuals in different protected groups. We apply our proposal to a real-world database containing 157,269 personal lending decisions and show that both logistic regression and random forest models are fair when all loan applications are considered, but become unfair for high requested loan amounts. We also show how propensity score matching can mitigate this bias.
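
For readers who want the two group-based measures in concrete form, here is a minimal Python sketch, under assumed column names (group coded 0/1, approved as the model output, plus a list of control features), of statistical parity and a greedy one-to-one propensity-score match of the general kind the abstract describes. The authors' Shapley-value decomposition and subgroup analysis are not reproduced here.

```python
# Hedged sketch: statistical parity and greedy propensity score matching.
# Column names ("group", "approved") and the control-feature list are
# illustrative assumptions, not the paper's dataset schema.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def statistical_parity_gap(df: pd.DataFrame) -> float:
    """Absolute difference in approval rates between the two protected groups."""
    rates = df.groupby("group")["approved"].mean()
    return abs(rates.iloc[0] - rates.iloc[1])  # assumes exactly two groups

def propensity_matched_sample(df: pd.DataFrame, controls: list[str]) -> pd.DataFrame:
    """Match each group-1 individual to the nearest group-0 individual
    on the estimated propensity of group membership, without replacement."""
    ps = LogisticRegression(max_iter=1000).fit(df[controls], df["group"])
    df = df.assign(score=ps.predict_proba(df[controls])[:, 1])
    treated = df[df["group"] == 1]
    pool = df[df["group"] == 0].copy()
    matches = []
    for _, row in treated.iterrows():
        j = (pool["score"] - row["score"]).abs().idxmin()  # nearest neighbour
        matches.append(pool.loc[j])
        pool = pool.drop(j)  # each control is used at most once
        if pool.empty:
            break
    return pd.concat([treated, pd.DataFrame(matches)])
```

Recomputing statistical_parity_gap on the matched sample compares similar individuals across protected groups, which is the mitigation mechanism the abstract credits with improving statistical parity.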