Tang on Creative Labor and Platform Capitalism

Xiyin Tang (UCLA Law) has posted “Creative Labor and Platform Capitalism” (Forthcoming, UCLA Law Review, Volume 73 (2026)) on SSRN. Here is the abstract:

The conventional account of creativity and cultural production is one of passion, free expression, and self-fulfillment, a process whereby individuals can assert their autonomy and individuality in the world. This conventional account of creativity underlies prominent theories of First Amendment and intellectual property law, including the influential “semiotic democracy” literature, which posits that new digital technologies, by providing everyday individuals the tools to create and disseminate content, result in a better and more representative democracy. In this view, digital content creation is largely (1) done by amateurs; (2) done for free; and (3) conducive to greater freedom.

This Article argues that the conventional story of creativity, honed in the early days of the Internet, fails to account for significant shifts in how creative work is extracted, monetized, and exploited in the new platform economy. Increasingly, digital creation is neither done by amateurs nor done for free. Instead, and as this Article discusses, fundamental shifts in the business models of the largest Internet platforms, led by YouTube, paved a path for the class of largely professionalized creators who increasingly rely on digital platforms to make a living today. In the new digital economy, monetization—in which users of digital platforms sell their content, and themselves, for a portion of the platform’s advertising revenues—not free sharing, reigns. And far from promoting freedom, such increased reliance on large platforms brings creators closer to gig workers—the Uber drivers, DoorDash delivery workers, and millions of other part-time laborers who increasingly find themselves at the mercy of the opaque algorithms of the new platform capitalism.

This reframing—of creation not as self-realization but as work that is both precarious and exploited, most notably as surplus data value—demands recognition that any framework for regulating informational capitalism’s exploitation of labor is incomplete without considering how creative work is extracted and datafied in the digital platform economy.

Blaszczyk on Posthuman Copyright: AI, Copyright, and Legitimacy

Matt Blaszczyk (U Michigan Law) has posted “Posthuman Copyright: AI, Copyright, and Legitimacy” on SSRN. Here is the abstract:

Copyright’s human authorship requirement is an institutional attempt to assert legal, moral, and sociological legitimacy at a time of crisis. The U.S. Copyright Office, the courts, and the so-called copyright humanists portray the requirement as a beacon of copyright’s faith, meant to protect authors in the AI era. The minimal threshold for human authorship, however, forces us to question whether it is merely rhetoric, which the law has always employed regardless of its justification. This Article bridges the gap between doctrinal, theoretical, socio-legal, and constitutionalist scholarship, arguing that human authorship is an ideology to which the law is only nominally faithful. The Article analyzes the U.S. Copyright Office’s pronouncements, the D.C. Circuit ruling in Thaler v. Perlmutter, and the pending case of Allen v. Perlmutter, arguing that the Office’s approach, despite its rhetoric, is not meant to meaningfully stop the AI revolution. Whether interpreted broadly or narrowly, the human authorship requirement is unlikely to protect the interests of human authors in the AI era. Incorporating insights from copyright history and theoretical debates about romantic authorship, this Article argues that copyright has failed to protect those interests for over a century, instead favoring the interests of powerful corporations. If and when copyright becomes a regime for robots, the question is whether that expansion will also primarily benefit corporations. Arguably, copyright has never cared much for human authors, and it is time to question whether we should keep pretending otherwise.

Gow on SONGPRINT: A Voluntary Labelling Framework for AI-assisted Music Creation

Gordon A. Gow (U Alberta Arts) has posted “SONGPRINT: A Voluntary Labelling Framework for AI-assisted Music Creation” on SSRN. Here is the abstract:

As generative artificial intelligence (AI) tools increasingly permeate music creation, questions of transparency, authorship, and creative practice take on renewed urgency. This paper introduces SONGPRINT, a multi-dimensional labelling framework designed to assist musicians in reflecting on and disclosing AI’s role in the compositional process. Prompted in part by the recent Velvet Sundown controversy—a high-profile case of undisclosed AI-generated music—the project advances SONGPRINT as a conversation starter: a modest prototype intended to encourage critical engagement with evolving norms of attribution, labour, and listening in an AI-assisted musical landscape.

This paper is informed by my practice as a part-time songwriter and musician. As a product of the analog generation, I have witnessed firsthand the evolution of music production, from magnetic tape and discrete transistor-based audio components to the advent of digital audio workstations and integrated software-based signal processing, and now to the emergence of generative music platforms. Over the past year, I have conducted a series of experiments using platforms like Suno to explore how AI can serve as a creative collaborator in shaping melodies, interpreting lyrics, and producing polished musical outputs. These experiences inspired the development of SONGPRINT as both a reflective and practical framework for discussing the dialectic between generative AI and human creativity in music production.

Leistner & Antoine on TDM and AI training in the European Union – The Hamburg Regional Court’s “LAION” judgment

Matthias Leistner (Ludwig Maximilian U Munich (LMU) Institute Civil Law and Procedure) and Lucie Antoine (Ludwig Maximilian U Munich (LMU) Law) have posted “TDM and AI training in the European Union – The Hamburg Regional Court’s “LAION” judgment” on SSRN. Here is the abstract:

The Hamburg Regional Court’s decision in the “LAION” case is the first judgment of a European court assessing certain (preparatory) AI training activities (namely the creation of a freely available database (the LAION database) consisting of text-image descriptions and URLs) from a copyright perspective. The court decides the case based on the text and data mining exception specifically for purposes of nonprofit scientific research (Sec. 60d German Copyright Act, Art. 3 DSM Directive). However, in an obiter dictum the court also comprehensively addresses the general exception for (commercial) text and data mining (Sec. 44b German Copyright Act, Art. 4 DSM Directive), providing important considerations for the provision’s future interpretation. The decision underscores once more that defining the requirements for a machine-readable opt-out pursuant to Art. 4(3) DSM Directive is currently one of the most pressing issues in the discussion on EU copyright and AI training, even more so since Art. 53(1)(c) AI Act obliges providers of general-purpose AI models to put in place a policy to identify and comply with respective reservations of rights during the training process, even when carried out in a third country outside the EU.

Bedell on AI Works & Human Authorship

Matthew Bedell (U Akron) has posted “AI Works & Human Authorship” on SSRN. Here is the abstract:

This comment serves as a counterargument to the position that any final product containing purely AI-generated material cannot have human authorship. Human advancement is being stymied by overly restrictive rules governing AI and human authorship. The United States Copyright Office (USCO) has begun to change its stance with the recently released second part of the Copyright and Artificial Intelligence report. However, these changes are not clear enough to allow for human authorship of copyrightable material made by AI and overseen by a person. I explore the argument that human authorship comes from creative expression, that AI empowers said expression, and that a work made by utilizing AI in furtherance of creative expression is therefore owned by the human author.

Abiri on ML-Mediated Creativity

Gilad Abiri (Peking U Transnational Law) has posted “ML-Mediated Creativity” (Harvard Art Law Review Musings, June 2025) on SSRN. Here is the abstract:

This essay examines how machine learning systems fundamentally alter the dynamics of cultural innovation. Using anime’s post-war evolution as a case study, it argues that genuine creativity emerges from productive friction—the collision of different cultural traditions, generations, and artistic approaches. However, ML systems trained on existing cultural works create statistical averages that eliminate this generative friction, replacing dynamic cultural processes with algorithmic optimization. Current intellectual property frameworks cannot address this transformation because they treat cultural works as discrete objects rather than materials for creative play. The essay proposes two interventions: preserving “friction spaces” in educational institutions and regulating ML architecture to maintain distinct cultural lineages rather than collapsing them into optimized averages.

Graves on Upload Complete: An Introduction to Creator Economy Law

Franklin Graves (LinkedIn Corporation) has posted “Upload Complete: An Introduction to Creator Economy Law” (Belmont Law Journal, Volume 1, 2024-2025) on SSRN. Here is the abstract:

Individuals have been creating and sharing creative works online since the dawn of the World Wide Web. However, only in the last decade and a half have the platform monetization, business, and audience factors reached the necessary levels to provide the foundation of a fully functioning, sometimes self-sustaining, creator economy. To understand the creator economy, it is important to contextualize it with the evolution of the web, the rise of Web 2.0, the on-going development of web3, and the shift from a consumer economy to a creator economy. The democratization of the web and technological advancements have made it possible for anyone with an internet connection to become a creator.

The creator economy has also introduced a new set of legal challenges and opportunities, including issues related to intellectual property, contract law, privacy, and content regulation. Courts and policymakers are still grappling with how to deal with these issues in a way that recognizes, respects, and protects the rights of creators, brands, platforms, and consumers.

Divided into three parts, this article aims to accelerate the conversation around an emerging area the author proposes to label as “Creator Economy Law” while simultaneously offering a survey of how, for nearly the past two decades, governments, regulators, and courts have been shaping the well-established creator economy.

Part I introduces the creator economy, defining “creators” and providing a brief history of creativity on the internet across the concepts of Web 2.0 and web3. The article then discusses the explosion of the creator economy in recent years, driven by the shift from a producer-consumer economy to an attention-driven creator economy. Part II examines a selection of laws that affect creators, from copyright and trademark to privacy and advertising. The article also discusses the evolving relationship between brands and influencers, the rebirth of remix culture, the liability of platforms for content posted by creators and brands, and the self-regulation and moderation of content by platforms. Part III discusses the future opportunities and challenges facing creators in the digital age, including the potential impact of decentralized platforms and communities, the proliferation of generative artificial intelligence technologies, and future law and policy considerations.

Rai on The Reliability Response to Patent Law’s AI Challenges

Arti K. Rai (Duke U Law) has posted “The Reliability Response to Patent Law’s AI Challenges” on SSRN. Here is the abstract:

Pervasive AI use adds newfound importance to longstanding debates over patent timing and reliability. Patent claims on speculative ideas generated by AI, or even the infusion of speculative AI-generated ideas into the public domain, may defeat patent incentives for more careful research. Although challenges that AI use poses for patent validity requirements like human inventorship and nonobviousness have received more attention, reliability is equally important.

Indeed, as this Essay argues, the issues are linked. If requirements for inventorship and nonobviousness were adjusted to emphasize reliability, a human role could be preserved, and AI use would not necessarily threaten patents. Currently, as empirical evidence presented in this Essay shows, the fear of imperiling patents may be chilling normatively desirable transparency about such use.

The path forward requires embracing reliability throughout patent doctrine. In addition to changes to inventorship and nonobviousness doctrine, robust adoption of reliability requires fortification of the utility requirement for securing a patent and a parallel tightening of requirements for the types of information that can be used to thwart patent grants. Longer term, if cost barriers to innovation across fields fall dramatically, certain non-patent exclusivities may need to play the dominant incentive role. But for the time being, AI can provide a powerful catalyst for bolstering a level of reliability the patent system should arguably have had all along.

Ginsburg on AI Inputs, Fair Use and the U.S. Copyright Office Report

Jane C. Ginsburg (Columbia U Law) has posted “AI Inputs, Fair Use and the U.S. Copyright Office Report” on SSRN. Here is the abstract:

The US has yet to produce determinative caselaw on whether inputting works to compile a generative AI system’s training data is a fair use. Judicial rulings, however, may soon emerge, as many of the multiple pending cases are reaching the stage of a judgment on the merits of the copyright owners’ infringement claims. In addition, the U.S. Copyright Office recently issued Part 3 (Generative AI Training) of a report requested by Congress on Copyright and Artificial Intelligence, in which the Office extensively and rigorously examined the application of copyright law to the copying of protected works to assemble data to train generative models.

Hrdy on Trade Secrets and Artificial Intelligence

Camilla Alexandra Hrdy (Rutgers) has posted “Trade Secrets and Artificial Intelligence” (Forthcoming in Elgar Concise Encyclopedia of Artificial Intelligence and the Law (Edward Elgar, eds. Ryan Abbott & Elizabeth Rothman, 2026)) on SSRN. Here is the abstract:

Companies create, collect, and manage significant amounts of economically valuable information. Some of this information is deliberately kept secret and can be protected under trade secret law. Trade secret laws protect certain forms of secret and economically valuable information against improper use or disclosure by others. Artificial intelligence (AI) raises many challenging issues for trade secret law. This entry identifies some of the major issues and what commentators have said about them: (1) protecting AI as a trade secret, (2) the difference between closed-source and open-source AI and the trade secrecy implications, (3) risks posed by generative AI to existing trade secrets, (4) whether AI poses risks to companies’ trade secrets, (5) whether AI-generated outputs can be protected as trade secrets, and (6) whether trade secrecy stands in the way of transparency goals.