Hannah Kuker (U Miami) has posted “When Opt-Outs Fail Us: Charting a More Effective Course for Attribution & Monetization on AI Platforms” on SSRN. Here is the abstract:
At present, improvements to AI image-generating technology have been forestalled at the crossroads of the very debate through which intellectual property law was born: the balance between the protection of individual creator rights and the progression of science and the useful arts. These competing interests are reflected in the recent legal turmoil inundating the court system as artists sue AI developers. In the interim, the legislature and tech industry alike have been advocating for an “opt-out,” or “notice and consent,” approach to assembling training datasets. Yet notice and consent frameworks have been historically ineffective, with complexity and opaque information flows creating a false appearance of user autonomy. We face this illusion of control should we adopt an opt-out approach to AI training dataset permissions. Because opt-outs are location-specific, they ignore downstream copying, which is misleading for artists who believe that if they have opted out once, they have done so successfully across the board. AI companies are primed to manipulate this environment, exploiting artists’ inability to opt out effectively, all under the guise of compliance. At the same time, we must recognize the profound impact AI can have on the arts, a potential that falls flat without rich, diverse, and high-quality training data. There exists a need for an alternative that respects the interests of both parties, or better yet encourages positive relationships between them. This Essay offers that solution. It calls for the regulation of data provenance recording practices by AI developers to facilitate mechanisms for attribution and monetization without sacrificing AI functionality. This Essay’s proposal avoids the pitfalls of opt-out schemas to preserve the key promises of intellectual property law.
