Goodman on Synthetic Content: Default to Distrust

Ellen P. Goodman (Rutgers Law) has posted “Synthetic Content: Default to Distrust” (forthcoming in the Case Western Reserve Law Review) on SSRN. Here is the abstract:

AI-generated or altered content — synthetic content — can cause economic, dignitary, and epistemic harms. To combat epistemic harms in particular, many jurisdictions have adopted or are considering mandatory synthetic content provenance or content labels. This article takes a skeptical view of such mandates on practical and conceptual grounds. Synthetic content disclosure mandates usually rest on two premises: (1) that synthetic content deceives or otherwise distorts public discourse, as compared with authentic content; and (2) that source disclosure effectively combats discourse harms. These premises are not well supported, and the disclosures themselves may distort understanding. Moreover, in a near-future world where a large portion of communications are partially or fully synthetic, it makes little sense to assume such content is rare. These problems of scaling, burden, and meaning all feed into the First Amendment infirmities of mandatory synthetic content disclosures. This is not to devalue the importance of content authentication or voluntary synthetic content disclosure. It is simply to challenge the wisdom of taking what are inherently contested sociotechnical calls — not unlike distinguishing what is true from false — and legally mandating that they be made.

An alternative to “defining in” the synthetic is to “define out” the authentic — that is, to default to distrust in the authenticity of content. Rather than putting the onus on synthetic content creators and distributors to mark content as synthetic, a more scalable approach is to support those who want to authenticate content as human-made. Robust content authentication calls for quite different policy interventions than disclosure mandates. The latter draw from the regulatory heritage of media laws, such as electioneering and advertising disclosure laws, which force reluctant communicators to disclose what they would prefer to conceal. By contrast, the policies needed to support authentication would draw from the regulatory heritage of software law, such as the Digital Millennium Copyright Act’s requirement that distributors preserve content technical protection measures. Here, the goal is to preserve communications voluntarily made and protect them from downstream tampering. If and when most content is at least partially synthetic, those who want to flag their communications as authentic (or provide provenance information about content alterations) need assurance that those flags will carry through the content distribution chain to recipients.