Burk on Asemic Defamation, or, the Death of the AI Speaker

Dan L. Burk (UC Irvine Law) has posted “Asemic Defamation, or, the Death of the AI Speaker” (First Amendment Law Review, Vol. 22, 2024) on SSRN. Here is the abstract:

Large Language Model (“LLM”) systems have captured considerable popular, scholarly, and governmental notice. By analyzing vast troves of text, these machine learning systems construct a statistical model of relationships among words, and from that model they are able to generate syntactically sophisticated texts. However, LLMs are prone to “hallucinate,” which is to say that they routinely generate statements that are demonstrably false. Although couched in the language of credible factual statements, such LLM output may entirely diverge from known facts. When they concern particular individuals, such texts may be reputationally damaging if the contrived false statements they contain are derogatory.

Scholars have begun to analyze the prospects and implications of such AI defamation. However, most analyses to date begin from the premise that LLM texts constitute speech that is protected under constitutional guarantees of expressive freedom. This assumption is highly problematic, as LLM texts have no semantic content. LLMs are not designed to, are not able to, and do not attempt to fit the truth values of their output to the real world. LLM texts appear to constitute an almost perfect example of what semiotics labels "asemic signification," that is, symbols that have no meaning except the meaning imputed to them by a reader.

In this paper, I question whether asemic texts are properly the subject of First Amendment coverage. I consider both LLM texts and historical examples to examine the expressive status of asemic texts, recognizing that LLM texts may be the first instance of fully asemic texts. I suggest that attribution of meaning by listeners alone cannot credibly place such works within categories of protected speech. In the case of LLM outputs, there is neither a speaker, nor communication of any message, nor any meaning that is not supplied by the text recipient. I conclude that LLM texts cannot be considered protected speech, which vastly simplifies their status under defamation law.
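The abstract's core technical claim — that LLMs generate text purely from statistical relationships among words, with no model of truth — can be illustrated with a toy bigram generator. This is a deliberately minimal sketch of the general technique, not the architecture of any actual LLM (which uses neural networks over vastly larger corpora): the model samples each next word solely from observed word-to-word co-occurrence counts, so "valid" and "invalid" are interchangeable to it, and nothing in the generation step consults the real world.

```python
import random
from collections import defaultdict

# Toy corpus (hypothetical sentences): the model learns only which
# words follow which, never whether any resulting sentence is true.
corpus = (
    "the court held the statute valid . "
    "the court held the statute invalid . "
    "the statute was challenged in the court ."
).split()

# Bigram table: successors[w] lists every word observed right after w.
successors = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word].append(nxt)

def generate(start, n, seed=0):
    """Emit up to n words by repeatedly sampling an observed successor.

    The output is syntactically plausible (every adjacent pair occurred
    in the corpus) but carries no truth value the model is aware of.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(n - 1):
        choices = successors.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

Because "held the statute valid" and "held the statute invalid" are statistically symmetric here, the generator is equally likely to emit either — a miniature analogue of the "hallucination" the paper describes, where fluent output diverges freely from fact.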