Eric Martínez (U Chicago Law) has posted “Traditional and Computational Canons” (Harvard Journal of Law & Technology, Vol. 39 (forthcoming 2026)) on SSRN. Here is the abstract:
As part of the rise of modern textualism, dictionaries and linguistic canons have become a ubiquitous part of legal interpretation. One longstanding question is whether judges successfully use these tools to arrive at the plain meaning of a legal text, or merely as window-dressing for their preferred policy outcome. The practical significance of this question extends across all major doctrinal areas, and with the Supreme Court’s overturning of Chevron deference, its importance is only set to grow, as courts are now instructed to use every tool at their disposal to resolve ambiguity when interpreting a law. This Article is the first to show, contrary to longstanding academic speculation, that courts by and large align with linguistic consensus—as judged by both ordinary and expert readers—when invoking dictionaries and linguistic canons to uncover the plain meaning of a term at issue in a legal dispute.
After documenting the rise of plain meaning, linguistic canons, and dictionaries in a sample of over 2 million published opinions across the federal and state judiciaries, the Article presents the results of an experiment examining how lawyers (n=2,373) and non-lawyers (n=4,533) interpret the words at issue in 180 real-world plain-meaning cases. The experiment revealed that lawyers and laypeople tended to strongly converge on one interpretation over another, even in cases where there appeared to be two equally applicable canons leading to opposite results, and that this interpretation coincided with that of the court in a supermajority of cases. These findings suggest that courts use canons and dictionaries not merely as a smokescreen but as part of a good-faith (and largely successful) attempt to uncover the consensus meaning of a legal text.
With the advent of large language models purportedly equipped with legal and linguistic competence, a second question concerns whether novel computational tools might offer a useful supplement to judges’ use of traditional tools to determine the best reading of a legal text. Prompting state-of-the-art AI models such as GPT-4o and o1 on the aforementioned materials, this Article is the first to show that the models’ predictions of linguistic consensus reliably match, though do not exceed, those of human judges invoking canons and dictionaries in real-world cases, even when controlling for possible data contamination and potential knowledge of prior cases. These findings suggest that some computational tools may offer an efficient, though not necessarily more effective, supplement to traditional tools in uncovering plain meaning.
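The abstract does not spell out the prompting protocol. As a rough illustration only, here is a minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment, of how one might elicit a model's preferred reading of a disputed term; the provision and candidate readings below are invented placeholders, not the Article's actual prompts, cases, or scoring procedure:

```python
# Illustrative sketch only: the study's actual prompts, materials, and scoring
# are not described in the abstract. Assumes the OpenAI Python SDK (>= 1.0)
# and an OPENAI_API_KEY in the environment; the statute and readings are
# invented placeholders.
from openai import OpenAI

client = OpenAI()

statute = 'No person may operate a "vehicle" in the park.'
candidate_readings = {
    "A": "A 'vehicle' includes bicycles and scooters.",
    "B": "A 'vehicle' means motorized conveyances only.",
}

prompt = (
    "Which reading of the disputed statutory term better reflects its "
    "ordinary meaning?\n\n"
    f"Provision: {statute}\n"
    f"Reading A: {candidate_readings['A']}\n"
    f"Reading B: {candidate_readings['B']}\n\n"
    "Answer with a single letter, A or B."
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the models named in the abstract
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # deterministic-ish output for comparability across items
)

# The model's chosen reading could then be compared against the court's
# interpretation and the surveyed human consensus.
print(response.choices[0].message.content.strip())
```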