Asil & Wollmann on Can Machines Commit Crimes Under US Antitrust Laws?

Aslihan Asil (Yale University) and Thomas Wollmann (University of Chicago) have posted “Can Machines Commit Crimes Under US Antitrust Laws?” on SSRN. Here is the abstract:

Generative artificial intelligence is being rapidly deployed for corporate tasks including pricing. Suppose one of these machines communicates with the pricing manager of a competing firm, proposes to collude, receives assent, and raises price. Is this a crime under US antitrust laws, and, if so, who is liable? Based on the observed behavior of the most widely adopted large language model, we argue that this conduct is imminent, would satisfy the requirements for agreement and intent under Section 1 of the Sherman Act, and would confer criminal liability on both firms as well as the pricing manager of the competing firm.

Lemley, Henderson & Hashimoto on Liability for Harmful AI Speech (including hallucinations)

Mark A. Lemley (Stanford Law School), Peter Henderson (Stanford University), and Tatsunori Hashimoto (same) have posted “Where’s the Liability in Harmful AI Speech?” on SSRN. Here is the abstract:

Generative AI, in particular text-based “foundation models” (large models trained on a huge variety of information including the internet), can generate speech that could be problematic under a wide range of liability regimes. Machine learning practitioners regularly “red-team” models to identify and mitigate such problematic speech: from “hallucinations” falsely accusing people of serious misconduct to recipes for constructing an atomic bomb. A key question is whether these red-teamed behaviors actually present any liability risk for model creators and deployers under U.S. law, incentivizing investments in safety mechanisms. We examine three liability regimes, tying them to common examples of red-teamed model behaviors: defamation, speech integral to criminal conduct, and wrongful death. We find that any Section 230 immunity analysis or downstream liability analysis is intimately wrapped up in the technical details of algorithm design. And there are many roadblocks to truly finding models (and their associated parties) liable for generated speech. We argue that AI should not be categorically immune from liability in these scenarios and that as courts grapple with the already fine-grained complexities of platform algorithms, the technical details of generative AI loom above with thornier questions. Courts and policymakers should think carefully about what technical design incentives they create as they evaluate these issues.

Lee, Cooper & Grimmelmann on Copyright and the Generative AI Supply Chain

Katherine Lee (Google DeepMind; Cornell), A. Feder Cooper (Cornell CS), and James Grimmelmann (Cornell Law) have posted “Talkin’ ‘Bout AI Generation: Copyright and the Generative AI Supply Chain” on SSRN. Here is the abstract:

This essay is an attempt to work systematically through the copyright infringement analysis of the generative AI supply chain. Our goal is not to provide a definitive answer as to whether and when training or using a generative AI is infringing conduct. Rather, we aim to map the surprisingly large number of live copyright issues that generative AI raises, and to identify the key decision points at which the analysis forks in interesting ways.

Minow on Equality, Equity, and Algorithms

Martha Minow (Harvard Law School) has posted “Equality, Equity, and Algorithms: Learning from Justice Rosalie Abella” (University of Toronto Law Journal) on SSRN. Here is the abstract:

In the United States, employers, schools, and governments that use race or other protected classifications can face a collision between two competing legal requirements: to avoid race-conscious decision making and to avoid decisions with racially disparate impacts. Growing use of machine learning and other predictive algorithmic tools has heightened this tension as employers and other actors use tools that make choices about contrasting definitions of equality and anti-discrimination; design algorithmic practices against explicit or implicit uses of certain personal characteristics associated with historic discrimination; and address inaccuracies and biases in the data and algorithmic practices. Justice Rosalie Abella’s approach to equality issues, highly influential in Canadian law, offers guidance by directing decision makers to (a) acknowledge and accommodate differences in people’s circumstances and identities; (b) resist attributing to personal choice the patterns and practices of society, including different starting points and opportunities; and (c) resist consideration of race or other group identities as justification when used to harm historically disadvantaged groups, but permit such consideration when intended to remedy historic exclusions or economic disadvantages.

Warner & Sloan on How AI Unfairly Tilts the Playing Field in Risk Assessment

Richard Warner (Chicago-Kent College of Law) and Robert H. Sloan (University of Illinois) have posted “How AI Unfairly Tilts the Playing Field: Privacy, Fairness, and Risk Shadows” on SSRN. Here is the abstract:

Private sector applications of artificial intelligence (AI) raise related questions of informational privacy and fairness. Fairness requires that market competition occurs on a level playing field, and uses of AI unfairly tilt the field. Informational privacy concerns arise because AI tilts the playing field by taking information about activities in one area of one’s life and using it in ways that impose novel risks in areas not formerly associated with such risks. The loss of control over that information constitutes a loss of informational privacy. To illustrate both the fairness and privacy issues, imagine, for example, that Sally declares bankruptcy after defaulting on $50,000 of credit card debt. She incurred the debt by paying for lifesaving medical treatment for her eight-year-old daughter. Post-bankruptcy Sally is a good credit risk. Her daughter has recovered, and her sole-proprietor business is seeing increased sales. Given her bankruptcy, however, an AI credit scoring system predicts that she is a poor risk and assigns her a low score. That low credit score casts a shadow that falls on her when her auto insurance company, which uses credit scores in its AI system as a measure of the propensity to take risks, raises her premium. Is it fair that saving her daughter’s life should carry with it the risk—realized in this case—of a higher premium? The pattern is not confined to credit ratings and insurance premiums. AI routinely creates risk shadows.

We address fairness questions in two steps. First, we turn to philosophical theories of fairness as equality of opportunity to spell out the content behind our metaphor of tilting the playing field. Second, we address the question of how, when confronted with a mathematically complex AI system, one can tell whether the system meets requirements of fairness. We answer by formulating three conditions whose violation makes a system presumptively unfair. The conditions provide a lens that reveals relevant features when policy makers and regulators investigate complex systems. Our goal is not to resolve fairness issues but to contribute to the creation of a forum in which legal regulators and affected parties can work to resolve them. The third of our three conditions requires that systems incorporate contextual information about individual consumers, and we conclude by raising the question of whether our suggested approach to fairness significantly reduces informational privacy. We do not answer the question but emphasize that fairness and informational privacy questions can closely intertwine.

Hoffman & Arbel on Generative Interpretation

David A. Hoffman (U Penn Law) and Yonathan A. Arbel (U Alabama Law) have posted “Generative Interpretation” (NYU Law Review 2024) on SSRN. Here is the abstract:

We introduce generative interpretation, a new approach to estimating contractual meaning using large language models. As AI triumphalism is the order of the day, we proceed by way of grounded case studies, each illustrating the capabilities of these novel tools in distinct ways. Taking well-known contracts opinions, and sourcing the actual agreements that they adjudicated, we show that AI models can help factfinders ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties’ agreements. We also illustrate how models can calculate the probative value of individual pieces of extrinsic evidence.

After offering best practices for the use of these models given their limitations, we consider their implications for judicial practice and contract theory. Using LLMs permits courts to estimate what the parties intended cheaply and accurately, and as such generative interpretation unsettles the current interpretative stalemate. Their use responds to efficiency-minded textualists and justice-oriented contextualists, who argue about whether parties will prefer cost and certainty or accuracy and fairness. Parties—and courts—would prefer a middle path, in which adjudicators strive to predict what the contract really meant, admitting just enough context to approximate reality while avoiding unguided and biased assimilation of evidence. As generative interpretation offers this possibility, we argue it can become the new workhorse of contractual interpretation.

Recommended.

Wasserman-Rozen et al. on The Limits of Explainability in AI

Hofit Wasserman-Rozen (Tel-Aviv U), Ran Gilad-Bachrach (same), and Niva Elkin-Koren (same) have posted “Lost in Translation: The Limits of Explainability in AI” on SSRN. Here is the abstract:

As artificial intelligence becomes more prevalent, regulators are increasingly turning to legal measures, like “a right to explanation” to protect against potential risks raised by AI systems. However, are eXplainable AI (XAI) tools – the artificial intelligence tools that provide such explanations – up for the task?

This paper critically examines XAI’s potential to facilitate the right to explanation by applying the prism of explanation’s role in law to different stakeholders. Inspecting the underlying functions of reason-giving reveals different objectives for each of the stakeholders involved. From the perspective of a decision-subject, reason-giving facilitates due process and acknowledges human agency. From a decision-maker’s perspective, reason-giving contributes to improving the quality of the decisions themselves. From an ecosystem perspective, reason-giving may strengthen the authority of the decision-making system toward different stakeholders by promoting accountability and legitimacy, and by providing better guidance. Applying this analytical framework to XAI’s generated explanations reveals that XAI fails to fulfill the underlying objectives of the right to explanation from the perspective of both the decision-subject and the decision-maker. In contrast, XAI is found to be extremely well-suited to fulfill the underlying functions of reason-giving from an ecosystem perspective, namely, strengthening the authority of the decision-making system. However, lacking all other virtues, this isolated ability may be misused or abused, eventually harming XAI’s intended human audience. The disparity between human decision-making and automated decisions makes XAI an insufficient and even a risky tool, rather than serving as a guardian of human rights. After conducting a rigorous analysis of these ramifications, this paper concludes by urging regulators and the XAI community to reconsider the pursuit of explainability and the right to explanation of AI systems.

Tomlinson, Patterson & Torrance on The A.I. Training Set as a Trojan Horse of Misinformation

Bill Tomlinson (UC Irvine), Donald Patterson (same), and Andrew W. Torrance (Kansas Law; MIT Sloan) have posted “Turning Fake Data into Fake News: The A.I. Training Set as a Trojan Horse of Misinformation” (San Diego Law Journal, Forthcoming) on SSRN. Here is the abstract:

Generative artificial intelligence (“A.I.”) offers tremendous benefits to society. However, these benefits must be carefully weighed against the societal damage A.I. can also cause. Dangers posed by inaccurate training sets have been raised by many authors. These include racial discrimination, sexual bias, and other pernicious forms of misinformation. One remedy to such problems is to ensure that training sets used to teach A.I. models are correct and that the data upon which they rely are accurate. An assumption behind this correction is that data inaccuracies are inadvertent mistakes. However, a darker possibility exists: the deliberate seeding of training sets with inaccurate information for the purpose of skewing the output of A.I. models toward misinformation. As United States Supreme Court Justice Oliver Wendell Holmes, Jr., suggested, laws are not written for the “good man”, because good people will tend to obey moral and legal principles in manners consistent with a well functioning society even in the absence of formal laws. Rather, Holmes proposed, laws should be written with the “bad man” in mind, because bad people will push the limits of acceptable behavior, engaging in cheating, dishonesty, crime, and other societally-damaging practices, unless constrained by carefully-designed laws and their accompanying penalties.

This article raises the spectre of the deliberate sabotage of training sets used to train A.I. models, with the purpose of perverting the outputs of such models. Examples include fostering revisionist histories, unjustly harming or rehabilitating the reputations of people, companies, or institutions, or even promoting as true ideas that are not. Strategic and clever efforts to introduce ideas into training sets that later manifest themselves as facts could aid and abet fraud, libel, slander, or the creation of “truths,” the belief in which promotes the interests of particular individuals or groups. Imagine, for example, a first investor who buys grapefruit futures, who then seeds training sets with the idea that grapefruits will become the new gold, with the result that later prospective investors who consult A.I. models for investment advice are informed that they should invest in grapefruit, enriching the first investor. Or, consider a malevolent political movement that hopes to rehabilitate the reputation of an abhorrent leader; if done effectively, this movement could seed training sets with sympathetic information about this leader, resulting in positive portrayals of this leader in the future outputs of trained A.I. models.

This article adopts the cautious attitude necessitated by Holmes’ Bad Man, applying it to proactively stopping, or retroactively punishing and correcting, deliberate attempts to subvert the training sets of A.I. models. It offers legal approaches drawn from doctrines ranging from fraud, nuisance, libel, and slander, to misappropriation, privacy, and right of publicity. It balances these with protections for speech afforded by the First Amendment and other doctrines of free speech. The result is the first comprehensive attempt to prevent, respond to, and correct deliberate attempts to subvert training sets of A.I. models for malicious purposes.

Schrepel & Pentland on Competition between AI Foundation Models

Thibault Schrepel (VU Amsterdam; Stanford Codex; Sorbonne; Sciences Po) & Alex Pentland (MIT) have posted “Competition between AI Foundation Models: Dynamics and Policy Recommendations” (MIT Connection Science WP 1-2023) on SSRN. Here is the abstract:

Generative AI is set to become a critical technology for our modern economies. While we are currently experiencing strong, dynamic competition between the underlying foundation models, legal institutions have an important role to play in ensuring that the spring of foundation models does not turn into a winter with an ecosystem frozen by a handful of players.

Lemley, Henderson & Volokh on Freedom of Speech and AI Output

Mark A. Lemley (Stanford Law), Peter Henderson (same), and Eugene Volokh (UCLA Law) have posted “Freedom of Speech and AI Output” on SSRN. Here is the abstract:

Is the output of generative AI entitled to First Amendment protection? We’re inclined to say yes. Even though current AI programs are of course not people and do not themselves have constitutional rights, their speech may potentially be protected because of the rights of the programs’ creators. But beyond that, and likely more significantly, AI programs’ speech should be protected because of the rights of their users—both the users’ rights to listen and their rights to speak. In this short Article, we sketch the outlines of this analysis.