Andrew D. Selbst (UCLA Law) has posted “Artificial Intelligence and the Discrimination Injury” (78 Florida Law Review __ (forthcoming 2026)) on SSRN. Here is the abstract:
For a decade, scholars have debated whether discrimination involving artificial intelligence (AI) can be captured by existing discrimination laws. This article argues that the challenge that artificial intelligence poses for discrimination law stems not from the specifics of any statute, but from the very conceptual framework of discrimination law. Discrimination today is a species of tort, concerned with rectifying individual injuries, rather than a law aimed at broadly improving social or economic equality. As a result, the doctrine centers blameworthiness and individualized notions of injury. But it is also a strange sort of tort that does not clearly define its injury. Defining the discrimination harm is difficult and contested. As a result, the doctrine skips over the injury question and treats a discrimination claim as a process question about whether a defendant acted properly in a single decisionmaking event. This tort-with-unclear-injury formulation effectively merges the questions of injury and liability: If a defendant did not act improperly, then no liability attaches because a discrimination event did not occur. Injury is tied to the single decision event and there is no room for recognizing discrimination injury without liability.
This formulation directly affects regulation of AI discrimination for two reasons: First, AI decisionmaking is distributed; it is a combination of software development, its configuration, and its application, all of which are completed at different times and usually by different parties. This means that the mental model of a single decision and decisionmaker breaks down in this context. Second, the process-based injury is fundamentally at odds with the existence of “discriminatory” technology as a concept. While we can easily conceive of discriminatory AI as a colloquial matter, if there is legally no discrimination event until the technology is used in an improper way, then the technology cannot be considered discriminatory until it is improperly used.
The analysis leads to two ultimate conclusions. First, while the applicability of disparate impact law to AI is unknown, as no court has addressed the question head-on, liability will depend in large part on the degree to which a court is willing to hold a decisionmaker (e.g., an employer, lender, or landlord) liable for using a discriminatory technology without adequate attention to the effects, for a failure to either comparison shop or fix the AI. Given the shape of the doctrine, the fact that the typical decisionmaker is not tech savvy, and the likelihood that they purchased the technology on the promise of it being non-discriminatory, whether a court would find such liability is an open question. Second, discrimination law cannot be used to create incentives or penalties for the people best able to address the problem of discriminatory AI—the developers themselves. The Article therefore argues for supplementing discrimination law with the application of a combination of consumer protection, product safety, and products liability—all legal doctrines meant to address the distribution of harmful products on the open market, and all better suited to directly addressing the products that create discriminatory harms.
