Selbst on Artificial Intelligence and the Discrimination Injury

Andrew D. Selbst (UCLA Law) has posted “Artificial Intelligence and the Discrimination Injury” (78 Florida Law Review __ (forthcoming 2026)) on SSRN. Here is the abstract:

For a decade, scholars have debated whether discrimination involving artificial intelligence (AI) can be captured by existing discrimination laws. This article argues that the challenge that artificial intelligence poses for discrimination law stems not from the specifics of any statute, but from the very conceptual framework of discrimination law. Discrimination today is a species of tort, concerned with rectifying individual injuries, rather than a law aimed at broadly improving social or economic equality. As a result, the doctrine centers blameworthiness and individualized notions of injury. But it is also a strange sort of tort that does not clearly define its injury. Defining the discrimination harm is difficult and contested. As a result, the doctrine skips over the injury question and treats a discrimination claim as a process question about whether a defendant acted properly in a single decisionmaking event. This tort-with-unclear-injury formulation effectively merges the questions of injury and liability: If a defendant did not act improperly, then no liability attaches because a discrimination event did not occur. Injury is tied to the single decision event and there is no room for recognizing discrimination injury without liability.

This formulation directly affects regulation of AI discrimination for two reasons: First, AI decisionmaking is distributed; it is a combination of software development, its configuration, and its application, all of which are completed at different times and usually by different parties. This means that the mental model of a single decision and decisionmaker breaks down in this context. Second, the process-based injury is fundamentally at odds with the existence of “discriminatory” technology as a concept. While we can easily conceive of discriminatory AI as a colloquial matter, if there is legally no discrimination event until the technology is used in an improper way, then the technology cannot be considered discriminatory until it is improperly used.

The analysis leads to two ultimate conclusions. First, while the applicability of disparate impact law to AI is unknown, as no court has addressed the question head-on, liability will depend in large part on the degree to which a court is willing to hold a decisionmaker (e.g., an employer, lender, or landlord) liable for using a discriminatory technology without adequate attention to its effects, that is, for a failure to either comparison shop or fix the AI. Given the shape of the doctrine, the fact that the typical decisionmaker is not tech savvy, and the likelihood that the decisionmaker purchased the technology on the promise that it was nondiscriminatory, whether a court would find such liability is an open question. Second, discrimination law cannot be used to create incentives or penalties for the people best able to address the problem of discriminatory AI—the developers themselves. The Article therefore argues for supplementing discrimination law with a combination of consumer protection, product safety, and products liability—all legal doctrines meant to address the distribution of harmful products on the open market, and all better suited to directly addressing the products that create discriminatory harms.

Cofone & Khern-am-nuai on The Overstated Cost of AI Fairness in Criminal Justice

Ignacio Cofone (Oxford U Law) and Warut Khern-am-nuai (McGill U Desautels Management) have posted “The Overstated Cost of AI Fairness in Criminal Justice” (Indiana Law Journal (forthcoming 2025)) on SSRN. Here is the abstract:

The dominant critique of algorithmic fairness in AI decision-making, particularly in criminal justice, is that increasing fairness reduces the accuracy of predictions, thereby imposing a cost on society. This article challenges that assumption by empirically analyzing the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, a widely used and widely discussed risk assessment tool in the U.S. criminal justice system.

This Essay makes two contributions. First, it demonstrates that widely used AI models do more than replicate existing biases—they exacerbate them. Using causal inference methods, we show that racial bias not only is present in the COMPAS training dataset but is also worsened by AI models such as COMPAS. This finding has implications for legal scholarship and policymaking, as it (a) challenges the assumption that AI can offer an objective or neutral improvement over human decision-making and (b) provides counterevidence to the idea that AI merely mirrors pre-existing human biases.

Second, this Essay reframes the debate over the cost of fairness in algorithmic decision-making for criminal justice. It shows that applying fairness constraints does not necessarily lead to a loss in predictive accuracy regarding recidivism. AI systems operationalize concepts such as risk by making implicit and often flawed normative choices about what to predict and how to predict it. The claim that fair AI models decrease accuracy assumes that the model’s prediction is an optimal baseline. Fairness constraints, rather, can correct distortions introduced by biased outcome variables—which magnify systemic racial disparities in rearrest data rather than reflect actual risk. In some cases, interventions can introduce algorithmic fairness without imposing the cost often presumed in policy discussions.
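
To make the biased-outcome-variable point concrete, here is a minimal Python simulation in the spirit of the Essay’s argument. Everything in it is an illustrative assumption (the synthetic data, the 25% over-observation rate for one group, and dropping the group feature as a stand-in for a fairness constraint); it is not the authors’ causal-inference analysis of COMPAS data.

# Hypothetical sketch: when training labels (e.g., rearrest) are biased,
# a "fairness-constrained" model can match or beat an unconstrained one
# once both are scored against the *true* outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
risk = rng.normal(0, 1, n)             # latent true risk, same distribution in both groups
true_reoffend = (risk + rng.normal(0, 1, n) > 0).astype(int)

# Biased proxy label: group B's reoffending is over-observed (assumed here
# to stand in for heavier policing inflating rearrest records).
observed = true_reoffend.copy()
observed[(group == 1) & (rng.random(n) < 0.25)] = 1

X = np.column_stack([risk, group])
unconstrained = LogisticRegression().fit(X, observed)       # uses group membership
constrained = LogisticRegression().fit(X[:, :1], observed)  # crude "constraint": drop group

# Evaluate both against the true outcome, not the biased label.
print("unconstrained:", (unconstrained.predict(X) == true_reoffend).mean())
print("constrained:  ", (constrained.predict(X[:, :1]) == true_reoffend).mean())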

These findings are consequential beyond criminal justice. Similar dynamics exist in AI-driven decision-making in lending, hiring, and housing, where biased outcome variables reinforce systemic inequalities beyond the choices of proxies. By providing empirical evidence that fairness constraints can improve rather than undermine decision-making, this article advances the conversation on how law and policy should approach AI bias, particularly when algorithmic decisions affect fundamental rights.

Fagan on Reducing Proxy Discrimination

Frank Fagan (South Texas College of Law Houston) has posted “Reducing Proxy Discrimination” (Journal of Law & Technology at Texas (forthcoming 2025)) on SSRN. Here is the abstract:

Law protects people from discrimination. Algorithms, however, can easily circumvent the appearance of discrimination through the artful use of proxy variables. For instance, a lending algorithm may appear to satisfy a legal standard by ignoring race, but the same algorithm might deny loan applicants on the basis of having attended a particular high school, a variable that may closely correlate with race. An algorithm that assesses work performance and recommends promotions may ignore sex, but the same algorithm might penalize employees who take, on average, more paternity leave, a variable that may closely correlate with sex. The abuse of proxies cuts across political views on affirmative action. For example, an admissions committee might technically ignore race consistent with recent changes to Equal Protection rules, but the same committee might consider variables that are highly correlated with race, such as zip code, high school, and parental income, in order to achieve a university’s diversity goals.

Today, there is no clear legal test for regulating the use of variables that proxy for race and other protected classes and classifications. This Article develops such a test. Decision tools that use proxies are narrowly tailored when they exhibit the weakest total proxy power. The test is necessarily comparative. Thus, if two algorithms predict loan repayment or university academic performance with identical accuracy rates, but one uses zip code and the other does not, then the second algorithm can be said to have deployed a more equitable means of achieving the same result as the first. Scenarios in which two algorithms produce comparable but non-identical results present a greater challenge. This Article suggests that lawmakers can develop caps on permissible proxy power over time, as courts and algorithm builders learn more about the power of variables. Finally, the Article considers who should bear the burden of producing less discriminatory alternatives and suggests that plaintiffs remain in the best position to keep defendants honest—so long as testing data is made available.
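
For readers who want the comparative test in operational form, here is a hypothetical Python sketch. The synthetic data and the proxy-power measure (how well a model’s own scores recover the protected attribute, rescaled from AUC) are illustrative assumptions, not Fagan’s formal definition of total proxy power; the two models come out with comparable, though not identical, accuracy, which is the harder scenario the abstract flags.

# Hypothetical sketch of the comparative test: among models of comparable
# accuracy, prefer the one whose scores carry less proxy power for the
# protected class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 10_000
race = rng.integers(0, 2, n)               # protected class (illustrative)
zipcode = race + rng.normal(0, 0.5, n)     # feature that proxies for race
income = rng.normal(0, 1, n)               # feature unrelated to race
repay = (income + 0.3 * zipcode + rng.normal(0, 1, n) > 0).astype(int)

X_a = np.column_stack([income, zipcode])   # model A uses the proxy
X_b = income.reshape(-1, 1)                # model B does not
model_a = LogisticRegression().fit(X_a, repay)
model_b = LogisticRegression().fit(X_b, repay)

def proxy_power(scores, protected):
    # AUC of the model's scores for predicting the protected class,
    # rescaled so 0 = no proxy power and 1 = perfect proxy.
    return abs(roc_auc_score(protected, scores) - 0.5) * 2

print("accuracy A:", (model_a.predict(X_a) == repay).mean())
print("accuracy B:", (model_b.predict(X_b) == repay).mean())
print("proxy power A:", proxy_power(model_a.predict_proba(X_a)[:, 1], race))
print("proxy power B:", proxy_power(model_b.predict_proba(X_b)[:, 1], race))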

Salib on Abolition by Algorithm

Peter Salib (U Houston Law) has posted “Abolition by Algorithm” (Michigan Law Review, forthcoming) on SSRN. Here is the abstract:

In one sense, America’s newest Abolitionist movement—advocating the elimination of policing and prison—has been a success. Following the 2020 Black Lives Matter protests, a small group of self-described radicals convinced a wide swath of ordinary liberals to accept a radical claim: Mere reforms cannot meaningfully reduce prison and policing’s serious harms. Only elimination can. In another sense, Abolitionists have failed to secure lasting policy change. The difficulty is crime. In 2021, following a nationwide uptick in homicides, liberal support for Abolitionist proposals collapsed. Despite being newly “abolition curious,” left-leaning voters consistently rejected concrete abolitionist policies. Faced with the difficult choice between reducing prison and policing and controlling serious crime, voters consistently chose the latter.

This Article presents a policy approach that could accomplish both goals simultaneously: “Algorithmic Abolitionism.” Under Algorithmic Abolitionism, powerful machine learning algorithms would allocate policing and incarceration. They would abolish both maximally, up to the point at which crime would otherwise begin to rise. Results would be dramatic. Using existing technology, Algorithmic Abolitionist policies could: eliminate at least 42% and as many as 86% of Terry stops; free between 40 and 80% of incarcerated persons; eradicate nearly all traffic stops; and remove police patrols from between 50 and 85% of city blocks. All without causing more crime.

Beyond these practical effects, Algorithmic Abolitionist thinking generates new and important normative insights in the debate over algorithmic discrimination. In short, in an Algorithmic Abolitionist world, traditional frameworks for understanding and measuring such discrimination fall apart. They sometimes rate Algorithmic Abolitionist policies as unfair, even when those policies massively reduce the number of people mistreated because of their race. And they rate other policies as fair, even when those policies would cause far more discriminatory harm. To overcome these problems, this Article introduces a new framework for understanding—and a new quantitative tool for measuring—algorithmic discrimination: “bias-impact.” It then explores the complex array of normative trade-offs that bias-impact analyses reveal. As the Article shows, bias-impact analysis will be vital not just in the criminal enforcement context, but in the wide range of settings—healthcare, finance, employment—where Algorithmic Abolitionist designs are possible.
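
The abstract does not define bias-impact, so the following Python sketch is only a guess at the flavor of the divergence it describes: a ratio-based disparity metric can rate a policy unfair even when that policy wrongly stops far fewer people of every race. All numbers and the harm-count measure below are invented for illustration and are not the Article’s actual framework.

# Invented numbers: people wrongly stopped (false positives), by group,
# in equal-sized populations of 10,000, under two policies.
status_quo  = {"group_a": 900, "group_b": 1000}   # near parity in rates
algorithmic = {"group_a": 50,  "group_b": 150}    # far fewer stops overall

def disparity_ratio(policy, pop=10_000):
    # Classic ratio-based metric: group B's false-positive rate over group A's.
    return (policy["group_b"] / pop) / (policy["group_a"] / pop)

def harm_reduction(before, after):
    # Harm-count comparison: how many fewer people in each group are wrongly stopped.
    return {g: before[g] - after[g] for g in before}

print(disparity_ratio(status_quo))    # ~1.11: looks "fair"
print(disparity_ratio(algorithmic))   # 3.0: looks "unfair"
print(harm_reduction(status_quo, algorithmic))  # 850 fewer wrongful stops in each group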

Minow on Equality, Equity, and Algorithms

Martha Minow (Harvard Law School) has posted “Equality, Equity, and Algorithms: Learning from Justice Rosalie Abella” (University of Toronto Law Journal) on SSRN. Here is the abstract:

In the United States, employers, schools, and governments that use race or other protected classifications can face a collision between two competing legal requirements: to avoid race-conscious decision making and to avoid decisions with racially disparate impacts. The growing use of machine learning and other predictive algorithmic tools has heightened this tension as employers and other actors use tools that make choices among contrasting definitions of equality and anti-discrimination; design algorithmic practices against explicit or implicit uses of certain personal characteristics associated with historic discrimination; and address inaccuracies and biases in the data and algorithmic practices. Justice Rosalie Abella’s approach to equality issues, highly influential in Canadian law, offers guidance by directing decision makers to (a) acknowledge and accommodate differences in people’s circumstances and identities; (b) resist attributing to personal choice the patterns and practices of society, including different starting points and opportunities; and (c) resist consideration of race or other group identities as justification when used to harm historically disadvantaged groups, but permit such consideration when intended to remedy historic exclusions or economic disadvantages.

Pisanelli on Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring

Elena Pisanelli (European University Institute) has posted “A New Turning Point for Women: Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring” on SSRN. Here is the abstract:

This paper studies whether firms’ adoption of AI has a causal effect on their probability of hiring female managers, using data on the 500 largest firms by revenue in Europe and the US and a staggered difference-in-differences approach. Despite the concerns the existing literature raises about AI fairness, I find that firms’ use of AI causes, on average, a relative increase of 40% in the hiring of female managers. This result is best explained by one specific type of AI, assessment software. I show that the use of such software is correlated with a reduction in firms being sued for gender discrimination in hiring.
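
For readers unfamiliar with the research design, here is a minimal two-way fixed-effects sketch of a staggered difference-in-differences estimate in Python. The dataframe, variable names, and effect size are hypothetical (a 0.12 effect on a 0.30 baseline, roughly the 40% relative increase the abstract reports), and applied work on staggered adoption would typically add heterogeneity-robust estimators (e.g., Callaway & Sant’Anna); this only shows the basic design the abstract invokes.

# Hypothetical staggered difference-in-differences with two-way fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for firm in range(100):
    adopt_year = rng.choice([2015, 2017, 2019, 9999])   # 9999 = never adopts AI
    for year in range(2012, 2022):
        post_ai = int(year >= adopt_year)
        # Outcome: share of newly hired managers who are women.
        fem_share = 0.30 + 0.12 * post_ai + rng.normal(0, 0.05)
        rows.append({"firm": firm, "year": year, "post_ai": post_ai, "fem_share": fem_share})
df = pd.DataFrame(rows)

# Firm and year fixed effects absorb time-invariant firm traits and common
# shocks; the post_ai coefficient estimates the adoption effect.
m = smf.ols("fem_share ~ post_ai + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(m.params["post_ai"], m.bse["post_ai"])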

Kim on AI and Inequality

Pauline Kim (Washington University in St. Louis – School of Law) has posted “AI and Inequality” (forthcoming in The Cambridge Handbook on Artificial Intelligence & the Law, Kristin Johnson & Carla Reyes, eds. (2022)) on SSRN. Here is the abstract:

This Chapter examines the social consequences of artificial intelligence (AI) when it is used to make predictions about people in contexts like employment, housing and criminal law enforcement. Observers have noted the potential for erroneous or arbitrary decisions about individuals; however, the growing use of predictive AI also threatens broader social harms. In particular, these technologies risk increasing inequality by reproducing or exacerbating the marginalization of historically disadvantaged groups, and by reinforcing power hierarchies that contribute to economic inequality. Using the employment context as the primary example, this Chapter explains how AI-powered tools that are used to recruit, hire and promote workers can reflect race and gender biases, reproducing past patterns of discrimination and exclusion. It then explores how these tools also threaten to worsen class inequality because the choices made in building the models tend to reinforce the existing power hierarchy. This dynamic is visible in two distinct trends. First, firms are severing the employment relationship altogether, relying on AI to maintain control over workers and the value created by their labor without incurring the legal obligations owed to employees. And second, employers are using AI tools to increase scrutiny of and control over employees within the firm. Well-established law prohibiting discrimination provides some leverage for addressing biased algorithms, although uncertainty remains over precisely how these doctrines will be applied. At the same time, U.S. law is far less concerned with power imbalances, and thus, more limited in responding to the risk that predictive AI will contribute to economic inequality. Workers currently have little voice in how algorithmic management tools are used and firms face few constraints on further increasing their control. Addressing concerns about growing inequality will require broad legal reforms that clarify how anti-discrimination norms apply to predictive AI and strengthen employee voice in the workplace.

Fisher & Streinz on Confronting Data Inequality

Angelina Fisher (NYU School of Law – Guarini Global Law & Tech) & Thomas Streinz (NYU School of Law – Guarini Global Law & Tech) have posted “Confronting Data Inequality” on SSRN. Here is the abstract:

Data conveys significant social, economic, and political power. Unequal control over data — a pervasive form of digital inequality — is a problem for economic development, human agency, and collective self-determination that needs to be addressed. This paper takes some steps in this direction by analyzing the extent to which law facilitates unequal control over data and by suggesting ways in which legal interventions might lead to more equal control over data. By unequal control over data, we not only mean having or not having data, but also having or not having power over deciding what becomes and what does not become data. We call this the power to datafy. We argue that data inequality is in turn a function of unequal control over the infrastructures that generate, shape, process, store, transfer, and use data. Existing law often regulates data as an object to be transferred, protected, and shared and is not always attuned to the salience of infrastructural control over data. While there are no easy solutions to the variegated causes and consequences of data inequality, we suggest that retaining flexibility to experiment with different approaches, reclaiming infrastructural control, systematically demanding enhanced transparency, pooling data and bargaining power, and creating differentiated and conditional data-access mechanisms may help in confronting data inequality more effectively going forward.

Hellman on Personal Responsibility in an Unjust World

Deborah Hellman (University of Virginia School of Law) has posted “Personal Responsibility in an Unjust World: A Reply to Eidelson” (The American Journal of Law and Equality (forthcoming)) on SSRN. Here is the abstract:

In this reply to Benjamin Eidelson’s Patterned Inequality, Compounding Injustice and Algorithmic Prediction, I argue that moral unease about algorithmic prediction is not fully explained by the importance of dismantling what Eidelson terms “patterned inequality.” Eidelson is surely correct that patterns of inequality that track socially salient traits like race are harmful and that this harm provides an important reason not to entrench these structures of disadvantage. We disagree, however, about whether this account fully explains the moral unease about algorithmic prediction. In his piece, Eidelson challenges my claim that individual actors also have reason to avoid compounding prior injustice. In this reply, I answer his challenges.

Okidegbe on The Democratizing Potential of Algorithms?

Ngozi Okidegbe (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “The Democratizing Potential of Algorithms?” (Connecticut Law Review, forthcoming 2021) on SSRN. Here is the abstract:

Jurisdictions are increasingly embracing the use of pretrial risk assessment algorithms as a solution to the problem of mass pretrial incarceration. Conversations about the use of pretrial algorithms in legal scholarship have tended to focus on their opacity, determinativeness, reliability, validity, or their (in)ability to reduce high rates of incarceration as well as racial and socioeconomic disparities within the pretrial system. This Article breaks from this tendency, examining these algorithms from a democratization of criminal law perspective. Using this framework, it points out that currently employed algorithms are exclusionary of the viewpoints and values of the racially marginalized communities most impacted by their usage, since these algorithms are often procured, adopted, constructed, and overseen without input from these communities.

This state of affairs should caution enthusiasm for the transformative potential of pretrial algorithms, since they reinforce and entrench the democratic exclusion that members of these communities already experience in the creation and implementation of the laws and policies shaping pretrial practices. This democratic exclusion, alongside social marginalization, contributes to the difficulties that these communities face in contesting and resisting the political, social, and economic costs that pretrial incarceration has imposed and continues to impose on them. Ultimately, this Article stresses that resolving this democratic exclusion and its racially stratifying effects might be possible but requires shifting power over pretrial algorithms toward these communities. Unfortunately, the actualization of this prescription may be irreconcilable with the aims sought by algorithm reformers, revealing a deep tension between the algorithm project and racial justice efforts.