Pauline Kim (Washington University in St. Louis – School of Law) has posted “Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action” on SSRN. Here is the abstract:
Concerns that predictive algorithms may discriminate are growing, but mitigating or removing bias requires designers to be aware of protected characteristics and to take them into account. If they do so, however, will those efforts be considered a form of discrimination? Put concretely, if model builders take race into account to prevent racial bias against Blacks, have they then engaged in discrimination against Whites? Some scholars assume so, and seek to justify those practices as valid forms of affirmative action. This Article argues that they have started the analysis in the wrong place. Rather than assuming that disparate treatment has occurred, we should first ask whether race-aware strategies constitute discrimination at all. Despite rhetoric about colorblindness, some forms of race-consciousness are widely accepted as lawful. Because creating an algorithm is a complex, multi-step process involving many choices, tradeoffs, and judgment calls, there are many different ways a designer might take race into account, and not all of these strategies entail disparate treatment. Only if a particular strategy is found to be disparate treatment is it necessary to consider whether it is justifiable under affirmative action doctrine. This difference in approach matters, because affirmative action programs bear a heavy legal burden of justification. In addition, treating all race-aware algorithms as a form of disparate treatment reinforces the false notion that leveling the playing field for disadvantaged groups somehow disrupts the entitlements of a previously advantaged group. It also mistakenly suggests that, prior to considering race, algorithms are neutral processes that uncover some objective truth about merit or desert, rather than properly understanding them as human constructs that reflect the choices of their creators.