Bambauer, Zarsky & Mayer on Algorithmic Fairness Among Similar Individuals

Jane R. Bambauer (University of Arizona College of Law), Tal Zarsky (University of Haifa – Faculty of Law), and Jonathan Mayer (Princeton University) have posted “When a Small Change Makes a Big Difference: Algorithmic Fairness Among Similar Individuals” (UC Davis Law Review, Forthcoming) on SSRN. Here is the abstract:

If a machine learning algorithm treats two people very differently because of a slight difference in their attributes, the result intuitively seems unfair. Indeed, an aversion to this sort of treatment has already begun to affect regulatory practices in employment and lending. But an explanation, or even a definition, of the problem has not yet emerged. This Article explores how these situations—when a Small Change Makes a Big Difference (SCMBD)—interact with various theories of algorithmic fairness related to accuracy, bias, strategic behavior, proportionality, and explainability. When SCMBDs are associated with an algorithm’s inaccuracy, such as overfitted models, they should be removed (and routinely are). But outside those easy cases, when SCMBDs have, or seem to have, predictive validity, the ethics are more ambiguous. Various strands of fairness (like accuracy, equity, and proportionality) will pull in different directions. Thus, while SCMBDs should be detected and probed, deciding what to do about them will require humans to make difficult choices among social goals.