Richardson on Defining and Demystifying Automated Decision Systems

Rashida Richardson (Rutgers, The State University of New Jersey – Rutgers Law School, Northeastern University School of Law) has posted “Defining and Demystifying Automated Decision Systems” (Maryland Law Review, Forthcoming) on SSRN. Here is the abstract:

Government agencies are increasingly using automated decision systems to aid or supplant human decision-making and policy enforcement in various sensitive social domains. They determine who will have their food subsidies terminated, how many healthcare benefits a person is entitled to, and who is likely to be a victim of a crime. Yet, existing legislative and regulatory definitions fail to adequately describe or clarify how these technologies are used in practice and their impact on society. This failure to adequately describe and define “automated decision systems” allows such systems to evade the scrutiny that policymakers increasingly recognize is warranted, and it potentially impedes avenues for legal redress. Such oversights can have concrete consequences for individuals and communities, such as increased law enforcement harassment, deportation, denial of housing or employment opportunities, and death.

This article is the first in the law review literature to provide two clear and measured definitions of “automated decision systems” for legislative and regulatory purposes and to suggest how these definitions should be applied. The definitions and analytical framework offered in this article clarify automated decision systems as prominent modes of governance and social control that warrant greater public scrutiny and immediate regulation. The definitions foreground the social implications of these technologies in addition to capturing the multifarious functions they perform as they relate to rights, liberties, public safety, access, and opportunities. To demonstrate the significance and practicality of these definitions, I analyze and apply them to two modern use cases: teacher evaluation systems and gang databases. I then explore how policymakers should determine exemptions and evaluate two technologies routinely used in government: email filters and accounting software. This article provides a much-needed intervention in global public policy discourse and interdisciplinary scholarship regarding the regulation of emergent, data-driven technologies.

Narechania on Machine Learning as Natural Monopoly

Tejas N. Narechania (University of California, Berkeley, School of Law) has posted “Machine Learning as Natural Monopoly” (Iowa Law Review, Forthcoming) on SSRN. Here is the abstract:

Machine learning is transforming the economy, reshaping operations in communications, law enforcement, and medicine, among other sectors. But all is not well: It is now well-established that many machine-learning-based applications harvest vast amounts of personal information and yield results that are systematically biased. In response, policymakers have begun to offer a range of inchoate and often insufficient solutions, overlooking the possibility—suggested intuitively by scholars across disciplines—that these systems are natural monopolies, and thus neglecting the long legal tradition of natural monopoly regulation.

Drawing on the computer science, economics, and legal literatures, I find that machine-learning-based applications can be natural monopolies. Several features of machine learning suggest that this is so, including the fixed costs of developing these applications and the computational methods of optimizing these systems. This conclusion yields concrete policy implications: Where natural monopolies exist, public oversight and regulation are typically superior to market discipline through competition. Hence, where machine-learning-based applications are natural monopolies, this regulatory tradition offers one framework for confronting a range of issues—from privacy to accuracy and bias—that attend such systems. Just as prior natural monopolies—the railways, electric grids, and telephone networks—faced rate and service regulation to protect against extractive, anticompetitive, and undemocratic behaviors, so too might machine-learning-based applications face similar public regulation to limit intrusive data collection and protect against algorithmic redlining, among other harms.

Schwarcz on Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data

Daniel Schwarcz (University of Minnesota Law School) has posted “Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data” (Houston Journal of Health Law and Policy, 2021) on SSRN. Here is the abstract:

Insurers and employers often have financial incentives to discriminate against people who are relatively likely to experience future healthcare costs. Numerous federal and state laws nonetheless seek to restrict such health-based discrimination. Examples include the Pregnancy Discrimination Act (PDA), the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), and the Genetic Information Nondiscrimination Act (GINA). But this Essay argues that these laws are incapable of reliably preventing health-based discrimination when employers or insurers rely on machine-learning AIs to inform their decision-making. At bottom, this is because machine-learning AIs are inherently structured to identify and rely upon proxies for traits that directly predict whatever “target variable” they are programmed to maximize. Because the future health status of employees and insureds is in fact directly predictive of innumerable facially neutral goals for employers and insurers, respectively, machine-learning AIs will tend to produce results similar to those of intentional discrimination based on health-related factors. Although laws like the Affordable Care Act (ACA) can avoid this outcome by prohibiting all forms of discrimination that are not pre-approved, this approach is not broadly applicable. Complicating the issue even further, virtually all technical strategies for developing “fair algorithms” are unworkable when it comes to health-based proxy discrimination, because health information is generally private and hence cannot be used to correct unwanted biases. The Essay nonetheless closes by suggesting a new strategy for combating health-based proxy discrimination by AI: limiting firms’ capacity to program their AIs using target variables that have a strong possible link to health-related factors.

Recommended.