Pisanelli on Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring

Elena Pisanelli (European University Institute) has posted “A New Turning Point for Women: Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring” on SSRN. Here is the abstract:

This paper studies whether firms’ adoption of AI has a causal effect on their probability of hiring female managers, using data on the 500 largest firms by revenues in Europe and the US, and a staggered difference-in-differences approach. Despite the concerns about AI fairness raised in the existing literature, I find that firms’ use of AI causes, on average, a relative increase of 40% in the hiring of female managers. This result is best explained by one specific type of AI, assessment software. I show that the use of such software is correlated with a reduction in firms being sued for gender discrimination in hiring.

Kim on AI and Inequality

Pauline Kim (Washington University in St. Louis – School of Law) has posted “AI and Inequality” (Forthcoming in The Cambridge Handbook on Artificial Intelligence & the Law, Kristin Johnson & Carla Reyes, eds. (2022)) on SSRN. Here is the abstract:

This Chapter examines the social consequences of artificial intelligence (AI) when it is used to make predictions about people in contexts like employment, housing and criminal law enforcement. Observers have noted the potential for erroneous or arbitrary decisions about individuals; however, the growing use of predictive AI also threatens broader social harms. In particular, these technologies risk increasing inequality by reproducing or exacerbating the marginalization of historically disadvantaged groups, and by reinforcing power hierarchies that contribute to economic inequality. Using the employment context as the primary example, this Chapter explains how AI-powered tools that are used to recruit, hire and promote workers can reflect race and gender biases, reproducing past patterns of discrimination and exclusion. It then explores how these tools also threaten to worsen class inequality because the choices made in building the models tend to reinforce the existing power hierarchy. This dynamic is visible in two distinct trends. First, firms are severing the employment relationship altogether, relying on AI to maintain control over workers and the value created by their labor without incurring the legal obligations owed to employees. And second, employers are using AI tools to increase scrutiny of and control over employees within the firm. Well-established law prohibiting discrimination provides some leverage for addressing biased algorithms, although uncertainty remains over precisely how these doctrines will be applied. At the same time, U.S. law is far less concerned with power imbalances, and thus, more limited in responding to the risk that predictive AI will contribute to economic inequality. Workers currently have little voice in how algorithmic management tools are used and firms face few constraints on further increasing their control. Addressing concerns about growing inequality will require broad legal reforms that clarify how anti-discrimination norms apply to predictive AI and strengthen employee voice in the workplace.

Fisher & Streinz on Confronting Data Inequality

Angelina Fisher (NYU School of Law – Guarini Global Law & Tech) & Thomas Streinz (NYU School of Law – Guarini Global Law & Tech) have posted “Confronting Data Inequality” on SSRN. Here is the abstract:

Data conveys significant social, economic, and political power. Unequal control over data — a pervasive form of digital inequality — is a problem for economic development, human agency, and collective self-determination that needs to be addressed. This paper takes some steps in this direction by analyzing the extent to which law facilitates unequal control over data and by suggesting ways in which legal interventions might lead to more equal control over data. By unequal control over data, we mean not only having or not having data, but also having or not having power over deciding what becomes and what does not become data. We call this the power to datafy. We argue that data inequality is in turn a function of unequal control over the infrastructures that generate, shape, process, store, transfer, and use data. Existing law often regulates data as an object to be transferred, protected, and shared and is not always attuned to the salience of infrastructural control over data. While there are no easy solutions to the variegated causes and consequences of data inequality, we suggest that retaining flexibility to experiment with different approaches, reclaiming infrastructural control, systematically demanding enhanced transparency, pooling data and bargaining power, and creating differentiated and conditional data access mechanisms may help in confronting data inequality more effectively going forward.

Hellman on Personal Responsibility in an Unjust World

Deborah Hellman (University of Virginia School of Law) has posted “Personal Responsibility in an Unjust World: A Reply to Eidelson” (The American Journal of Law and Equality (Forthcoming)) on SSRN. Here is the abstract:

In this reply to Benjamin Eidelson’s “Patterned Inequality, Compounding Injustice, and Algorithmic Prediction,” I argue that moral unease about algorithmic prediction is not fully explained by the importance of dismantling what Eidelson terms “patterned inequality.” Eidelson is surely correct that patterns of inequality that track socially salient traits like race are harmful and that this harm provides an important reason not to entrench these structures of disadvantage. We disagree, however, about whether this account fully explains the moral unease about algorithmic prediction. In his piece, Eidelson challenges my claim that individual actors also have reason to avoid compounding prior injustice. In this reply, I answer his challenges.

Okidegbe on The Democratizing Potential Of Algorithms?

Ngozi Okidegbe (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “The Democratizing Potential Of Algorithms?” (Connecticut Law Review, Forthcoming 2021) on SSRN. Here is the abstract:

Jurisdictions are increasingly embracing the use of pretrial risk assessment algorithms as a solution to the problem of mass pretrial incarceration. Conversations about the use of pretrial algorithms in legal scholarship have tended to focus on their opacity, determinativeness, reliability, validity, or their (in)ability to reduce high rates of incarceration as well as racial and socioeconomic disparities within the pretrial system. This Article breaks from this tendency, examining these algorithms from a democratization of criminal law perspective. Using this framework, it points out that currently employed algorithms are exclusionary of the viewpoints and values of the racially marginalized communities most impacted by their usage, since these algorithms are often procured, adopted, constructed, and overseen without input from these communities.

This state of affairs should temper enthusiasm for the transformative potential of pretrial algorithms, since they reinforce and entrench the democratic exclusion that members of these communities already experience in the creation and implementation of the laws and policies shaping pretrial practices. This democratic exclusion, alongside social marginalization, contributes to the difficulties that these communities face in contesting and resisting the political, social, and economic costs that pretrial incarceration has had and continues to have on them. Ultimately, this Article stresses that resolving this democratic exclusion and its racially stratifying effects might be possible but requires shifting power over pretrial algorithms toward these communities. Unfortunately, the actualization of this prescription may be irreconcilable with the aims sought by algorithm reformers, revealing a deep tension between the algorithm project and racial justice efforts.

Okidegbe on Race, Algorithms and The Practice of Criminal Law

Ngozi Okidegbe (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “When They Hear Us: Race, Algorithms and The Practice of Criminal Law” (Kansas Journal of Law & Public Policy, Vol. 29, 2020) on SSRN. Here is the abstract:

We are in the midst of a fraught debate in criminal justice reform circles about the merits of using algorithms. Proponents claim that these algorithms offer an objective path towards substantially lowering high rates of incarceration and racial and socioeconomic disparities without endangering community safety. On the other hand, racial justice scholars argue that these algorithms threaten to entrench racial inequity within the system because they utilize risk factors that correlate with historic racial inequities, and in so doing, reproduce the same racial status quo, but under the guise of scientific objectivity.

This symposium keynote address discusses the challenge that the continued proliferation of algorithms poses to the pursuit of racial justice in the criminal justice system. I start from the viewpoint that racial justice scholars are correct about currently employed algorithms. However, I advocate that, as long as we have algorithms, we should consider whether they could be redesigned and repurposed to counteract racial inequity in the criminal law process. One way that algorithms might counteract inequity is if they were designed by the most impacted racially marginalized communities. Then, these algorithms might counterintuitively benefit these communities by endowing them with a democratic mechanism to contest the harms that the criminal justice system’s operation enacts on them.

Kelly-Lyth on Challenging Biased Hiring Algorithms

Aislinn Kelly-Lyth (Harvard Law School, University of Cambridge – Faculty of Law) has posted “Challenging Biased Hiring Algorithms” (Oxford Journal of Legal Studies (March 2021)) on SSRN. Here is the abstract:

Employers are increasingly using automated hiring systems to assess job applicants, with potentially discriminatory effects. This paper considers the effectiveness of EU-derived laws, which regulate the use of these algorithms in the UK. The paper finds that while EU data protection and equality laws already seek to balance the harms of biased hiring algorithms with the benefits of their use, enforcement of these laws in the UK is severely limited in practice. One significant problem is transparency, and this problem is likely to exist across the EU. The paper therefore recommends that data protection impact assessments, which must be carried out by all employers using automated hiring systems in the EU or UK, should be published in redacted form. Mandating, and in the short term incentivising, such publication would enable better enforcement of rights which already exist.

Reinbold on Choosing Equality over Technology

Patric Reinbold (University of Wisconsin – Madison, School of Law) has posted “Facing Discrimination: Choosing Equality over Technology” on SSRN. Here is the abstract:

On its face, facial recognition technology poses advantages in the form of efficiency and cost-savings in sectors of society such as law enforcement, education, employment, and healthcare. However, these advantages perpetuate indirect forms of discrimination through unequal access to the technology’s benefits and—more significantly—direct forms of discrimination such as falsely identifying Black, Indigenous, and People of Color as suspects of crimes disproportionately. Facial recognition technology offers several opportunities to inject bias into its performance: through biased algorithm design, recycling racial bias in the form of past law enforcement data, and through biased user applications.

The precautionary principle warns against regulating a technology before it is fully developed and implemented, but that caution is outweighed by the consequences of allowing this technology to go unregulated, namely its startling implications for racial discrimination in the United States. Therefore, this technology should be regulated before any further harm is done. This Comment analyzes the legislation proposed to regulate facial recognition technology by considering the longevity and breadth of the proposed regulations.