Selbst on An Institutional View of Algorithmic Impact Assessments

Andrew D. Selbst (UCLA School of Law) has posted “An Institutional View of Algorithmic Impact Assessments” (35 Harvard Journal of Law & Technology (Forthcoming)) on SSRN. Here is the abstract:

Scholars and advocates have proposed algorithmic impact assessments (AIAs) as a regulatory strategy for addressing and correcting algorithmic harms. An AIA-based regulatory framework would require the creator of an algorithmic system to assess its potential socially harmful impacts before implementation and create documentation that can later be used for accountability and future policy development. In practice, an impact assessment framework relies on the expertise and information that only the creators of the project have access to. It is therefore inevitable that technology firms will have a degree of practical discretion in the assessment, and willing cooperation from firms is necessary to make the regulation work. But a regime that relies on good-faith partnership from the private sector also has strong potential to be undermined by the incentives and institutional logics of the private sector. This Article argues that for AIA regulation to be effective, it must anticipate the ways that such a regulation will be filtered through the private sector institutional environment.

This Article combines insights from governance, organizational theory, and computer science to analyze how future AIA regulations will be implemented on the ground. Institutional logics, such as liability avoidance and the profit motive, will render the first goal—early consideration of social impacts—difficult in the short term. But AIAs can still be beneficial. The second goal—documentation to support future policy learning—does not require full compliance to be successful, and over time, there is reason to believe that AIAs can be part of a broader cultural shift toward accountability within the technical industry. This will lead to greater buy-in and less need for enforcement of documentation requirements.

Given the challenges and reliance on participation, AIAs must have synergy with how the field works rather than be in tension with it. For this reason, the Article argues that it is also crucial that regulators understand the technical industry itself, including the technology, the organizational culture, and emerging documentation standards. This Article demonstrates how emerging research within the field of algorithmic accountability can also inform the shape of AIA regulation. By looking at the different stages of development and so-called “pause points,” regulators can know at which points firms can export information. Looking at AI ethics research can show what social impacts the field thinks are important and where it might miss issues that policymakers care about. Overall, understanding the industry can make the AIA documentation requirements themselves more legible to technology firms, easing the path for a future AIA mandate to be successful on the ground.

Chambers Goodman on Just Applications of AI in Education

Christine Chambers Goodman (Pepperdine University – School of Law) has posted “Just-AIed: An Essay on Just Applications of Artificial Intelligence in Education” (West Virginia Law Review, Vol. 123, No. 3, 2021) on SSRN. Here is the abstract:

This Essay addresses the intersections between AI and justice in the context of K-12 education. Part II provides some foundational context, defining the scope of AI and some aspects of justice that especially apply to educational access and opportunity. The next part sketches an answer to the question: What should we do to promote AI for justice? Part IV considers the potential pitfalls of augmenting AI in K-12 education. The final part makes a bold call to action.

Richardson & Kak on Suspect Development Systems: Databasing Marginality and Enforcing Discipline

Rashida Richardson (Northeastern University School of Law) and Amba Kak (New York University) have posted “Suspect Development Systems: Databasing Marginality and Enforcing Discipline” (University of Michigan Journal of Law Reform, Vol. 55, Forthcoming) on SSRN. Here is the abstract:

Algorithmic accountability law — focused on the regulation of data-driven systems like artificial intelligence (AI) or automated decision-making (ADM) tools — is the subject of lively policy debates, heated advocacy, and mainstream media attention. Concerns have moved beyond data protection and individual due process to encompass a broader range of group-level harms, such as discrimination and modes of democratic participation. While this is a welcome and long overdue shift, this discourse has ignored systems, like databases, that are viewed as technically ‘rudimentary’ and are often siloed from regulatory scrutiny and public attention. Additionally, burgeoning regulatory proposals like algorithmic impact assessments are not structured to surface important yet often overlooked social, organizational, and political economy contexts that are critical to evaluating the practical functions and outcomes of technological systems.

This article presents a new categorical lens and analytical framework that aims to address and overcome these limitations. “Suspect Development Systems” (SDS) refers to: (1) information technologies used by government and private actors, (2) to manage vague or often immeasurable social risk based on presumed or real social conditions (e.g., violence, corruption, substance abuse), (3) that subject targeted individuals or groups to greater suspicion, differential treatment, and more punitive and exclusionary outcomes. This frame includes some of the most recent and egregious examples of data-driven tools (such as predictive policing or risk assessments), but critically, it is also inclusive of a broader range of database systems that are currently at the margins of technology policy discourse. By examining the use of various criminal intelligence databases in India, the United Kingdom, and the United States, we developed a framework of five categories of features (technical, legal, political economy, organizational, and social) that together and separately influence how these technologies function in practice, the ways they are used, and the outcomes they produce. We then apply this analytical framework to welfare system databases, universal or ID number databases, and citizenship databases to demonstrate the value of this framework in both identifying and evaluating emergent or under-examined technologies in other sensitive social domains.

Suspect Development Systems is an intervention in legal scholarship and practice, as it provides a much-needed definitional and analytical framework for understanding an ever-evolving ecosystem of technologies embedded and employed in modern governance. Our analysis also helps redirect attention toward important yet often under-examined contexts, conditions, and consequences that are pertinent to the development of meaningful legislative or regulatory interventions in the field of algorithmic accountability. The cross-jurisdictional evidence put forth in this Article illuminates the value of examining commonalities between the Global North and South to inform our understanding of how seemingly disparate technologies and contexts are in fact coaxial, which is the basis for building more global solidarity.

Sheard on Employment Discrimination by Algorithm

Natalie Sheard (La Trobe Law School) has posted “Employment Discrimination by Algorithm: Can Anyone be Held Accountable?” (University of New South Wales Law Journal, Vol. 45, No. 2, 2022 (Forthcoming)) on SSRN. Here is the abstract:

The use by employers of algorithmic systems to automate or assist with recruitment decisions (Algorithmic Hiring Systems (‘AHSs’)) is on the rise internationally and in Australia. High levels of unemployment and reduced job vacancies provide conditions for these systems to proliferate, particularly in retail and other low-wage positions. While promising to remove subjectivity and human bias from the recruitment process, AHSs may in fact lock members of protected groups out of the job market by entrenching and perpetuating historical and systematic discrimination.

In Australia, AHSs are being developed and deployed by employers without effective legal oversight. Regulators are yet to undertake a thorough analysis of the legal issues and challenges posed by their use. Academic literature examining the ability of Australia’s anti-discrimination framework to protect against discrimination by an employer using an AHS is limited. Judicial guidance is yet to be provided as cases involving discriminatory algorithms have not come before the courts.

This article provides the first broad overview of whether, and to what extent, the direct and indirect discrimination provisions of Australian anti-discrimination laws regulate the use by employers of discriminatory algorithms in the recruitment and hiring process. It considers three AHSs in use by employers in Australia: digital job advertisements, CV parsing and video interviewing systems. After analysing the mechanisms by which discrimination by AHS may occur, it critically evaluates four aspects of the law’s ability to protect against discrimination by an employer using an AHS. First, it examines the re-emergence of blatant direct discrimination by digital job advertising tools. Second, it considers who, if anyone, is liable for automated discrimination, that is, where the discriminatory decision is made by an algorithmic model in an AHS and not a natural person. Third, it examines the law’s ability to regulate algorithmic discrimination on the basis of a personal feature, such as a person’s postcode, which is not itself protected by discrimination legislation but is highly correlated with protected attributes (known as ‘proxy discrimination’). Finally, it explores whether indirect discrimination provisions can provide redress for the disparate impact of an AHS.

This article concludes that the ability of Australian anti-discrimination laws to regulate AHSs and other emerging technologies which employ discriminatory algorithms is limited. These laws are long overdue for reform and new legislative provisions specifically tailored to the use by employers of algorithmic decision systems are needed.