Natalie Sheard (La Trobe Law School) has posted “Employment Discrimination by Algorithm: Can Anyone be Held Accountable?” (University of New South Wales Law Journal, Vol. 45, No. 2, 2022 (forthcoming)). Here is the abstract:
The use by employers of algorithmic systems to automate or assist with recruitment decisions (Algorithmic Hiring Systems (‘AHSs’)) is on the rise internationally and in Australia. High levels of unemployment and reduced job vacancies provide conditions for these systems to proliferate, particularly in retail and other low-wage positions. While promising to remove subjectivity and human bias from the recruitment process, AHSs may in fact lock members of protected groups out of the job market by entrenching and perpetuating historical and systemic discrimination.
In Australia, AHSs are being developed and deployed by employers without effective legal oversight. Regulators are yet to undertake a thorough analysis of the legal issues and challenges posed by their use. Academic literature examining the ability of Australia’s anti-discrimination framework to protect against discrimination by an employer using an AHS is limited. Judicial guidance is yet to be provided as cases involving discriminatory algorithms have not come before the courts.
This article provides the first broad overview of whether, and to what extent, the direct and indirect discrimination provisions of Australian anti-discrimination laws regulate the use by employers of discriminatory algorithms in the recruitment and hiring process. It considers three AHSs in use by employers in Australia: digital job advertisements, CV parsing and video interviewing systems. After analysing the mechanisms by which discrimination by an AHS may occur, it critically evaluates four aspects of the law’s ability to protect against discrimination by an employer using an AHS. First, it examines the re-emergence of blatant direct discrimination by digital job advertising tools. Second, it considers who, if anyone, is liable for automated discrimination, that is, where the discriminatory decision is made by an algorithmic model in an AHS and not a natural person. Third, it examines the law’s ability to regulate algorithmic discrimination on the basis of a personal feature, such as a person’s postcode, which is not itself protected by discrimination legislation but is highly correlated with protected attributes (known as ‘proxy discrimination’). Finally, it explores whether indirect discrimination provisions can provide redress for the disparate impact of an AHS.
This article concludes that the ability of Australian anti-discrimination laws to regulate AHSs and other emerging technologies which employ discriminatory algorithms is limited. These laws are long overdue for reform and new legislative provisions specifically tailored to the use by employers of algorithmic decision systems are needed.