Wilkins on Artificial Intelligence in the Recruiting Process: Identifying Perceptions of Bias

Lorenza M. Wilkins (Columbia Southern University) has posted “Artificial Intelligence in the Recruiting Process: Identifying Perceptions of Bias” on SSRN. Here is the abstract:

This study examined perceived levels of bias among former and current employees, supervisors, and directors who work for companies that use artificial intelligence. Despite the evolving role of artificial intelligence in the recruitment process, only limited research has been conducted. The theory that reinforced this study is Adams’ Equity Theory. An Organizational Inclusive Behavior survey instrument was administered to former and present employees, supervisors, and directors in the Research Triangle Park area of North Carolina (N=21). Three open-ended survey questions were included and complemented the survey instrument’s reliability. The Shapiro-Wilk test (run in IBM SPSS 26) and in vivo coding (supported by Quirkos software) underpinned this qualitative, non-experimental design. Age, education, ethnicity, and organizational level (employee rank) revealed a modest relationship to perceptions of bias, awareness, trust, and transparency concerning the use of artificial intelligence in the recruiting process.
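For readers unfamiliar with the normality check the abstract mentions, here is a minimal sketch of a Shapiro-Wilk test, assuming synthetic Likert-style scores. The study itself used IBM SPSS 26, so the Python translation and every value below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical Likert-style bias-perception scores; only N=21 matches the abstract.
scores = rng.normal(loc=3.2, scale=0.9, size=21)

w_stat, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 is consistent with normally distributed responses,
# which informs the choice between parametric and non-parametric follow-up tests.
```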

Sheard on Employment Discrimination by Algorithm

Natalie Sheard (La Trobe Law School) has posted “Employment Discrimination by Algorithm: Can Anyone be Held Accountable?” (University of New South Wales Law Journal, Vol. 45, No. 2, 2022, Forthcoming). Here is the abstract:

The use by employers of algorithmic systems to automate or assist with recruitment decisions (Algorithmic Hiring Systems (‘AHSs’)) is on the rise internationally and in Australia. High levels of unemployment and reduced job vacancies provide conditions for these systems to proliferate, particularly in retail and other low-wage positions. While promising to remove subjectivity and human bias from the recruitment process, AHSs may in fact lock members of protected groups out of the job market by entrenching and perpetuating historical and systemic discrimination.

In Australia, AHSs are being developed and deployed by employers without effective legal oversight. Regulators are yet to undertake a thorough analysis of the legal issues and challenges posed by their use. Academic literature examining the ability of Australia’s anti-discrimination framework to protect against discrimination by an employer using an AHS is limited. Judicial guidance is yet to be provided as cases involving discriminatory algorithms have not come before the courts.

This article provides the first broad overview of whether, and to what extent, the direct and indirect discrimination provisions of Australian anti-discrimination laws regulate the use by employers of discriminatory algorithms in the recruitment and hiring process. It considers three AHSs in use by employers in Australia: digital job advertisements, CV parsing and video interviewing systems. After analysing the mechanisms by which discrimination by AHSs may occur, it critically evaluates four aspects of the law’s ability to protect against discrimination by an employer using an AHS. First, it examines the re-emergence of blatant direct discrimination by digital job advertising tools. Second, it considers who, if anyone, is liable for automated discrimination, that is, where the discriminatory decision is made by an algorithmic model in an AHS and not a natural person. Third, it examines the law’s ability to regulate algorithmic discrimination on the basis of a personal feature, such as a person’s postcode, which is not itself protected by discrimination legislation but is highly correlated with protected attributes (known as ‘proxy discrimination’). Finally, it explores whether indirect discrimination provisions can provide redress for the disparate impact of an AHS.
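To make the proxy-discrimination mechanism concrete, here is a minimal sketch on entirely synthetic data: a hiring model is trained without the protected attribute, yet a correlated postcode feature reproduces the historical disparity. Nothing below comes from the article; the variable names, correlation strength and bias level are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (1 = member of a protected group); never shown to the model.
group = rng.binomial(1, 0.3, size=n)

# Residential segregation: postcode region matches group membership 80% of the time.
postcode_region = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical hiring labels encode past bias: equal skill, lower hire rate for the group.
skill = rng.normal(size=n)
hired = (skill + 1.0 * (group == 0) + rng.normal(size=n) > 0.8).astype(int)

# Train on postcode and skill only; the protected attribute itself is excluded.
X = np.column_stack([postcode_region, skill])
model = LogisticRegression().fit(X, hired)

# The disparity survives: postcode acts as a proxy for the protected attribute.
predicted = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {predicted[group == g].mean():.2f}")
```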

This article concludes that the ability of Australian anti-discrimination laws to regulate AHSs and other emerging technologies which employ discriminatory algorithms is limited. These laws are long overdue for reform and new legislative provisions specifically tailored to the use by employers of algorithmic decision systems are needed.

Moore on AI Trainers in the Workplace

Phoebe V Moore (University of Leicester) has posted “AI Trainers: Who is the Smart Worker today?” on SSRN. Here is the abstract:

AI is often linked to automation and potential job losses, but it is more suitably described as an augmentation tool for data collection and usage than as a stand-alone entity, or in other ways that avoid precise definition. AI machines and systems are seen to demonstrate competences increasingly similar to human decision-making and prediction. AI-augmented tools and applications are intended to improve human resources and allow more sophisticated tracking of productivity, attendance and even health data for workers. These tools are often seen to perform much faster and more accurately than humans. What does this mean for the workers of the future, however?

If AI does become as prevalent and as significant as predictions would have it, and we really do make ourselves the direct mirror reflection of machines, or simply resources for fuelling them by producing datasets through our own supposedly intelligent work such as image recognition, then we will have a very real set of problems on our hands. Potentially, workers will only be necessary for machinic maintenance or, as discussed in this chapter, as ‘AI trainers’. How can we prepare ourselves to work with smart machines, and thus to become, ourselves, ‘smart workers’?

Shackleton on Robocalypse Now? Why we Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence

J. R. Shackleton (Institute of Economic Affairs (IEA), Westminster Business School, University of Buckingham) has posted “Robocalypse Now? Why we Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence” (Institute of Economic Affairs Current Controversies No. 61) on SSRN. Here is the abstract:

It is claimed that robots, algorithms and artificial intelligence are going to destroy jobs on an unprecedented scale. These developments, unlike past bouts of technical change, threaten rapidly to affect even highly skilled work and lead to mass unemployment and/or dramatic falls in wages and living standards, while accentuating inequality. As a result, we are threatened with the ‘end of work’, and should introduce radical new policies such as a robot tax and a universal basic income.

However, the claims being made of massive job loss are based on highly contentious technological assumptions and are contested by economists who point to flaws in the methodology. In any case, ‘technological determinism’ ignores the engineering, economic, social and regulatory barriers to the adoption of many theoretically possible innovations. And even successful innovations are likely to take longer to materialise than optimists hope and pessimists fear.

Moreover, history strongly suggests that jobs destroyed by technical change will be replaced by new jobs complementary to these technologies – or else in unrelated areas as spending power is released by falling prices. Current evidence on new types of job opportunity supports this suggestion. The UK labour market is currently in a healthy state and there is little evidence that technology is having a strongly negative effect on total employment. The problem at the moment may be a shortage of key types of labour rather than a shortage of work.

The proposal for a robot tax is ill-judged. Defining what counts as a robot is next to impossible, and concerns over slow productivity growth suggest in any case that we should be investing more in automation, not less. Even if a workable robot tax could be devised, it would essentially duplicate the effects, and problems, of corporation tax.

Universal basic income is a concept with a long history. Despite its appeal, it would be costly to introduce, could have negative effects on work incentives, and would give governments dangerous powers.

Politicians already seem tempted to move in the direction of these untested policies. They would be foolish to do so. If technological change were to create major problems in the future, there are less problematic policies available to mitigate its effects – such as reducing taxes on employment income, or substantially deregulating the labour market.