Poon et al. on Analysing Modern Slavery Statements (MSS) Using Large Language Models (LLMs)

Ser-Huang Poon (Alliance Manchester Business School) et al. have posted “Analysing Modern Slavery Statements (MSS) Using Large Language Models (LLMs)” (AI & Higher Education Handbook, Edward Elgar Publishing, 2025) on SSRN. Here is the abstract:

This chapter explores the application of Large Language Models (LLMs) to analyse Modern Slavery Statements (MSS). As businesses are increasingly required to produce MSS to comply with legislation, the volume of these documents presents challenges in monitoring and evaluation. LLMs offer a scalable solution for extracting, classifying, and assessing the content of MSS, enabling organisations and regulators to identify compliance issues and patterns of modern slavery risks. By leveraging LLMs, the analysis can go beyond simple keyword searches to understand the context and nuances within MSS, offering a more comprehensive insight into corporate responses to modern slavery. This chapter demonstrates the potential of LLMs to enhance transparency and accountability in corporate reporting on human rights issues, particularly in addressing modern slavery.

Ajunwa on A.I. and Captured Capital

Ifeoma Ajunwa (Emory U Law) has posted “A.I. and Captured Capital” (134 Yale L. J. Forum (2025)) on SSRN. Here is the abstract:

Increasingly, automated processes—under the catch-all term “artificial intelligence” (AI)—serve as “mechanical managers” in the workplace. They may manifest as productivity applications to spur workers to work faster or as deputized surveillants who monitor workers’ every move. Moving beyond surveillance capitalism, this Essay argues that, absent legal intervention, we are on a path toward a scientific approach to management that prioritizes efficiency and deploys AI technologies to maximize output through the collection and exploitation of workers’ captured capital. That is, the more benevolent tenets of scientific management, such as encouraging productivity or achieving mutual prosperity for employers and workers, no longer represent paramount goals for firms. Rather, emboldened by new AI capabilities, firms have set out to quantify or reduce all elements of workers’ experience to data. This data is valuable capital that (1) holds exchange value and (2) drives the automation of workplaces and the displacement of workers.

The contributions of this Essay are threefold. First, this Essay names and describes the sociolegal phenomenon of “captured capital”—that is, the coercive collection and use of worker data to facilitate workplace automation and ultimately worker displacement. Second, this Essay situates this phenomenon within an AI arms race in the workplace and analyzes it through the lens of law and political economy. Specifically, the Essay argues that the AI arms race has spurred the unchecked development and deployment of AI technologies and that a laissez-faire approach to globalization has encouraged the growth of a borderless labor market without adequate international labor protections, leaving workers vulnerable. Third, the Essay sets forth three potential legal avenues for redress: (1) treating the data gathered from workers as stake capital in the automation of their workplaces such that a portion of the gains from automation is rightfully returned to the worker; (2) creating a licensing regime for workers to license their data freely to firms; and (3) requiring firms that use workers’ data as part of their automation process to pay into a fund that will finance a guaranteed income. Finally, the Essay notes a role for the International Labor Organization to play in protecting workers in the AI revolution.

Kim on Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law

Pauline Kim (Wash. U. St. Louis Law) has posted “Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law” (Oxford Handbook of the Law of Work, Davidov, Langille & Lester eds., 2024) on SSRN. Here is the abstract:

Employers are increasingly relying on algorithms and AI to manage their workforces, using automated systems to recruit, screen, select, supervise, discipline, and even terminate employees. This chapter explores the effects of these systems on the rights of workers in standard work relationships, who are presumptively protected by labor laws. It examines how these new technological tools affect fundamental worker interests and how existing law applies, focusing on two particular concerns as examples—nondiscrimination and privacy. Although current law provides some protections, legal doctrine has largely developed with human managers in mind, and as a result, fails to fully apprehend the risks posed by algorithmic tools. Thus, while anti-discrimination law prohibits discrimination by workplace algorithms, the existing framework has a number of gaps and uncertainties when applied to these systems. Similarly, traditional protections for employee privacy are ill-equipped to address the sheer volume and granularity of worker data that can now be collected, and the ability of computational techniques to extract new insights and infer sensitive information from that data. More generally, the expansion of algorithmic management affects other fundamental worker interests because it tends to increase employer power vis-à-vis labor. This chapter concludes by briefly considering the role that data protection laws might play in addressing the risks of algorithmic management.

Wilkins on Artificial Intelligence in the Recruiting Process: Identifying Perceptions of Bias

Lorenza M. Wilkins (Columbia Southern University) has posted “Artificial Intelligence in the Recruiting Process: Identifying Perceptions of Bias” on SSRN. Here is the abstract:

This study examined the perceived level of bias among former and current employees, supervisors, and directors who work for companies that utilize artificial intelligence. Despite the evolving role of artificial intelligence in the recruitment process, only limited research has been conducted. The theory that reinforced this study is Adams’ Equity Theory. An Organizational Inclusive Behavior survey instrument was administered to former and present employees, supervisors, and directors in the Research Triangle Park area of North Carolina (N=21). Three open-ended survey questions were included and complemented the survey instrument’s reliability. The Shapiro-Wilk test (IBM SPSS 26) and in vivo coding, supported by Quirkos software, underpinned this qualitative, non-experimental design. Age, education, ethnicity, and organizational level (employee rank) revealed a modest relationship to perceptions of bias, awareness, trust, and transparency concerning the use of artificial intelligence in the recruiting process.

Sheard on Employment Discrimination by Algorithm

Natalie Sheard (La Trobe Law School) has posted “Employment Discrimination by Algorithm: Can Anyone be Held Accountable?” (University of New South Wales Law Journal, Vol. 45, No. 2, 2022 (Forthcoming)) on SSRN. Here is the abstract:

The use by employers of algorithmic systems to automate or assist with recruitment decisions (Algorithmic Hiring Systems (‘AHSs’)) is on the rise internationally and in Australia. High levels of unemployment and reduced job vacancies provide conditions for these systems to proliferate, particularly in retail and other low wage positions. While promising to remove subjectivity and human bias from the recruitment process, AHSs may in fact lock members of protected groups out of the job market by entrenching and perpetuating historical and systematic discrimination.

In Australia, AHSs are being developed and deployed by employers without effective legal oversight. Regulators are yet to undertake a thorough analysis of the legal issues and challenges posed by their use. Academic literature examining the ability of Australia’s anti-discrimination framework to protect against discrimination by an employer using an AHS is limited. Judicial guidance is yet to be provided as cases involving discriminatory algorithms have not come before the courts.

This article provides the first broad overview of whether, and to what extent, the direct and indirect discrimination provisions of Australian anti-discrimination laws regulate the use by employers of discriminatory algorithms in the recruitment and hiring process. It considers three AHSs in use by employers in Australia: digital job advertisements, CV parsing and video interviewing systems. After analysing the mechanisms by which discrimination by AHS may occur, it critically evaluates four aspects of the law’s ability to protect against discrimination by an employer using an AHS. First, it examines the re-emergence of blatant direct discrimination by digital job advertising tools. Second, it considers who, if anyone, is liable for automated discrimination, that is, where the discriminatory decision is made by an algorithmic model in an AHS and not a natural person. Third, it examines the law’s ability to regulate algorithmic discrimination on the basis of a personal feature, such as a person’s postcode, which is not itself protected by discrimination legislation but is highly correlated with protected attributes (known as ‘proxy discrimination’). Finally, it explores whether indirect discrimination provisions can provide redress for the disparate impact of an AHS.

This article concludes that the ability of Australian anti-discrimination laws to regulate AHSs and other emerging technologies which employ discriminatory algorithms is limited. These laws are long overdue for reform and new legislative provisions specifically tailored to the use by employers of algorithmic decision systems are needed.

Moore on AI Trainers in the Workplace

Phoebe V Moore (University of Leicester) has posted “AI Trainers: Who is the Smart Worker today?” on SSRN. Here is the abstract:

AI is often linked to automation and potential job losses, but it is more suitably described as an augmentation tool for data collection and usage, rather than a stand-alone entity, or in ways that avoid precise definitions. AI machines and systems are seen to demonstrate competences which are increasingly similar to human decision-making and prediction. AI-augmented tools and applications are intended to improve human resources and allow more sophisticated tracking of productivity, attendance and even health data for workers. These tools are often seen to perform much faster and more accurately than humans. What does this mean for workers of the future, however?

If AI does actually become as prevalent and as significant as predictions would have it – and we really do make ourselves the direct mirror reflection of machines, and/or simply resources for fuelling them through the production of datasets via our own supposed intelligence of, e.g., image recognition – then we will have a very real set of problems on our hands. Potentially, workers will only be necessary for machinic maintenance or, as discussed in this chapter, as ‘AI trainers’. How can we prepare ourselves to work with smart machines, and thus to become ourselves, ‘smart workers’?

Shackleton on Robocalypse Now? Why we Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence

J. R. Shackleton (Institute of Economic Affairs (IEA), Westminster Business School, University of Buckingham) has posted “Robocalypse Now? Why we Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence” (Institute of Economic Affairs Current Controversies No. 61) on SSRN. Here is the abstract:

It is claimed that robots, algorithms and artificial intelligence are going to destroy jobs on an unprecedented scale. These developments, unlike past bouts of technical change, threaten rapidly to affect even highly-skilled work and lead to mass unemployment and/or dramatic falls in wages and living standards, while accentuating inequality. As a result, we are threatened with the ‘end of work’, and should introduce radical new policies such as a robot tax and a universal basic income. However, the claims being made of massive job loss are based on highly contentious technological assumptions and are contested by economists who point to flaws in the methodology. In any case, ‘technological determinism’ ignores the engineering, economic, social and regulatory barriers to adoption of many theoretically possible innovations. And even successful innovations are likely to take longer to materialise than optimists hope and pessimists fear. Moreover, history strongly suggests that jobs destroyed by technical change will be replaced by new jobs complementary to these technologies – or else in unrelated areas as spending power is released by falling prices. Current evidence on new types of job opportunity supports this suggestion.

The UK labour market is currently in a healthy state and there is little evidence that technology is having a strongly negative effect on total employment. The problem at the moment may be a shortage of key types of labour rather than a shortage of work. The proposal for a robot tax is ill-judged. Defining what is a robot is next to impossible, and concerns over slow productivity growth anyway suggest we should be investing more in automation rather than less. Even if a workable robot tax could be devised, it would essentially duplicate the effects, and problems, of corporation tax.

Universal basic income is a concept with a long history. Despite its appeal, it would be costly to introduce, could have negative effects on work incentives, and would give governments dangerous powers. Politicians already seem tempted to move in the direction of these untested policies. They would be foolish to do so. If technological change were to create major problems in the future, there are less problematic policies available to mitigate its effects – such as reducing taxes on employment income, or substantially deregulating the labour market.