Colonna on Legal Implications of Using AI as an Exam Invigilator

Liane Colonna (Stockholm University – Faculty of Law) has posted “Legal Implications of Using AI as an Exam Invigilator” on SSRN. Here is the abstract:

This article considers the legal implications of using artificial intelligence (AI)-based remote proctoring to monitor online exams and, in particular, to validate students’ identities and to flag suspicious activities during the exam in order to discourage academic misconduct such as plagiarism, unauthorized collaboration, and the sharing of test questions or answers. The emphasis is on AI-based facial recognition technologies (FRT), which can be used both to authenticate remote users during the online exam process and to identify dubious behavior throughout the examination. The central question explored is whether these systems are necessary and lawful under European human rights law.

The first part of the paper explores the use of AI-based remote proctoring technologies in higher education from both the institutional and the student perspective. It emphasizes how universities are shifting from reliance on systems that include human oversight, such as proctors overseeing examinations from remote locations, towards more algorithmically driven practices that rely on processing biometric data. The second part of the paper examines how the use of AI-based remote proctoring technologies in higher education affects students’ fundamental rights, focusing on the rights to privacy, data protection, and non-discrimination. Next, it provides a brief overview of the legal frameworks that exist to limit the use of this technology. Finally, the paper closely examines the issue of the legality of processing in an effort to unpack and understand the complex legal and ethical issues that arise in this context.

Recommended.

Shackleton on Robocalypse Now? Why We Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence

J. R. Shackleton (Institute of Economic Affairs (IEA), Westminster Business School, University of Buckingham) has posted “Robocalypse Now? Why We Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence” (Institute of Economic Affairs Current Controversies No. 61) on SSRN. Here is the abstract:

It is claimed that robots, algorithms and artificial intelligence are going to destroy jobs on an unprecedented scale. These developments, unlike past bouts of technical change, threaten rapidly to affect even highly-skilled work and lead to mass unemployment and/or dramatic falls in wages and living standards, while accentuating inequality. As a result, we are threatened with the ‘end of work’, and should introduce radical new policies such as a robot tax and a universal basic income.

However, the claims of massive job loss are based on highly contentious technological assumptions and are contested by economists who point to flaws in the methodology. In any case, ‘technological determinism’ ignores the engineering, economic, social and regulatory barriers to adoption of many theoretically possible innovations. And even successful innovations are likely to take longer to materialise than optimists hope and pessimists fear. Moreover, history strongly suggests that jobs destroyed by technical change will be replaced by new jobs complementary to these technologies – or else in unrelated areas as spending power is released by falling prices. Current evidence on new types of job opportunity supports this suggestion. The UK labour market is currently in a healthy state and there is little evidence that technology is having a strongly negative effect on total employment. The problem at the moment may be a shortage of key types of labour rather than a shortage of work.

The proposal for a robot tax is ill-judged. Defining what counts as a robot is next to impossible, and concerns over slow productivity growth in any case suggest we should be investing more in automation rather than less. Even if a workable robot tax could be devised, it would essentially duplicate the effects, and problems, of corporation tax. Universal basic income is a concept with a long history. Despite its appeal, it would be costly to introduce, could have negative effects on work incentives, and would give governments dangerous powers.

Politicians already seem tempted to move in the direction of these untested policies. They would be foolish to do so. If technological change were to create major problems in the future, there are less problematic policies available to mitigate its effects – such as reducing taxes on employment income, or substantially deregulating the labour market.

Hellman on Personal Responsibility in an Unjust World

Deborah Hellman (University of Virginia School of Law) has posted “Personal Responsibility in an Unjust World: A Reply to Eidelson” (The American Journal of Law and Equality (Forthcoming)) on SSRN. Here is the abstract:

In this reply to Benjamin Eidelson’s “Patterned Inequality, Compounding Injustice, and Algorithmic Prediction,” I argue that moral unease about algorithmic prediction is not fully explained by the importance of dismantling what Eidelson terms “patterned inequality.” Eidelson is surely correct that patterns of inequality that track socially salient traits like race are harmful and that this harm provides an important reason not to entrench these structures of disadvantage. We disagree, however, about whether this account fully explains the moral unease about algorithmic prediction. In his piece, Eidelson challenges my claim that individual actors also have reason to avoid compounding prior injustice. In this reply, I answer his challenges.

Ranchordas on Experimental Regulations for AI

Sofia Ranchordas (University of Groningen, Faculty of Law; Yale Law School – Information Society Project) has posted “Experimental Regulations for AI: Sandboxes for Morals and Mores” on SSRN. Here is the abstract:

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal, which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.