Backer & McQuilla on The Algorithmic Law of Business and Human Rights

Larry Cata Backer (Penn State Law) and Matthew McQuilla (Penn State Law) have posted “The Algorithmic Law of Business and Human Rights: Constructing a Private Transnational Law of Ratings, Social Credit, and Accountability Measures” on SSRN. Here is the abstract:

This paper examines the rise of algorithmic systems, that is, systems of data-driven governance (including social credit-type systems), in the context of business and human rights, and their ramifications (especially their challenges) for law. Section 1 sketches the context within which it is possible to frame concepts of algorithmic law. Algorithmic law sits at the nexus of a number of critical trends. Section 2 explores the premise that ratings-based governance systems can be created in a lucid and coherent way. Section 3 then examines the way these theoretical possibilities have begun to emerge in the West. Section 4 then briefly considers the ramifications for liberal democratic orders and the constitution of law. Among the more relevant are those tied to privacy, the integrity of data, and transparency. The context centers on an examination of the landscape of such algorithmic private legal systems as it has developed to date, considering the extent to which a rating or algorithmic system has been emerging around recent national efforts to combat human trafficking through so-called Modern Slavery and Supply Chain Due Diligence legal regimes and international norms.

Kelly-Lyth on Challenging Biased Hiring Algorithms

Aislinn Kelly-Lyth (Harvard Law School, University of Cambridge – Faculty of Law) has posted “Challenging Biased Hiring Algorithms” (Oxford Journal of Legal Studies, March 2021) on SSRN. Here is the abstract:

Employers are increasingly using automated hiring systems to assess job applicants, with potentially discriminatory effects. This paper considers the effectiveness of EU-derived laws, which regulate the use of these algorithms in the UK. The paper finds that while EU data protection and equality laws already seek to balance the harms of biased hiring algorithms with the benefits of their use, enforcement of these laws in the UK is severely limited in practice. One significant problem is transparency, and this problem is likely to exist across the EU. The paper therefore recommends that data protection impact assessments, which must be carried out by all employers using automated hiring systems in the EU or UK, should be published in redacted form. Mandating, and in the short term incentivising, such publication would enable better enforcement of rights which already exist.

Muhlhoff on Predictive Privacy

Rainer Muhlhoff (Technische Universität Berlin (TU Berlin), Freie Universität Berlin) has posted “Predictive Privacy: Towards an Applied Ethics of Data Analytics” on SSRN. Here is the abstract:

Data analytics and data-driven approaches in Machine Learning are now among the most hailed computing technologies in many industrial domains. One major application is predictive analytics, which is used to predict sensitive attributes, future behavior, or cost, risk, and utility functions associated with target groups or individuals based on large sets of behavioral and usage data. This paper stresses the severe ethical and data protection implications of predictive analytics when it is used to predict sensitive information about single individuals or to treat individuals differently based on data that many unrelated individuals have provided. To tackle these concerns as a matter of applied ethics, the paper first introduces the concept of “predictive privacy” to formulate an ethical principle protecting individuals and groups against differential treatment based on Machine Learning and Big Data analytics. Second, it analyses the typical data processing cycle of predictive systems to provide a step-by-step discussion of ethical implications, locating occurrences of predictive privacy violations. Third, the paper sheds light on what is qualitatively new in the way predictive analytics challenges ethical principles such as human dignity and the (liberal) notion of individual privacy. These new challenges arise when predictive systems transform statistical inferences, which provide knowledge about the cohort of training data donors, into individual predictions, thereby crossing what I call the “prediction gap”. Finally, the paper concludes that data protection in the age of predictive analytics is a collective matter, as we face situations where an individual’s (or group’s) privacy is violated using data that other individuals provide about themselves, possibly even anonymously.

Zalnieriute on Automated Facial Recognition Technology

Monika Zalnieriute (University of New South Wales (UNSW) – Faculty of Law) has posted “Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State” (Columbia Science and Technology Law Review 22(2) 2021, Forthcoming) on SSRN. Here is the abstract:

Live automated facial recognition technology, rolled out in public spaces and cities across the world, is transforming the nature of modern policing. In R (on the application of Bridges) v Chief Constable of South Wales Police, decided in August 2020 (‘Bridges’), the first successful legal challenge to automated facial recognition technology worldwide, the Court of Appeal in the United Kingdom held that the use of automated facial recognition technology by the South Wales Police was unlawful. This landmark ruling can set a precedent and influence future policy on facial recognition in many countries. The Bridges decision imposes some limits on the previously unconstrained police discretion over whom to target and where to deploy the technology. Yet, while the decision demands a clearer legal framework to limit the discretion of police who use such technology, it does not, in principle, oppose the use of facial recognition technology for mass surveillance in public places, nor for monitoring political protests. To the contrary, the Court accepted that the use of automated facial recognition in public spaces to identify very large numbers of people and to track their movements is proportionate to law enforcement goals. Thus, the Court dismissed the wider impact and significant risks posed by using facial recognition technology in public spaces; it underplayed the heavy burden placed on democratic participation and the rights to freedom of expression and association, which require collective action in public spaces. Neither did the Court demand transparency about the technologies used by the police force, which are often shielded behind ‘trade secrets’ by the corporations that produce them, nor did it act to prevent fragmentation and inconsistency between local police forces’ rules and regulations on automated facial recognition technology, which leaves the law less predictable. Thus, while the Bridges decision is reassuring and demands change in the discretionary approaches of UK police in the short term, its long-term impact in burning bridges between the expanding public space surveillance infrastructure and the modern state is less certain.