Stein on Assuming the Risks of Artificial Intelligence

Amy L. Stein (University of Florida Levin College of Law) has posted “Assuming the Risks of Artificial Intelligence” (102 Boston University Law Review 2022) on SSRN. Here is the abstract:

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for these prospective plaintiffs injured by AI, first adopters who are often eager to “voluntarily” use the new technology but who often lack “knowledge” of AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. In addition to these AI-human interactions, this Article’s reevaluation can also help in other assumption of risk analyses and in tort law generally to better address the evolving innovation-risk-consent trilemma.

Papakonstantinou & de Hert on The Regulation of Digital Technologies in the EU

Vagelis Papakonstantinou (Faculty of Law and Criminology, Vrije Universiteit Brussel) and Paul De Hert (Free University of Brussels (VUB) – LSTS; Tilburg University – TILT) have posted “The Regulation of Digital Technologies in the EU: The Law-Making Phenomena of ‘Act-ification’, ‘GDPR Mimesis’ and ‘EU Law Brutality’” (Technology and Regulation Journal 2022) on SSRN. Here is the abstract:

EU regulatory initiatives on technology-related topics have spiked over the past few years. On the basis of its Priorities Programme 2019-2024, while creating a “Europe fit for the Digital Age”, the EU Commission has been busy releasing new texts aimed at regulating a number of technology topics, including, among others, data uses, online platforms, cybersecurity, and artificial intelligence. This paper identifies three basic phenomena common to all, or most, of the EU’s new technology-relevant regulatory initiatives, namely (a) “act-ification”, (b) “GDPR mimesis”, and (c) “regulatory brutality”. These phenomena divulge a new-found confidence on the part of the EU technology legislator, which has by now asserted for itself the right to form policy options and create new rules in the field for all of Europe. These three phenomena serve as indicators or early signs of a new European technology law-making paradigm that by now seems ready to emerge.

Stepanian on European Artificial Intelligence Act: Should Russia Implement the Same?

Armen Stepanian (Moscow State Law Academy) has posted “European Artificial Intelligence Act: Should Russia Implement the Same?” (8 Kutafin Law Review 2022) on SSRN. Here is the abstract:

The proposal for a European Union Regulation establishing harmonized rules for artificial intelligence (the Artificial Intelligence Act) is under consideration. This article analyzes the structure and features of this proposed regulatory legal act of the integration organization and shows that the scope of the EU AI Act is wider than that of current Russian law. The Act will contain harmonized rules for placing AI systems on the market, putting them into operation, and using them; bans on certain artificial intelligence practices; special requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for AI systems designed to interact with individuals, for emotion recognition and biometric categorization systems, and for AI systems used to create or manage image, audio, or video content; and rules on market surveillance and supervision. The article considers the provisions of the Act and the features of its proposed institutions and norms, including extraterritoriality (which, as with the GDPR before it, raises many questions), the risk-oriented approach (based both on self-certification and on definite criteria for high-risk systems), and the Act’s object, scope, and definitions. Key concerns about possible discrimination, based on case law, are expressed. The author draws conclusions about the advisability of (non-)application of these institutions and rules in Russia.

Yap & Lim on A Legal Framework for Artificial Intelligence Fairness Reporting

Jia Qing Yap (National University of Singapore – Faculty of Law) and Ernest Lim (same) have posted “A Legal Framework for Artificial Intelligence Fairness Reporting” (81 Cambridge Law Journal, Forthcoming 2022) on SSRN. Here is the abstract:

A clear understanding of artificial intelligence (AI) usage risks and how they are being addressed is needed, which requires proper and adequate corporate disclosure. We advance a legal framework for AI Fairness Reporting to which companies can and should adhere on a comply-or-explain basis. We analyse the sources of unfairness arising from different aspects of AI models and the disparities in the performance of machine learning systems. We evaluate how the machine learning literature has sought to address the problem of unfairness through the use of different fairness metrics. We then put forward a nuanced and viable framework for AI Fairness Reporting comprising: (a) disclosure of all machine learning model usage; (b) disclosure of the fairness metrics used and the ensuing trade-offs; (c) disclosure of the de-biasing methods used; and (d) release of datasets for public inspection or for third-party audit. We then apply this reporting framework to two case studies.
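To make the abstract’s notion of “fairness metrics” and their trade-offs more concrete, here is a minimal, purely illustrative Python sketch (not drawn from the paper; the function names, toy data, and binary protected attribute are all assumptions) that computes two commonly reported group-fairness measures for a binary classifier:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between the two groups (0 and 1)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap between the groups in true-positive or false-positive rate."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        rate_0 = y_pred[mask & (group == 0)].mean()
        rate_1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)

# Toy data: model predictions, true labels, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))      # 0.0  (equal selection rates)
print(equalized_odds_diff(y_true, y_pred, group))  # ~0.33 (error rates still differ)
```

Because such measures generally cannot all be satisfied at once, disclosing which metric a company optimised, and at what cost to the others, is the kind of trade-off reporting contemplated by item (b) of the framework.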

Christakis et al. on Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes

Theodore Christakis (University Grenoble-Alpes, CESICE, France; Senior Fellow, Cross Border Data Forum & Future of Privacy Forum), Karine Bannelier (University Grenoble-Alpes, CESICE, France), Claude Castelluccia, and Daniel Le Métayer (INRIA) have posted “Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes” on SSRN. Here is the abstract:

This is the first-ever detailed analysis of the most widespread way in which Facial Recognition is used in public (and private) spaces: for authorisation purposes. This 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection, privacy and Human Rights; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and European citizens who will find here an accessible way to understand all these issues.

Part 1 of our “MAPping the use of Facial Recognition in public spaces in Europe” (MAPFRE) project reports explained in detail what “facial recognition” means, addressed the issues surrounding definitions, presented the political landscape and set out the exact material and geographical scope of the study. Part 2 of our Reports presented, in the most accessible way possible, how facial recognition works and produced a “Classification Table” with illustrations, explanations and examples, detailing the uses of facial recognition/analysis in public spaces, in order to help avoid conflating the diverse ways in which facial recognition is used and to bring nuance and precision to the public debate.

This 3rd Report focuses on what is, undoubtedly, the most widespread way in which Facial Recognition Technologies (FRT) are used in public (and private) spaces: Facial Recognition for authorisation purposes.

Facial recognition is often used to authorise access to a space (e.g. access control) or to a service (e.g. to make a payment). Depending on the situation, both verification and identification functionalities (terms that are explained in our 2nd Report) can be used. Millions of people use FRT to unlock their phones every day. Private entities (such as banks) or public authorities (such as the French government with the now-abandoned ALICEM project) increasingly envisage using FRT as a means of providing strong authentication in order to control access to private or public online services, such as e-banking or administrative websites that concern income, health or other personal matters. FRT is increasingly being considered as a means of improving security when controlling and managing access to private areas (building entrances, goods warehouses, etc.).
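The verification/identification distinction mentioned above can be illustrated with a short sketch. The following Python example is illustrative only and not taken from the report; the embedding dimension, similarity threshold, and function names are assumptions standing in for whatever face-recognition model a given deployment actually uses:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    """1:1 verification: does the probe match one specific enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.6):
    """1:N identification: which enrolled identity, if any, best matches the probe?"""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Toy embeddings standing in for the output of a face-recognition model.
rng = np.random.default_rng(0)
alice, bob = rng.normal(size=128), rng.normal(size=128)
gallery = {"alice": alice, "bob": bob}
probe = alice + rng.normal(scale=0.1, size=128)  # a fresh capture of "alice"

print(verify(probe, alice))      # True: 1:1 check against a claimed identity
print(identify(probe, gallery))  # "alice": 1:N search over the enrolled gallery
```

Verification (1:1) compares a live capture against a single enrolled template for a claimed identity, as when unlocking a phone; identification (1:N) searches a whole gallery of enrolled templates, as when a system must first work out who the user is before granting access.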

In public spaces, FRT is being used as an authentication tool for automated international border controls (for example at airports) or to manage access to places as diverse as airports, stadiums or schools. Pre-COVID-19, there were many projects to use FRT in the future in order to “accelerate people flows”, “improve the customer experience”, “speed up operations” and “reduce queuing time” for users of different services (e.g. passengers boarding a plane or shopping), and the advent of the COVID-19 pandemic has further boosted calls for investment in FRTs in order to provide contactless services and reduce the risk of contamination.

Supermarkets, such as Carrefour, which was involved in a pilot project in Romania, and transport utilities in “smart cities”, such as the EMT bus network in Madrid, which teamed up with Mastercard on a pilot project enabling users to pay on EMT buses using FRT, have implemented facial recognition payment systems that permit consumers to complete transactions simply by having their faces scanned. Similar pilot projects enabling payments in restaurants, cafés and shops are currently being tested elsewhere in Europe.

Despite this widespread existing and projected use of FRT for authorisation purposes, we are not aware of any detailed study focusing on this specific issue. We hope that the present analytic study will help fill this gap by focusing on the specific issue of the use of FRT for authorisation purposes in public spaces in Europe.

We have examined in detail seven “emblematic” cases of FRT being used for authorisation purposes in public spaces in Europe. We have reviewed the documents disseminated by data controllers concerning all of these cases (and several others). We have sought out the reactions of civil society and other actors. We have dived into EU and Member State laws. We have analysed a number of Data Protection Authority (DPA) opinions. We have identified Court decisions of relevance to this matter.

Our panoramic analysis enables the identification of convergences among EU Member States, but also the risks of divergence with regard to certain specific, important ways in which FRTs are used. It also permits an assessment of whether the GDPR, as interpreted by DPAs and Courts around Europe, is a sufficient means of regulating the use of FRT for authorisation purposes in public spaces in Europe – or whether new rules are needed.

What are the main issues in practice in terms of the legal basis invoked by data controllers? What is the difference between “consent” and “voluntary” in relation to the ways in which FRT is used? Are the “alternative (non-biometric) solutions” proposed satisfactory? What are the positions of DPAs and Courts around Europe on the important issues around necessity and proportionality, including the key “less intrusive means” criterion? What are the divergences among DPAs on these issues? Is harmonisation needed and if so, how is this to be achieved? What are the lessons learned concerning the issue of DPIAs and evaluations? These are some of the questions examined in this report.

Our study ends with a series of specific recommendations addressed to data controllers, the EDPB, and stakeholders making proposals for new FRT rules.

We make three recommendations vis-à-vis those data controllers wishing to use facial recognition applications for authorisation purposes:

1) Data controllers should understand that they bear the burden of proof for meeting all of the GDPR requirements, including understanding exactly how the necessity and proportionality principles, as well as the principles relating to the processing of personal data, should be applied in this field.

2) Data controllers should understand the limits of the “cooperative” use of facial recognition when used for authorisation purposes. Deployments of FR systems for authorisation purposes in public spaces in Europe have almost always been based on consent or have been used in a “voluntary” way. However, this does not mean that consent is almighty. First, there are situations (such as the various failed attempts to introduce FRT in schools in Europe) where consent could not be justified as being “freely given” because of an imbalance of power between users and data controllers. Second, consensual and other “voluntary” uses of FRT imply the existence of alternative solutions which must be as available and as effective as those that involve the use of FRT.

3) Data controllers should conduct DPIAs and evaluation reports and publish them to the extent possible and compatible with industrial secrets and property rights. Our study found that there is a serious lack of information available on DPIAs and evaluations of the effectiveness of FRT systems. As we explain, this is regrettable for several reasons.

We make two recommendations in relation to the EDPB:

1) The EDPB should ensure that there is harmonisation on issues such as the use of centralised databases and the principles that relate to the processing of personal data. A diverging interpretation of the GDPR on issues such as the implementation of IATA’s “One ID” concept for air travel or “pay by face” applications in Europe could create legal tension and operational difficulties.

2) The EDPB could also produce guidance on the approach that should be followed both for DPIAs and for evaluation reports where FRT authorisation applications are concerned.

Finally, a recommendation regarding policymakers and other stakeholders formulating new legislative proposals: there is often a great deal of confusion about the different proposals that concern the regulation of facial recognition. It is therefore important for all stakeholders to distinguish the numerous ways in which FRT is used for authorisation purposes from other use cases, and to target their proposals accordingly. For instance, proposals calling for a broad ban on “biometric recognition in public spaces” are likely to result in all of the ways in which FRT is used for authorisation purposes being prohibited. Policymakers should take this into consideration, and make sure that this is their intention, before making such proposals.

Jain on Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India

Chirag Jain (NYU Law) has posted “Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India” on SSRN. Here is the abstract:

Part of this paper focuses on how retail fashion stores leverage AI algorithms to offer enhanced interactive features in virtual try-on mirrors, and the other part analyzes the current state of biometric data privacy laws in the US, EU, and India, and their impact on the use of AR technologies in the retail fashion industry. Specifically, the author has (i) taken a deep dive into the architectural design of virtual fitting rooms (VFRs)—one of the technologies that has recently gained traction in law firm articles discussing the surge in biometric privacy litigation—and analyzed several advanced AI techniques; (ii) discussed the ethical issues that can arise from the use of the underlying AI technologies in VFRs; (iii) briefly compared and analyzed the biometric privacy law landscape in the US, EU, and India, and, in the US especially, analyzed the approach of the Illinois Biometric Information Privacy Act, which has remained a cause of concern for various businesses engaged in the collection of biometric data; (iv) suggested various recommendations for technology vendors and fashion brands to design VFRs with “privacy by design” principles at the forefront; and (v) lastly, made a recommendation for legislators, suggesting that in almost all the biometric data protection laws proposed in each US state, and if possible in existing laws, the collection of “second-order data” (like body geometry) without any first-order data (i.e., a retina or iris scan, a fingerprint or voiceprint, a scan of hand or face geometry, or any other identifying characteristic) should be excluded from the ambit of “biometric identifiers,” as that can reduce unnecessary regulatory pressure on the use of technologies like VFRs for commercial purposes.