Spencer on The First Amendment and the Regulation of Speech Intermediaries

Shaun B. Spencer (University of Massachusetts School of Law – Dartmouth) has posted “The First Amendment and the Regulation of Speech Intermediaries” (Marquette Law Review, Forthcoming) on SSRN. Here is the abstract:

Calls to regulate social media platforms abound on both sides of the political spectrum. Some want to prevent platforms from deplatforming users or moderating content, while others want them to deplatform more users and moderate more content. Both types of regulation will draw First Amendment challenges. As Justices Thomas and Alito have observed, applying settled First Amendment doctrine to emerging regulation of social media platforms presents significant analytical challenges.

This Article aims to alleviate at least some of those challenges by isolating the role of the speech intermediary in First Amendment jurisprudence. Speech intermediaries complicate the analysis because they introduce speech interests that may conflict with the traditional speaker and listener interests that First Amendment doctrine evolved to protect. Clarifying the under-examined role of the speech intermediary can help inform the application of existing doctrine in the digital age. The goal of this Article is to articulate a taxonomy of speech intermediary functions that will help courts (1) focus on which intermediary functions are implicated by a given regulation and (2) evaluate how the mix of speaker, listener, and intermediary interests should affect whether that regulation survives a First Amendment challenge.

This Article proceeds as follows. First, it provides a taxonomy of the speech intermediary functions—conduit, curator, commentator, and collaborator—and identifies for each function the potential conflict or alignment between the intermediary’s speech interest and the speech interests of the speakers and listeners the intermediary serves. Next, it maps past First Amendment cases onto the taxonomy and describes how each intermediary’s function influenced the application of First Amendment doctrine. Finally, it illustrates how the taxonomy can help analyze First Amendment challenges to emerging regulation of contemporary speech intermediaries.

Recommended.

Low & Hara on Cryptoassets and Property

Kelvin F.K. Low (National University of Singapore – Faculty of Law) and Megumi Hara (Chuo University Law School) have posted “Cryptoassets and Property” (Sjef van Erp & Katja Zimmermann (eds), Edward Elgar Research Handbook on EU Property Law (Forthcoming)) on SSRN. Here is the abstract:

The concept of property has always been, and remains, a vexed notion. Within civilian systems, the difficulty of incorporating the basic idea of ownership – surely fundamental to any idea of property – within the Gaian and other schema demonstrates the elusiveness of property. Its elusiveness lies in part in the intersection of various distinct ideas within the law of property. In civilian schema, these distinct ideas are often distinguished by discrete vocabulary. For example, a modern French schema distinguishes between biens (assets), choses (things), and droits (rights). Within common law systems, the comparative lack of attention to classification and the relative paucity of vocabulary for discrete concepts have led to much confusion. With this background in mind, it is perhaps unsurprising to find that cryptoassets have been more readily accommodated within common law systems’ vague notions of property than within those of civilian systems. Within the civil law, Francophone systems, with looser conceptions of chose than Germanic Pandectist systems’ strict conceptions of Sache (thing), are more accommodating of cryptoassets as property, but even so, it may more accurately be said that they are biens or droits. Accordingly, depending on one’s conception of property, cryptoassets may (or may not) be property.

Yoo & Keung on The Political Dynamics of Legislative Reform: Potential Drivers of the Next Communications Statute

Christopher S. Yoo (University of Pennsylvania) and Tiffany Keung (University of Pennsylvania Carey Law School) have posted “The Political Dynamics of Legislative Reform: Potential Drivers of the Next Communications Statute” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Although most studies of major communications reform legislation focus on the merits of its substantive provisions, analyzing the political dynamics that led to the enactment of such legislation can yield important insights. An examination of the tradeoffs that led the major industry segments to support the Telecommunications Act of 1996 provides a useful illustration of the political bargain that it embodies. Application of a similar analysis to the current context identifies seven components that could form the basis for the next communications statute: universal service, pole attachments, privacy, intermediary immunity, net neutrality, spectrum policy, and antitrust reform. Determining how these components might fit together requires an assessment of areas in which industry interests overlap and diverge as well as aspects of the political environment that can make passage of reform legislation more difficult.

Recommended.

Nachbar on Qualitative Market Definition

Thomas Nachbar (University of Virginia School of Law) has posted “Qualitative Market Definition” (Virginia Law Review, Vol. 109, 2023) on SSRN. Here is the abstract:

Modern antitrust law has come under intense criticism in recent years, with a bipartisan chorus of complaints about the power of technology and internet platforms such as Google, Amazon, Facebook, and Apple. A fundamental issue in these debates is how to define the “market” for the purposes of antitrust law. But market definition is highly contentious. The Supreme Court case that launched modern market definition has lent its name to an economic blunder, the “Cellophane fallacy,” and the Justices in 2018’s Ohio v. American Express case disagreed with each other so strongly that the dissent described the majority’s approach as not only “wrong” but “economic nonsense.” Partially in response to the controversy in American Express, recent judicial, legislative, and regulatory proposals have even suggested doing away with market definition in some antitrust cases.

The root problem, this Article shows, is that modern market definition has been treated in antitrust as a matter of quantitative economics, with markets defined by economic formulas (such as the Lerner Index) that lack a connection to widely held social understandings of competition. Antitrust law needs to augment these quantitative approaches by explicitly acknowledging qualitative aspects of markets, including the normative visions of competition they represent. When more fully considered, the Lerner Index itself represents a vision of competition, but it is a vision that no society would want to pursue.
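For readers unfamiliar with the formula the abstract invokes, the standard textbook statement of the Lerner Index (given here for reference; it is not reproduced from the Article) is:

$$
L \;=\; \frac{P - MC}{P} \;=\; \frac{1}{|\varepsilon_d|} \quad \text{(at the profit-maximizing price)}
$$

where $P$ is price, $MC$ is marginal cost, and $\varepsilon_d$ is the price elasticity of demand. The index reduces competition to a single markup figure, which is the kind of purely quantitative characterization the Article argues needs qualitative augmentation.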

Paying attention to the normative meaning underlying quantitative measures is hardly radical; such qualitative factors have been part of market definition since its origin. The Cellophane fallacy itself was originally advanced not as a point about economics but as a point about the content of the antitrust law. This Article argues that market definition is necessarily normative and describes an approach for including qualitative criteria in market definition so that market definition accurately reflects the types of competition antitrust law seeks to protect.

Stein on Assuming the Risks of Artificial Intelligence

Amy L. Stein (University of Florida Levin College of Law) has posted “Assuming the Risks of Artificial Intelligence” (102 Boston University Law Review 2022) on SSRN. Here is the abstract:

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for these prospective plaintiffs injured by AI, first adopters who are often eager to “voluntarily” use the new technology but who often lack “knowledge” of AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. In addition to these AI-human interactions, this Article’s reevaluation can also help in other assumption of risk analyses and in tort law generally to better address the evolving innovation-risk-consent trilemma.

Papakonstantinou & de Hert on The Regulation of Digital Technologies in the EU

Vagelis Papakonstantinou (Faculty of Law and Criminology, Vrije Universiteit Brussel) and Paul De Hert (Free University of Brussels (VUB) – LSTS; Tilburg University – TILT) have posted “The Regulation of Digital Technologies in the EU: The Law-Making Phenomena of ‘Act-ification’, ‘GDPR Mimesis’ and ‘EU Law Brutality’” (Technology and Regulation Journal 2022) on SSRN. Here is the abstract:

EU regulatory initiatives on technology-related topics have spiked over the past few years. On the basis of its Priorities Programme 2019-2024, while creating a “Europe fit for the Digital Age”, the EU Commission has been busy releasing new texts aimed at regulating a number of technology topics, including, among others, data uses, online platforms, cybersecurity, and artificial intelligence. This paper identifies three basic phenomena common to all, or most, of the EU’s new technology-relevant regulatory initiatives, namely (a) “act-ification”, (b) “GDPR mimesis”, and (c) “regulatory brutality”. These phenomena reveal a new-found confidence on the part of the EU technology legislator, who has by now asserted for itself the right to form policy options and create new rules in the field for all of Europe. These three phenomena serve as indicators or early signs of a new European technology law-making paradigm that by now seems ready to emerge.

Stepanian on European Artificial Intelligence Act: Should Russia Implement the Same?

Armen Stepanian (Moscow State Law Academy) has posted “European Artificial Intelligence Act: Should Russia Implement the Same?” (8 Kutafin Law Review 2022) on SSRN. Here is the abstract:

The proposal for a European Union Regulation establishing harmonized rules for artificial intelligence (the Artificial Intelligence Act) is under consideration. The structure and features of this proposed regulatory act of the integration organization are analyzed. The scope of the EU AI Act is analyzed and shown to be wider than that of current Russian regulation. The Act will contain harmonized rules for the placing on the market, operation, and use of AI systems; bans on certain artificial intelligence methods; special requirements for AI systems with a high level of risk and obligations on the operators of such systems; harmonized transparency rules for AI systems designed to interact with individuals, for emotion recognition and biometric categorization systems, and for AI systems used to create or manage image, audio, or video content; and rules on market surveillance and supervision. The provisions of the Act and the features of its proposed institutions and norms are considered, including its extraterritoriality (which, as with the GDPR before it, raises many questions), its risk-oriented approach (based both on self-certification and on defined criteria for high-risk systems), and its object, scope, and definitions. Key concerns based on case law regarding possible discrimination are expressed. The author draws conclusions about the advisability of applying (or not applying) these institutions and rules in Russia.

Yap & Lim on A Legal Framework for Artificial Intelligence Fairness Reporting

Jia Qing Yap (National University of Singapore – Faculty of Law) and Ernest Lim (same) have posted “A Legal Framework for Artificial Intelligence Fairness Reporting” (81 Cambridge Law Journal, Forthcoming 2022) on SSRN. Here is the abstract:

A clear understanding of artificial intelligence (AI) usage risks and how they are being addressed is needed, which requires proper and adequate corporate disclosure. We advance a legal framework for AI Fairness Reporting to which companies can and should adhere on a comply-or-explain basis. We analyse the sources of unfairness arising from different aspects of AI models and the disparities in the performance of machine learning systems. We evaluate how the machine learning literature has sought to address the problem of unfairness through the use of different fairness metrics. We then put forward a nuanced and viable framework for AI Fairness Reporting comprising: (a) disclosure of the use of all machine learning models; (b) disclosure of the fairness metrics used and the ensuing trade-offs; (c) disclosure of the de-biasing methods used; and (d) release of datasets for public inspection or for third-party audit. We then apply this reporting framework to two case studies.
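To make the reference to “fairness metrics” and their trade-offs concrete, here is a minimal sketch, not drawn from the paper, of how two commonly reported metrics might be computed; the function names, metric choices, and toy data are illustrative assumptions.

```python
# Minimal sketch (not from the paper): computing two fairness metrics that an
# AI Fairness Report might disclose. Metric choices, names, and data are illustrative.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between groups 0 and 1."""
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

# Toy data: binary predictions, ground truth, and a binary protected attribute.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))
```

A report of the kind the authors propose would presumably disclose which of these (or other) metrics was prioritized and which was traded off, since different fairness metrics typically cannot all be satisfied simultaneously.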

Christakis et al. on Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes

Theodore Christakis (University Grenoble-Alpes, CESICE, France. Senior Fellow Cross Border Data Forum & Future of Privacy Forum), Karine Bannelier (University Grenoble-Alpes, CESICE, France), Claude Castelluccia, and Daniel Le Métayer (INRIA) have posted “Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes” on SSRN. Here is the abstract:

This is the first detailed analysis of the most widespread way in which facial recognition is used in public (and private) spaces: for authorisation purposes. This 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection, privacy and human rights; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and European citizens, who will find here an accessible way to understand all these issues.

Part 1 of our “MAPping the use of Facial Recognition in public spaces in Europe” (MAPFRE) project reports explained in detail what “facial recognition” means, addressed the issues surrounding definitions, presented the political landscape and set out the exact material and geographical scope of the study. Part 2 of our Reports presented, in the most accessible way possible, how facial recognition works and produced a “Classification Table” with illustrations, explanations and examples, detailing the uses of facial recognition/analysis in public spaces, in order to help avoid conflating the diverse ways in which facial recognition is used and to bring nuance and precision to the public debate.

This 3rd Report focuses on what is, undoubtedly, the most widespread way in which Facial Recognition Technologies (FRT) are used in public (and private) spaces: Facial Recognition for authorisation purposes.

Facial recognition is often used to authorise access to a space (e.g. access control) or to a service (e.g. to make a payment). Depending on the situation, both verification and identification functionalities (terms that are explained in our 2nd Report) can be used. Millions of people use FRT to unlock their phones every day. Private entities (such as banks) and public authorities (such as the French government with its now abandoned ALICEM project) increasingly envisage using FRT as a means of providing strong authentication in order to control access to private or public online services, such as e-banking or administrative websites that concern income, health or other personal matters. FRT is also increasingly being considered as a means of improving security when controlling and managing access to private areas (building entrances, goods warehouses, etc.).
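Since the verification/identification distinction recurs throughout the report, here is a minimal sketch, assuming toy face embeddings and cosine similarity, of the difference between the two functionalities; the threshold, names, and data are illustrative and not drawn from the report.

```python
# Minimal sketch (not from the report): 1:1 verification vs. 1:N identification
# using toy face embeddings and cosine similarity. Names, threshold, and data
# are illustrative assumptions.
import numpy as np

THRESHOLD = 0.8  # illustrative match threshold

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled_template):
    """1:1 verification: does the probe match a single claimed identity?"""
    return cosine_sim(probe, enrolled_template) >= THRESHOLD

def identify(probe, gallery):
    """1:N identification: which enrolled identity, if any, best matches the probe?"""
    scores = {name: cosine_sim(probe, template) for name, template in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= THRESHOLD else None

rng = np.random.default_rng(1)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + 0.05 * rng.normal(size=128)  # a noisy capture of "alice"

print(verify(probe, gallery["alice"]))  # expected: True (access granted)
print(identify(probe, gallery))         # expected: "alice"
```

The structural difference is that verification compares a probe against one claimed identity (1:1), while identification searches a whole gallery of enrolled identities (1:N).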

In public spaces, FRT is being used as an authentication tool for automated international border controls (for example at airports) or to manage access in places as diverse as airports, stadiums or schools. Before COVID-19, there were many projects planning to use FRT to “accelerate people flows”, “improve the customer experience”, “speed up operations” and “reduce queuing time” for users of different services (e.g. passengers boarding a plane or shopping), but the advent of the COVID-19 pandemic has further boosted calls for investment in FRTs in order to provide contactless services and reduce the risk of contamination.

Supermarkets such as Carrefour (which was involved in a pilot project in Romania) and transport utilities in “smart cities”, such as the EMT bus network in Madrid (which teamed with Mastercard on a pilot project that enables users to pay on EMT buses using FRT), have implemented facial recognition payment systems that permit consumers to complete transactions by simply having their faces scanned. In Europe, similar pilot projects enabling payments in restaurants, cafés and shops are currently being tested.

Despite this widespread existing or projected use of FRT for authorisation purposes, we are not aware of any detailed study focusing on this specific issue. We hope that the present analytical study will help fill this gap by focusing on the specific issue of the use of FRT for authorisation purposes in public spaces in Europe.

We have examined in detail seven “emblematic” cases of FRT being used for authorisation purposes in public spaces in Europe. We have reviewed the documents disseminated by data controllers concerning all of these cases (and several others). We have sought out the reactions of civil society and other actors. We have dived into EU and Member State laws. We have analysed a number of Data Protection Authority (DPA) opinions. We have identified Court decisions of relevance to this matter.

Our panoramic analysis enables the identification of convergences among EU Member States, but also the risks of divergence with regard to certain specific, important ways in which FRTs are used. It also permits an assessment of whether the GDPR, as interpreted by DPAs and Courts around Europe, is a sufficient means of regulating the use of FRT for authorisation purposes in public spaces in Europe – or whether new rules are needed.

What are the main issues in practice in terms of the legal basis invoked by data controllers? What is the difference between “consent” and “voluntary” in relation to the ways in which FRT is used? Are the “alternative (non-biometric) solutions” proposed satisfactory? What are the positions of DPAs and Courts around Europe on the important issues around necessity and proportionality, including the key “less intrusive means” criterion? What are the divergences among DPAs on these issues? Is harmonisation needed and if so, how is this to be achieved? What are the lessons learned concerning the issue of DPIAs and evaluations? These are some of the questions examined in this report.

Our study ends with a series of specific recommendations addressed to data controllers, the EDPB, and stakeholders making proposals for new FRT rules.

We make three recommendations vis-à-vis those data controllers wishing to use facial recognition applications for authorisation purposes:

1) Data controllers should understand that they have the burden of proof in terms of meeting all of the GDPR requirements, including understanding exactly how the necessity and proportionality principles as well as the principles relating to processing of personal data should be applied in this field.

2) Data controllers should understand the limits of the “cooperative” use of facial recognition when used for authorisation purposes. Deployments of FR systems for authorisation purposes in public spaces in Europe have almost always been based on consent or have been used in a “voluntary” way. However, this does not mean that consent is almighty. First, there are situations (such as the various failed attempts to introduce FRT in schools in Europe) where consent could not be justified as being “freely given” because of an imbalance of power between users and data controllers. Second, consensual and other “voluntary” uses of FRT imply the existence of alternative solutions which must be as available and as effective as those that involve the use of FRT.

3) Data controllers should conduct DPIAs and evaluation reports and publish them to the extent possible and compatible with industrial secrets and property rights. Our study found that there is a serious lack of information available on DPIAs and evaluations of the effectiveness of FRT systems. As we explain, this is regrettable for several reasons.

We make two recommendations in relation to the EDPB:

1) The EDPB should ensure that there is harmonisation on issues such as the use of centralised databases and the principles that relate to the processing of personal data. A diverging interpretation of the GDPR on issues such as the implementation of IATA’s “One ID” concept for air travel or “pay by face” applications in Europe could create legal tension and operational difficulties.

2) The EDPB could also produce guidance on the approach that should be followed both for DPIAs and for evaluation reports where FRT authorisation applications are concerned.

Finally, a recommendation regarding policymakers and other stakeholders formulating new legislative proposals: there is often a great deal of confusion about the different proposals that concern the regulation of facial recognition. It is therefore important for all stakeholders to distinguish the numerous ways in which FRT is used for authorisation purposes from other use cases and to target their proposals accordingly. For instance, proposals calling for a broad ban on “biometric recognition in public spaces” are likely to result in all of the ways in which FRT is used for authorisation purposes being prohibited. Policymakers should take this into consideration, and make sure that this is their intention, before they make such proposals.

Jain on Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India

Chirag Jain (NYU Law) has posted “Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India” on SSRN. Here is the abstract:

Part of this paper focuses on how retail fashion stores leverage AI algorithms to offer enhanced interactive features in virtual try-on mirrors, and the other part analyzes the current state of biometric data privacy laws in the US, EU, and India and their impact on the usage of AR technologies in the retail fashion industry. Specifically, the author has attempted to (i) take a deep dive into the architectural design of virtual fitting rooms (one of the technologies that have recently gained traction in law firm articles discussing the surge in biometric privacy law litigation) and analyze several advanced AI techniques; (ii) discuss the ethical issues that can arise from the usage of the underlying AI technologies in VFRs; (iii) briefly compare and analyze the biometric privacy law landscape in the US, EU, and India and, especially in the US, the approach followed by Illinois’ Biometric Information Privacy Act, which has remained a cause of concern for various businesses engaged in the collection of biometric data; (iv) suggest various recommendations for technology vendors and fashion brands to design VFRs with “privacy by design” principles at the forefront; and (v) lastly, make a recommendation for legislators, suggesting that in almost all of the biometric data protection laws proposed in US states, and if possible in the existing laws, the collection of “second-order data” (like body geometry) without any first-order data (i.e., a retina or iris scan, a fingerprint or voiceprint, a scan of hand or face geometry, or any other identifying characteristic) should be excluded from the ambit of “biometric identifiers,” as that can reduce unnecessary regulatory pressure on the usage of technologies like VFRs for commercial purposes.