Hartzog, Selinger & Gunawan on Privacy Nicks: How the Law Normalizes Surveillance

Woodrow Hartzog (Boston University School of Law; Stanford Law School Center for Internet and Society), Evan Selinger (Rochester Institute of Technology – Department of Philosophy), and Johanna Gunawan (Northeastern University Khoury College of Computer Sciences) have posted “Privacy Nicks: How the Law Normalizes Surveillance” (101 Washington University Law Review, Forthcoming) on SSRN. Here is the abstract:

Privacy law is failing to protect individuals from being watched and exposed, despite stronger surveillance and data protection rules. The problem is that our rules look to social norms to set thresholds for privacy violations, but people can get used to being observed. In this article, we argue that by ignoring de minimis privacy encroachments, the law is complicit in normalizing surveillance. Privacy law helps acclimate people to being watched by ignoring smaller, more frequent, and more mundane privacy diminutions. We call these reductions “privacy nicks,” like the proverbial “thousand cuts” that lead to death.

Privacy nicks come from the proliferation of cameras and biometric sensors on doorbells, glasses, and watches, and the drift of surveillance and data analytics into new areas of our lives like travel, exercise, and social gatherings. Under our theory of privacy nicks as the Achilles heel of surveillance law, invasive practices become routine through repeated exposures that acclimate us to being vulnerable and watched in increasingly intimate ways. With acclimation comes resignation, and this shift in attitude biases how citizens and lawmakers view reasonable measures and fair tradeoffs.

Because the law looks to norms and people’s expectations to set thresholds for what counts as a privacy violation, the normalization of these nicks results in a constant re-negotiation of privacy standards to society’s disadvantage. When this happens, the legal and social threshold for rejecting invasive new practices keeps getting redrawn, excusing ever more aggressive intrusions. In effect, the test of what privacy law allows is whatever people will tolerate. There is no rule to stop us from tolerating everything. This article provides a new theory and terminology to understand where privacy law falls short and suggests a way to escape the current surveillance spiral.

Dvoskin on Speaking Back to Sexual Privacy Invasions

Brenda Dvoskin (Harvard Law) has posted “Speaking Back to Sexual Privacy Invasions” (Washington Law Review, Vol. 98, 2023) on SSRN. Here is the abstract:

Many big players in the internet ecosystem do not like hosting sexual expression. They often justify these bans as a protection of sexual privacy. For example, Meta states that it removes sexual imagery to prevent the nonconsensual distribution of sexual images. In response, this Article argues that banning digital sexual expression is counterproductive if the aim is to alleviate the harms inflicted by sexual privacy losses.

Contemporary sexual privacy theory, however, lacks analytical tools to explain why nudity bans harm the interests they intend to protect. This Article aims at building those tools. The main contribution is an invitation to locate part of the harm that victims experience not in the unwanted exposure but in the social interpretations of that exposure. If social interpretations make losses of sexual privacy exceptionally harmful, we should focus on both preventing invasions and changing those interpretations. Sexual expression is a powerful discourse that often aims at rewriting the social script that underlies the social sanctions we collectively impose on victims. Thus, the Article argues that protecting sexual expression ought to be an essential piece of a content moderation system designed from a sexual privacy perspective.

Schultz & Dincer on Clearview AI Litigation

Jason Schultz (NYU Law) and Melodi Dincer (same) have posted “Amici Brief of Science, Legal, and Technology Scholars in Renderos et al. v. Clearview AI, Inc. et al., No. RG21096898 (Superior Ct. Alameda County)” on SSRN. Here is the abstract:

This Amici Brief was filed before the Superior Court of the State of California, County of Alameda in the case of Renderos et al. v. Clearview AI, Inc. et al. in support of Plaintiffs’ opposition to Defendant Clearview’s Special Motion to Strike Pursuant to California Code of Civil Procedure § 425.16 (California’s anti-SLAPP statute).

For over a century, the right of publicity (ROP) has protected individuals from unwanted commercial exploitation of their identities. Originating around the turn of the twentieth century in response to the newest image-appropriation technologies of the time, the ROP has continued to evolve to cover each new wave of technologies enabling companies to exploit people’s identities as part of their business models.

The latest example of such a technology is Defendant Clearview AI’s facial recognition (FR) application. Clearview boasts that the primary economic value of its app stems from commercially exploiting its massive facial image database, filled with millions of individual likenesses and identities that it appropriated through images scraped from across the internet. Clearview’s misappropriations also extend to training its algorithm, matching identities to new images, and displaying results to customers. The purpose of Clearview’s product is to allow customers to identify an individual using only a picture of their face. Without the capacity to exploit millions of likenesses and identities, Clearview’s system would fail to function as a commercial product.

Clearview attempts to avoid ROP liability by arguing (1) that it cannot be liable because humans rarely witness its acts of misappropriation and (2) that its app and business strategy are forms of protected speech under the First Amendment.

In this brief, Amici Science, Legal, and Technology Scholars urge the Court to reject Clearview’s arguments and allow Plaintiffs’ ROP claim to proceed. First, Amici describe how the ROP claim against Clearview’s FR technology is consistent with those upheld by the courts for over a century, tracing the parallel evolutions of early image-appropriation technologies and of the ROP as a legal limitation on their capacity to exploit identities for profit. Amici then apply each ROP element to Clearview’s FR app. Second, Amici challenge Clearview’s claim to protection under the anti-SLAPP statute. Clearview does not appropriate images and identities as a form of speech in connection with a public issue. Clearview is a visual surveillance company that built its app off misappropriated images for the exclusive purpose of selling and operating its commercial surveillance services, using proprietary software that it attempts to keep as far from public scrutiny as possible.

If the Court finds this case is insulated from judicial review, a company can appropriate billions of individuals’ images and identities without consent, enmesh those identities in its product, license that product widely, profit lavishly, and continue with business as usual. As new products emerge that similarly undermine one’s ability to control who can use their identity and how, individuals will have less legal recourse than their ancestors had a century ago.

Faced with these facts, this Court should reject Clearview’s anti-SLAPP Motion and find Plaintiffs have alleged a legally valid ROP claim at this early stage.

Lundqvist on Regulating Data Access and Portability in the EU

Bjorn Lundqvist (Stockholm University – Faculty of Law) has posted “Regulating the Data-Driven Economy Under EU Law – Access and Portability of Data” on SSRN. Here is the abstract:

While business users face difficulties accessing and porting data on platforms, the Digital Markets Act and the proposed Data Act have been hailed as the legislative tools enabling users to access and transfer the data they have generated on platforms controlled by gatekeepers or Internet of Things manufacturers. This manuscript discusses the tools provided by the Digital Markets Act and the proposed Data Act, and the author argues that users should have a more elaborate right, first, to access the data they produce on platforms, with Internet of Things devices, and in ecosystems, and, second, to transfer such data from platform to platform, cloud to cloud, thing to thing, or in-house. A right to access and transfer data could have several benefits: it promotes the dissemination of data, creativity and innovation in connected markets, and competition between platforms, clouds, and ecosystem providers. Creativity will be enhanced because necessary data — being the raw material for new innovations — will be more broadly dispersed. Consumers will also benefit from a dispersed and disseminated data commons for the development of ideas, innovations, and the exchange of knowledge.

Indeed, with the aim of finding a solution for dysfunctional and unfair data-driven markets, the proposal is that the EU should introduce an access and transfer governance right to data, an Access and Transfer Right (ATR). This would be a new form of right, derived not from the idea of exclusive control of the object of property but from a right to access and transfer data: a governance right that can work in tandem with data protection rules to benefit individuals and businesses. Areas explored include the subject-matter of the protection, potential right holders, and the scope of the protection, including exceptions and limitations under intellectual property law and competition law.

Yoo on The Overlooked Systemic Impact of the Right to Be Forgotten

Christopher S. Yoo (University of Pennsylvania Carey Law School) has posted “The Overlooked Systemic Impact of the Right to Be Forgotten: Lessons from Adverse Selection, Moral Hazard, and Ban the Box” (University of Pennsylvania Law Review Online, vol. 170, forthcoming) on SSRN. Here is the abstract:

The right to be forgotten, which began as a part of European law, has found increasing acceptance in state privacy statutes recently enacted in the U.S. Commentators have largely analyzed the right to be forgotten as a clash between the privacy interests of data subjects and the free speech rights of those holding the data. Framing the issues as a clash of individual rights largely ignores the important scholarly literatures exploring how giving data subjects the ability to render certain information unobservable can give rise to systemic effects that can harm society as a whole. This Essay fills this gap by exploring what the right to be forgotten can learn from the literatures exploring the implications of adverse selection, moral hazard, and the emerging policy intervention known as ban the box.

Ohm & Kim on The Internet of Things

Paul Ohm (Georgetown University Law Center) and Nathaniel Kim have posted “Legacy Switches: A Proposal to Protect Privacy, Security, Competition, and the Environment from the Internet of Things” (Ohio State Law Journal, Forthcoming) on SSRN. Here is the abstract:

The Internet of Things (IoT) promises us a life of automated convenience. Bright and shiny—if cheaply made and plasticky—“smart” thermostats, doorbells, cameras, and fridges carry out the functions once performed by “dumb” equivalents but in an automated, connected, and generally “better” way. This convenience comes at a significant cost. IoT devices listen to, record, and share our behavior, habits, speech, social interactions, and location minute-by-minute, 24/7. All of this information feeds a growing surveillance economy, as this data is bought, sold, and analyzed to predict our behavior, subject us to targeted advertising, and manipulate our actions. Many cheap IoT gadgets are developed on a shoestring budget, leaving them insecure and vulnerable to attack. Malicious actors (and their automated computer programs) target IoT devices, breaking into them to spy on their owners or enlisting them into massive botnets used to cripple websites or critical infrastructure. These problems magnify over time, as IoT vendors focus on selling the next version of the device rather than on securing the preexisting installed base.

Consumers interested in protecting themselves from these harms may decide to replace outdated devices with newer, not-quite-yet-obsolete versions. Doing this does nothing to slow the growth of the surveillance economy and may even exacerbate it, as new devices tend to listen and record more than the models they replace. And even though replacing IoT devices can temporarily forestall security harms, asking consumers to replace all of their smart devices every few years introduces different harms. It harms the environment, filling our landfills with nonbiodegradable plastic housings and circuit parts which leach toxic materials into our air, soil, and water. It forces consumers to waste time, attention, and money tending to hard-wired, infrastructural devices that in the past would have lasted for decades. It compounds the harms of inequality, as those with more disposable income and connections to electricians and contractors have access to better security and privacy than those with less.

We propose a novel, simple, and concrete solution to address all of these problems. Every IoT device manufacturer should build a switch into their device called a “legacy switch.” When the consumer flips this switch, it should disable any smart feature that contributes to security or privacy risks. A legacy switch will render a smart thermostat just a thermostat and a smart doorbell just a doorbell. The switch will disable microphones, sensors, and wireless connectivity. Any user should find it easy to use and easy to verify whether the switch has been toggled.

This Article proposes legacy switches, elaborates key implementation details for any law requiring them, and connects them to the ongoing conversation about power, privacy, and platforms. The proposal to require legacy switches should be seen as a small but meaningful step toward taming the unchecked and destructive tendencies of the new networked economy.

Hartzog & Richards on Legislating Data Loyalty

Woodrow Hartzog (Boston U Law; Stanford Center for Internet and Society) and Neil M. Richards (Washington U Law; Yale ISP; Stanford Center for Internet and Society) have posted “Legislating Data Loyalty” (97 Notre Dame Law Review Reflection 356 (2022)) on SSRN. Here is the abstract:

Lawmakers looking to embolden privacy law have begun to consider imposing duties of loyalty on organizations trusted with people’s data and online experiences. The idea behind loyalty is simple: organizations should not process data or design technologies that conflict with the best interests of trusting parties. But the logistics and implementation of data loyalty need to be developed if the concept is going to be capable of moving privacy law beyond its “notice and consent” roots to confront people’s vulnerabilities in their relationship with powerful data collectors.

In this short Essay, we propose a model for legislating data loyalty. Our model takes advantage of loyalty’s strengths—it is well-established in our law, it is flexible, and it can accommodate conflicting values. Our Essay also explains how data loyalty can embolden our existing data privacy rules, address emergent dangers, solve privacy’s problems around consent and harm, and establish an antibetrayal ethos as America’s privacy identity.

We propose that lawmakers use a two-step process to (1) articulate a primary, general duty of loyalty, then (2) articulate “subsidiary” duties that are more specific and sensitive to context. Subsidiary duties regarding collection, personalization, gatekeeping, persuasion, and mediation would target the most opportunistic contexts for self-dealing and result in flexible open-ended duties combined with highly specific rules. In this way, a duty of data loyalty is not just appealing in theory—it can be effectively implemented in practice just like the other duties of loyalty our law has recognized for hundreds of years. Loyalty is thus not only flexible, but it is capable of breathing life into America’s historically tepid privacy frameworks.

Christakis et al. on Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes

Theodore Christakis (University Grenoble-Alpes, CESICE, France. Senior Fellow Cross Border Data Forum & Future of Privacy Forum), Karine Bannelier (University Grenoble-Alpes, CESICE, France), Claude Castelluccia, and Daniel Le Métayer (INRIA) have posted “Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes” on SSRN. Here is the abstract:

This is the first detailed analysis of the most widespread way in which Facial Recognition is used in public (and private) spaces: for authorisation purposes. This 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection, privacy and Human Rights; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and European citizens, who will find here an accessible way to understand all these issues.

Part 1 of our “MAPping the use of Facial Recognition in public spaces in Europe” (MAPFRE) project reports explained in detail what “facial recognition” means, addressed the issues surrounding definitions, presented the political landscape and set out the exact material and geographical scope of the study. Part 2 of our Reports presented, in the most accessible way possible, how facial recognition works and produced a “Classification Table” with illustrations, explanations and examples, detailing the uses of facial recognition/analysis in public spaces, in order to help avoid conflating the diverse ways in which facial recognition is used and to bring nuance and precision to the public debate.

This 3rd Report focuses on what is, undoubtedly, the most widespread way in which Facial Recognition Technologies (FRT) are used in public (and private) spaces: Facial Recognition for authorisation purposes.

Facial recognition is often used to authorise access to a space (e.g. access control) or to a service (e.g. to make a payment). Depending on the situation, both verification and identification functionalities (terms that are explained in our 2nd Report) can be used. Millions of people use FRT to unlock their phones every day. Private entities (such as banks) and public authorities (such as the French government with its now-abandoned ALICEM project) increasingly envisage using FRT as a means of providing strong authentication in order to control access to private or public online services, such as e-banking or administrative websites that concern income, health or other personal matters. FRT is also increasingly being considered as a means of improving security when controlling and managing access to private areas (building entrances, goods warehouses, etc.).

In public spaces, FRT is being used as an authentication tool for automated international border controls (for example at airports) or to manage access to places as diverse as airports, stadiums or schools. Before COVID-19, there were many projects to use FRT in the future to “accelerate people flows”, “improve the customer experience”, “speed up operations” and “reduce queuing time” for users of different services (e.g. passengers boarding a plane or shopping), and the advent of the COVID-19 pandemic has further boosted calls for investment in FRTs in order to provide contactless services and reduce the risk of contamination.

Supermarkets such as Carrefour, which was involved in a pilot project in Romania, and transport utilities in “smart cities”, such as the EMT bus network in Madrid, which teamed with Mastercard on a pilot project that enables users to pay on EMT buses using FRT, have implemented facial recognition payment systems that permit consumers to complete transactions simply by having their faces scanned. Similar pilot projects enabling payment in restaurants, cafés and shops are currently being tested elsewhere in Europe.

Despite this widespread existing and projected use of FRT for authorisation purposes, we are not aware of any detailed study focusing on this specific issue. We hope that the present analytic study will help fill this gap by examining the use of FRT for authorisation purposes in public spaces in Europe.

We have examined in detail seven “emblematic” cases of FRT being used for authorisation purposes in public spaces in Europe. We have reviewed the documents disseminated by data controllers concerning all of these cases (and several others). We have sought out the reactions of civil society and other actors. We have dived into EU and Member State laws. We have analysed a number of Data Protection Authority (DPA) opinions. We have identified Court decisions of relevance to this matter.

Our panoramic analysis enables the identification of convergences among EU Member States, but also the risks of divergence with regard to certain specific, important ways in which FRTs are used. It also permits an assessment of whether the GDPR, as interpreted by DPAs and Courts around Europe, is a sufficient means of regulating the use of FRT for authorisation purposes in public spaces in Europe – or whether new rules are needed.

What are the main issues in practice in terms of the legal basis invoked by data controllers? What is the difference between “consent” and “voluntary” in relation to the ways in which FRT is used? Are the “alternative (non-biometric) solutions” proposed satisfactory? What are the positions of DPAs and Courts around Europe on the important issues around necessity and proportionality, including the key “less intrusive means” criterion? What are the divergences among DPAs on these issues? Is harmonisation needed and if so, how is this to be achieved? What are the lessons learned concerning the issue of DPIAs and evaluations? These are some of the questions examined in this report.

Our study ends with a series of specific recommendations addressed to data controllers, the EDPB, and stakeholders making proposals for new FRT rules.

We make three recommendations vis-à-vis data controllers wishing to use facial recognition applications for authorisation purposes:

1) Data controllers should understand that they have the burden of proof in terms of meeting all of the GDPR requirements, including understanding exactly how the necessity and proportionality principles as well as the principles relating to processing of personal data should be applied in this field.

2) Data controllers should understand the limits of the “cooperative” use of facial recognition when used for authorisation purposes. Deployments of FR systems for authorisation purposes in public spaces in Europe have almost always been based on consent or have been used in a “voluntary” way. However, this does not mean that consent is almighty. First, there are situations (such as the various failed attempts to introduce FRT in schools in Europe) where consent could not be justified as being “freely given” because of an imbalance of power between users and data controllers. Second, consensual and other “voluntary” uses of FRT imply the existence of alternative solutions which must be as available and as effective as those that involve the use of FRT.

3) Data controllers should conduct DPIAs and evaluation reports and publish them to the extent possible and compatible with industrial secrets and property rights. Our study found that there is a serious lack of information available on DPIAs and evaluations of the effectiveness of FRT systems. As we explain, this is regrettable for several reasons.

We make two recommendations in relation to the EDPB:

1) The EDPB should ensure that there is harmonization on issues such as the use of centralised databases, and those principles that relate to the processing of personal data. A diverging interpretation of the GDPR on issues such as the implementation of IATA’s “One ID” concept for air travel or “pay by face” applications in Europe could create legal tension and operational difficulties.

2) The EDPB could also produce guidance on the approach that should be followed both for DPIAs and evaluation reports where FRT authorisation applications are concerned.

Finally, a recommendation regarding policymakers and other stakeholders formulating new legislative proposals: there is often a great deal of confusion about the different proposals that concern the regulation of facial recognition. It is therefore important for all stakeholders to distinguish the numerous ways in which FRT is used for authorisation purposes from other use cases and to target their proposals accordingly. For instance, proposals calling for a broad ban on “biometric recognition in public spaces” are likely to result in all of the ways in which FRT is used for authorisation purposes being prohibited. Policymakers should take this into consideration, and make sure that this is their intention, before they make such proposals.

Jain on Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India

Chirag Jain (NYU Law) has posted “Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India” on SSRN. Here is the abstract:

Part of this paper focuses on how retail fashion stores leverage AI algorithms to offer enhanced interactive features in virtual try-on mirrors, and the other part analyzes the current state of biometric data privacy laws in the US, EU, and India and their impact on the use of AR technologies in the retail fashion industry. Specifically, the author (i) takes a deep dive into the architectural design of virtual fitting rooms (VFRs), one of the technologies that has recently gained traction in law firm articles discussing the surge in biometric privacy litigation, and analyzes several advanced AI techniques; (ii) discusses the ethical issues that can arise from the use of the underlying AI technologies in VFRs; (iii) briefly compares and analyzes the biometric privacy law landscape in the US, EU, and India, with particular attention in the US to the approach of Illinois’ Biometric Information Privacy Act, which has remained a cause of concern for various businesses engaged in the collection of biometric data; (iv) offers recommendations for technology vendors and fashion brands to design VFRs with “privacy by design” principles at the forefront; and (v) recommends that legislators, in the biometric data protection laws proposed in US states and, if possible, in existing laws, exclude from the ambit of “biometric identifiers” the collection of “second-order data” (like body geometry) without any first-order data (i.e., a retina or iris scan, a fingerprint or voiceprint, a scan of hand or face geometry, or any other identifying characteristic), as that can reduce unnecessary regulatory pressure on the use of technologies like VFRs for commercial purposes.

Van Loo on Privacy Pretexts

Rory Van Loo (Boston University – School of Law; Yale ISP) has posted “Privacy Pretexts” (Cornell Law Review, Forthcoming) on SSRN. Here is the abstract:

Data privacy’s ethos lies in protecting the individual from institutions. Increasingly, however, institutions are deploying privacy arguments in ways that harm individuals. Platforms like Amazon, Facebook, and Google wall off information from competitors in the name of privacy. Financial institutions under investigation justify withholding files from the Consumer Financial Protection Bureau by saying they must protect sensitive customer data. In these and other ways, the private sector is exploiting privacy to avoid competition and accountability. This Article highlights the breadth of privacy pretexts and uncovers their moral structure. Like most pretexts, there is an element of truth to the claims. But left unchallenged, they will pave a path contrary to privacy’s ethos by blocking individuals’ data allies—the digital helpers, competitors, and regulators who need access to personal data to advance people’s interests. Addressing this move requires recognizing and overcoming deep tensions in the field of privacy. Although data privacy’s roots are in guarding against access, its future depends on promoting allied access.

Recommended.