Ohm & Kim on The Internet of Things

Paul Ohm (Georgetown University Law Center) and Nathaniel Kim have posted “Legacy Switches: A Proposal to Protect Privacy, Security, Competition, and the Environment from the Internet of Things” (Ohio State Law Journal, Forthcoming) on SSRN. Here is the abstract:

The Internet of Things (IoT) promises us a life of automated convenience. Bright and shiny—if cheaply made and plasticky—“smart” thermostats, doorbells, cameras, and fridges carry out the functions once performed by “dumb” equivalents but in an automated, connected, and generally “better” way. This convenience comes at a significant cost. IoT devices listen to, record, and share our behavior, habits, speech, social interactions, and location minute-by-minute, 24/7. All of this information feeds a growing surveillance economy, as this data is bought, sold, and analyzed to predict our behavior, subject us to targeted advertising, and manipulate our actions. Many cheap IoT gadgets are developed on a shoestring budget, leaving them insecure and vulnerable to attack. Malicious actors (and their automated computer programs) target IoT devices, breaking into them to spy on their owners or enlisting them into massive botnets used to cripple websites or critical infrastructure. These problems magnify over time, as IoT vendors focus on selling the next version of the device rather than on securing the preexisting installed base.

Consumers interested in protecting themselves from these harms may decide to replace outdated devices with newer, not-quite-yet-obsolete versions. Doing this does nothing to slow the growth of the surveillance economy and may even exacerbate it, as new devices tend to listen and record more than the models they replace. And even though replacing IoT devices can temporarily forestall security harms, asking consumers to replace all of their smart devices every few years introduces different harms. It harms the environment, filling our landfills with nonbiodegradable plastic housings and circuit parts which leach toxic materials into our air, soil, and water. It forces consumers to waste time, attention, and money tending to hard-wired, infrastructural devices that in the past would have lasted for decades. It compounds the harms of inequality, as those with more disposable income and connections to electricians and contractors have access to better security and privacy than those with less.

We propose a novel, simple, and concrete solution to address all of these problems. Every IoT device manufacturer should build a switch into their device called a “legacy switch.” When the consumer flips this switch, it should disable any smart feature that contributes to security or privacy risks. A legacy switch will render a smart thermostat just a thermostat and a smart doorbell just a doorbell. The switch will disable microphones, sensors, and wireless connectivity. Any user should find it easy to use and easy to verify whether the switch has been toggled.
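
To make the proposed mechanism concrete, here is a minimal sketch (our illustration, not drawn from the article) of how a device's firmware might gate every smart subsystem on a single legacy switch; the SmartThermostat class and its method names are hypothetical.

    # Hypothetical sketch: firmware-level gating of smart features on a single
    # physical "legacy switch". Class and method names are illustrative only.
    from dataclasses import dataclass


    @dataclass
    class SmartThermostat:
        legacy_mode: bool = False      # state of the physical legacy switch
        target_temp_c: float = 20.0

        def set_legacy_switch(self, enabled: bool) -> None:
            # One toggle disables every networked or sensing feature at once.
            self.legacy_mode = enabled

        def microphone_enabled(self) -> bool:
            return not self.legacy_mode

        def wireless_enabled(self) -> bool:
            return not self.legacy_mode

        def upload_telemetry(self, reading_c: float) -> bool:
            # In legacy mode, nothing leaves the device.
            if self.legacy_mode:
                return False
            # (a real device would transmit reading_c to the vendor cloud here)
            return True

        def regulate(self, room_temp_c: float) -> str:
            # The "dumb" core function keeps working either way.
            return "heat on" if room_temp_c < self.target_temp_c else "heat off"


    t = SmartThermostat()
    t.set_legacy_switch(True)                 # user flips the switch
    assert t.regulate(18.0) == "heat on"      # the thermostat still heats
    assert not t.wireless_enabled()           # but radios, microphones, and
    assert not t.upload_telemetry(18.0)       # telemetry are all off

The design point the sketch illustrates is that a single, verifiable toggle cuts off microphones, sensors, and connectivity at once, while the core "dumb" function keeps working.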

This Article proposes legacy switches, elaborates key implementation details for any law requiring them, and connects them to the ongoing conversation about power, privacy, and platforms. The proposal to require legacy switches should be seen as a small but meaningful step toward taming the unchecked and destructive tendencies of the new networked economy.

Hartzog & Richards on Legislating Data Loyalty

Woodrow Hartzog (Boston U Law; Stanford Center for Internet and Society) and Neil M. Richards (Washington U Law; Yale ISP; Stanford Center for Internet and Society) have posted “Legislating Data Loyalty” (97 Notre Dame Law Review Reflection 356 (2022)) on SSRN. Here is the abstract:

Lawmakers looking to embolden privacy law have begun to consider imposing duties of loyalty on organizations trusted with people’s data and online experiences. The idea behind loyalty is simple: organizations should not process data or design technologies that conflict with the best interests of trusting parties. But the logistics and implementation of data loyalty need to be developed if the concept is going to be capable of moving privacy law beyond its “notice and consent” roots to confront people’s vulnerabilities in their relationship with powerful data collectors.

In this short Essay, we propose a model for legislating data loyalty. Our model takes advantage of loyalty’s strengths—it is well-established in our law, it is flexible, and it can accommodate conflicting values. Our Essay also explains how data loyalty can embolden our existing data privacy rules, address emergent dangers, solve privacy’s problems around consent and harm, and establish an antibetrayal ethos as America’s privacy identity.

We propose that lawmakers use a two-step process to (1) articulate a primary, general duty of loyalty, then (2) articulate “subsidiary” duties that are more specific and sensitive to context. Subsidiary duties regarding collection, personalization, gatekeeping, persuasion, and mediation would target the most opportunistic contexts for self-dealing and result in flexible open-ended duties combined with highly specific rules. In this way, a duty of data loyalty is not just appealing in theory—it can be effectively implemented in practice just like the other duties of loyalty our law has recognized for hundreds of years. Loyalty is thus not only flexible, but it is capable of breathing life into America’s historically tepid privacy frameworks.

Christakis et al. on Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes

Theodore Christakis (University Grenoble-Alpes, CESICE, France. Senior Fellow Cross Border Data Forum & Future of Privacy Forum), Karine Bannelier (University Grenoble-Alpes, CESICE, France), Claude Castelluccia, and Daniel Le Métayer (INRIA) have posted “Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 3: Facial Recognition for Authorisation Purposes” on SSRN. Here is the abstract:

This is the first detailed analysis of the most widespread way in which facial recognition is used in public (and private) spaces: for authorisation purposes. This 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection, privacy and human rights; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and European citizens, who will find here an accessible way to understand all these issues.

Part 1 of our “MAPping the use of Facial Recognition in public spaces in Europe” (MAPFRE) project reports explained in detail what “facial recognition” means, addressed the issues surrounding definitions, presented the political landscape and set out the exact material and geographical scope of the study. Part 2 of our Reports presented, in the most accessible way possible, how facial recognition works and produced a “Classification Table” with illustrations, explanations and examples, detailing the uses of facial recognition/analysis in public spaces, in order to help avoid conflating the diverse ways in which facial recognition is used and to bring nuance and precision to the public debate.

This 3rd Report focuses on what is, undoubtedly, the most widespread way in which Facial Recognition Technologies (FRT) are used in public (and private) spaces: Facial Recognition for authorisation purposes.

Facial recognition is often used to authorise access to a space (e.g. access control) or to a service (e.g. to make a payment). Depending on the situation, both verification and identification functionalities (terms that are explained in our 2nd Report) can be used. Millions of people use FRT to unlock their phones every day. Private entities (such as banks) and public authorities (such as the French government, with its now-abandoned ALICEM project) increasingly envisage using FRT as a means of providing strong authentication to control access to private or public online services, such as e-banking or administrative websites concerning income, health or other personal matters. FRT is also increasingly being considered as a means of improving security when controlling and managing access to private areas (building entrances, goods warehouses, etc.).

In public spaces, FRT is being used as an authentication tool for automated international border controls (for example, at airports) or to manage access in places as diverse as airports, stadiums or schools. Before COVID-19, there were many projects planning to use FRT to “accelerate people flows”, “improve the customer experience”, “speed up operations” and “reduce queuing time” for users of different services (e.g. passengers boarding a plane or shopping), and the advent of the COVID-19 pandemic has further boosted calls for investment in FRTs to provide contactless services and reduce the risk of contamination.

Supermarkets such as Carrefour, which was involved in a pilot project in Romania, and transport operators in “smart cities”, such as the EMT bus network in Madrid, which teamed up with Mastercard on a pilot project that lets users pay on EMT buses using FRT, have implemented facial recognition payment systems that allow consumers to complete transactions simply by having their faces scanned. In Europe, similar pilot projects are currently being tested to enable the management of payments in restaurants, cafés and shops.

Despite this widespread existing or projected use of FRT for authorisation purposes, we are not aware of any detailed study focusing on this specific issue. We hope that the present analytical study will help fill this gap by focusing on the use of FRT for authorisation purposes in public spaces in Europe.

We have examined in detail seven “emblematic” cases of FRT being used for authorisation purposes in public spaces in Europe. We have reviewed the documents disseminated by data controllers concerning all of these cases (and several others). We have sought out the reactions of civil society and other actors. We have dived into EU and Member State laws. We have analysed a number of Data Protection Authority (DPA) opinions. We have identified Court decisions of relevance to this matter.

Our panoramic analysis enables the identification of convergences among EU Member States, but also the risks of divergence with regard to certain specific, important ways in which FRTs are used. It also permits an assessment of whether the GDPR, as interpreted by DPAs and Courts around Europe, is a sufficient means of regulating the use of FRT for authorisation purposes in public spaces in Europe – or whether new rules are needed.

What are the main issues in practice in terms of the legal basis invoked by data controllers? What is the difference between “consent” and “voluntary” in relation to the ways in which FRT is used? Are the “alternative (non-biometric) solutions” proposed satisfactory? What are the positions of DPAs and Courts around Europe on the important issues around necessity and proportionality, including the key “less intrusive means” criterion? What are the divergences among DPAs on these issues? Is harmonisation needed and if so, how is this to be achieved? What are the lessons learned concerning the issue of DPIAs and evaluations? These are some of the questions examined in this report.

Our study ends with a series of specific recommendations addressed to data controllers, the EDPB, and stakeholders making proposals for new FRT rules.

We make three recommendations vis-à-vis data controllers wishing to use facial recognition applications for authorisation purposes:

1) Data controllers should understand that they bear the burden of proof in meeting all of the GDPR requirements, including understanding exactly how the necessity and proportionality principles, as well as the principles relating to the processing of personal data, should be applied in this field.

2) Data controllers should understand the limits of the “cooperative” use of facial recognition when used for authorisation purposes. Deployments of FR systems for authorisation purposes in public spaces in Europe have almost always been based on consent or have been used in a “voluntary” way. However, this does not mean that consent is almighty. First, there are situations (such as the various failed attempts to introduce FRT in schools in Europe) where consent could not be justified as being “freely given” because of an imbalance of power between users and data controllers. Second, consensual and other “voluntary” uses of FRT imply the existence of alternative solutions which must be as available and as effective as those that involve the use of FRT.

3) Data controllers should conduct DPIAs and evaluation reports and publish them to the extent possible and compatible with industrial secrets and property rights. Our study found that there is a serious lack of information available on DPIAs and evaluations of the effectiveness of FRT systems. As we explain, this is regrettable for several reasons.

We make two recommendations in relation to the EDPB:

1) The EDPB should ensure that there is harmonisation on issues such as the use of centralised databases and the principles relating to the processing of personal data. Diverging interpretations of the GDPR on issues such as the implementation of IATA’s “One ID” concept for air travel or “pay by face” applications in Europe could create legal tension and operational difficulties.

2) The EDPB could also produce guidance on the approach that should be followed both for DPIAs and evaluation reports where FRT authorisation applications are concerned.

Finally, a recommendation regarding policymakers and other stakeholders formulating new legislative proposals: there is often a great deal of confusion about the different proposals that concern the regulation of facial recognition. It is therefore important for all stakeholders to distinguish the numerous ways in which FRT is used for authorisation purposes from other use cases and to target their proposals accordingly. For instance, proposals calling for a broad ban on “biometric recognition in public spaces” are likely to result in all of the ways in which FRT is used for authorisation purposes being prohibited. Policymakers should take this into consideration, and make sure that this is their intention, before making such proposals.

Jain on Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India

Chirag Jain (NYU Law) has posted “Virtual Fitting Rooms: A Review of Underlying Artificial Intelligence Technologies, Current Developments, and the Biometric Privacy Laws in the US, EU and India” on SSRN. Here is the abstract:

Part of this paper focuses on how retail fashion stores leverage AI algorithms to offer enhanced interactive features in virtual try-on mirrors, and the other part analyzes the current state of biometric data privacy laws in the US, EU, and India, and their impact on the use of AR technologies in the retail fashion industry. Specifically, the author has (i) attempted a deep dive into the architectural design of virtual fitting rooms (one of the technologies that have recently gained traction in law firm articles discussing the surge in biometric privacy litigation) and analyzed several advanced AI techniques; (ii) discussed the ethical issues that can arise from the use of the underlying AI technologies in VFRs; (iii) briefly compared and analyzed the biometric privacy law landscape in the US, EU, and India, and, in the US especially, analyzed the approach of the Illinois Biometric Information Privacy Act, which has remained a cause of concern for various businesses engaged in the collection of biometric data; (iv) suggested various recommendations for technology vendors and fashion brands to design VFRs with “privacy by design” principles at the forefront; and (v) lastly, made a recommendation for legislators, suggesting that, in the biometric data protection laws proposed in each US state (and, if possible, in existing laws), the collection of “second-order data” (like body geometry) without any first-order data (i.e., a retina or iris scan, a fingerprint or voiceprint, a scan of hand or face geometry, or any other identifying characteristic) should be excluded from the ambit of “biometric identifiers,” as that can reduce unnecessary regulatory pressure on the use of technologies like VFRs for commercial purposes.

Van Loo on Privacy Pretexts

Rory Van Loo (Boston University – School of Law; Yale ISP) has posted “Privacy Pretexts” (Cornell Law Review, Forthcoming) on SSRN. Here is the abstract:

Data privacy’s ethos lies in protecting the individual from institutions. Increasingly, however, institutions are deploying privacy arguments in ways that harm individuals. Platforms like Amazon, Facebook, and Google wall off information from competitors in the name of privacy. Financial institutions under investigation justify withholding files from the Consumer Financial Protection Bureau by saying they must protect sensitive customer data. In these and other ways, the private sector is exploiting privacy to avoid competition and accountability. This Article highlights the breadth of privacy pretexts and uncovers their moral structure. Like most pretexts, there is an element of truth to the claims. But left unchallenged, they will pave a path contrary to privacy’s ethos by blocking individuals’ data allies—the digital helpers, competitors, and regulators who need access to personal data to advance people’s interests. Addressing this move requires recognizing and overcoming deep tensions in the field of privacy. Although data privacy’s roots are in guarding against access, its future depends on promoting allied access.

Recommended.

Marlan on The Dystopian Right of Publicity

Dustin Marlan (University of Massachusetts School of Law) has posted “The Dystopian Right of Publicity” (Berkeley Technology Law Journal, Vol. 37 2022) on SSRN. Here is the abstract:

Our society frequently describes privacy problems with the dystopian metaphor of George Orwell’s 1984. Understood through the Orwellian metaphor—and particularly the “Big Brother is watching you” maxim—privacy rights are forcefully invaded by the government’s constant surveillance and disclosures of personal information. Yet, privacy’s coined opposite, the right of publicity—“the right of every human being to control the commercial use of his or her identity”—still lacks an appropriate metaphor, making it difficult to conceptualize and thus to regulate effectively.

This Article suggests that the problems with a commercially transferable right of publicity can be usefully analogized to another chilling dystopia, Aldous Huxley’s Brave New World. Huxley wrote Brave New World as an expression of the anxiety of losing one’s individual identity in a technology-driven future. In the novel, Huxley envisioned a utilitarian society controlled through technological manipulation, conspicuous consumption, social conditioning, and entertainment addiction. In contrast to Big Brother’s forceful coercion, pacified citizens in the “World State” society willingly participate in their own servitude.

Commentators often focus on the fact that litigated publicity cases tend to overprotect celebrities’ fame to the detriment of creators’ First Amendment rights. The vast majority of publicity rights, however, actually belong to ordinary citizens. The Huxleyan metaphor’s depiction of technological manipulation, social conditioning, and identity loss thus reveals the constant, but constantly overlooked, publicity problem this Article labels the “pleasurable servitude.” In effect, by consenting to terms of service on social media, ordinary citizens voluntarily license rights in their identities to internet platforms in exchange for access to the pleasures of digital realities. Through this unregulated mass transfer of publicity rights, social networks strip away their users’ identities and sell them to advertisers as commodities. This Article claims that the pleasurable servitude is a form of surveillance capitalism deserving of regulation by means of “publicity policies” that would function analogously to privacy policies.

Li on Algorithmic Destruction

Tiffany C. Li (University of New Hampshire School of Law; Yale ISP) has posted “Algorithmic Destruction” (SMU Law Review, Forthcoming) on SSRN. Here is the abstract:

Contemporary privacy law does not go far enough to protect our privacy interests, particularly where artificial intelligence and machine learning are concerned. While many have written on problems of algorithmic bias or deletion, this article introduces the novel concept of the “algorithmic shadow,” the persistent imprint of data in a trained machine learning model, and uses the algorithmic shadow as a lens through which to view the failures of data deletion in dealing with the realities of machine learning. This article is also the first to substantively critique the novel privacy remedy of algorithmic disgorgement, also known as algorithmic destruction.

What is the algorithmic shadow? Simply put, when you feed a set of specific data into the training of a machine learning model, that data shapes the model that results from the training. Even if you later delete the data from the training data set, the already-trained model still contains a persistent “shadow” of the deleted data. The algorithmic shadow describes the persistent imprint of the data that has been fed into a machine learning model and used to refine that machine learning system.
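
The idea can be made concrete with a toy example (ours, not the article's): a trivial "model" whose single learned parameter is just the mean of its training data keeps the imprint of a record even after that record is deleted from the training set.

    # Toy illustration of an "algorithmic shadow": deleting a record from the
    # training set does not change a model that was already trained on it.
    from statistics import mean

    def train(dataset):
        # Stand-in "training": the model's single learned parameter is the mean.
        return mean(dataset)

    training_data = [2.0, 4.0, 6.0, 100.0]   # 100.0 will later be "deleted"
    model = train(training_data)             # model == 28.0, pulled up by 100.0

    training_data.remove(100.0)              # honor a deletion request on the data

    print(model)                  # still 28.0: the deleted record's shadow persists
    print(train(training_data))   # 4.0: only retraining, or destroying the model
                                  # outright, removes the record's influence

Real machine learning models behave the same way at far greater scale: once trained, their parameters are unaffected by edits to the source data set unless the model is retrained or destroyed.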

The failure of data deletion to resolve the privacy losses caused by algorithmic shadows highlights the ineffectiveness of data deletion as a right and a remedy. Algorithmic destruction (deletion of models or algorithms trained on misbegotten data) has emerged as an alternative, or perhaps supplement, to data deletion. While algorithmic destruction or disgorgement may resolve some of the failures of data deletion, this remedy and potential right is also not without its own drawbacks.

This article has three goals: First, the article introduces and defines the concept of the algorithmic shadow, a novel concept that has so far evaded significant legal scholarly discussion, despite its importance in future discussions of artificial intelligence and privacy law. Second, the article explains why the algorithmic shadow exposes and exacerbates existing problems with data deletion as a privacy right and remedy. Finally, the article examines algorithmic destruction as a potential algorithmic right and algorithmic remedy, comparing it with data deletion, particularly in light of algorithmic shadow harms.

Solove on The Limitations of Privacy Rights

Daniel J. Solove (George Washington University Law School) has posted “The Limitations of Privacy Rights” (98 Notre Dame Law Review, forthcoming 2023) on SSRN. Here is the abstract:

Individual privacy rights are often at the heart of information privacy and data protection laws. The most comprehensive set of rights, from the European Union’s General Data Protection Regulation (GDPR), includes the right to access, right to rectification (correction), right to erasure, right to restriction, right to data portability, right to object, and right to not be subject to automated decisions. Privacy laws around the world include many of these rights in various forms.

In this article, I contend that although rights are an important component of privacy regulation, rights are often asked to do far more work than they are capable of doing. Rights can only give individuals a small amount of power. Ultimately, rights are at most capable of being a supporting actor, a small component of a much larger architecture. I advance three reasons why rights cannot serve as the bulwark of privacy protection. First, rights put too much onus on individuals when many privacy problems are systemic. Second, individuals lack the time and expertise to make difficult decisions about privacy, and rights cannot practically be exercised at scale with the number of organizations that process people’s data. Third, privacy cannot be protected by focusing solely on the atomistic individual. The personal data of many people is interrelated, and people’s decisions about their own data have implications for the privacy of other people.

The main goal of privacy rights is to provide individuals with control over their personal data. However, effective privacy protection involves not just facilitating individual control, but also bringing the collection, processing, and transfer of personal data under control. Privacy rights are not designed to achieve the latter goal, and they fail at the former.

After discussing these overarching reasons why rights are insufficient for the oversized role they currently play in privacy regulation, I discuss the common privacy rights and why each falls short of providing significant privacy protection. For each right, I propose broader structural measures that can achieve its underlying goals in a more systematic, rigorous, and less haphazard way.

Recommended.

Scholz on Private Rights of Action in Privacy Law

Lauren Henry Scholz (Florida State University – College of Law) has posted “Private Rights of Action in Privacy Law” (William & Mary Law Review, Forthcoming) on SSRN. Here is the abstract:

Many privacy advocates assume that the key to providing individuals with more privacy protection is strengthening the power government has to directly sanction actors that hurt the privacy interests of citizens. This Article contests the conventional wisdom, arguing that private rights of action are essential for privacy regulation. First, I show how private rights of action make privacy law regimes more effective in general. Private rights of action are the most direct regulatory access point to the private sphere. They leverage private expertise and knowledge, create accountability through discovery, and have expressive value in creating privacy-protective norms. Then, to illustrate the general principle, I provide examples of how private rights of action can improve privacy regulation in a suite of key modern privacy problems. We cannot afford to leave private rights of action out of privacy reform.

Chander & Schwartz on Privacy and/or Trade

Anupam Chander (Georgetown University Law Center) and Paul M. Schwartz (University of California, Berkeley – School of Law) have posted “Privacy and/or Trade” on SSRN. Here is the abstract:

International privacy and trade law developed together, but now are engaged in significant conflict. Current efforts to reconcile the two are likely to fail, and the result for globalization favors the largest international companies able to navigate the regulatory thicket. In a landmark finding, this Article shows that more than sixty countries outside the European Union are now evaluating whether foreign countries have privacy laws that are adequate to receive personal data. This core test for deciding on the permissibility of global data exchanges is currently applied in a nonuniform fashion with ominous results for the data flows that power trade today.

The promise of a global internet, with access for all, including companies from the Global South, is increasingly remote. This Article uncovers the forgotten and fateful history of the international regulation of privacy and trade that led to our current crisis and evaluates possible solutions to the current conflict. It proposes a Global Agreement on Privacy enforced within the trade order, but with external data privacy experts developing the treaty’s substantive norms.