Solove on The Limitations of Privacy Rights

Daniel J. Solove (George Washington University Law School) has posted “The Limitations of Privacy Rights” (98 Notre Dame Law Review, forthcoming 2023) on SSRN. Here is the abstract:

Individual privacy rights are often at the heart of information privacy and data protection laws. The most comprehensive set of rights, from the European Union’s General Data Protection Regulation (GDPR), includes the right to access, right to rectification (correction), right to erasure, right to restriction, right to data portability, right to object, and right to not be subject to automated decisions. Privacy laws around the world include many of these rights in various forms.

In this article, I contend that although rights are an important component of privacy regulation, rights are often asked to do far more work than they are capable of doing. Rights can only give individuals a small amount of power. Ultimately, rights are at most capable of being a supporting actor, a small component of a much larger architecture. I advance three reasons why rights cannot serve as the bulwark of privacy protection. First, rights put too much onus on individuals when many privacy problems are systemic. Second, individuals lack the time and expertise to make difficult decisions about privacy, and rights cannot practically be exercised at scale with the number of organizations that process people’s data. Third, privacy cannot be protected by focusing solely on the atomistic individual. The personal data of many people is interrelated, and people’s decisions about their own data have implications for the privacy of other people.

The main goal of privacy rights is to provide individuals with control over their personal data. However, effective privacy protection involves not just facilitating individual control, but also bringing the collection, processing, and transfer of personal data under control. Privacy rights are not designed to achieve the latter goal, and they fail at the former.

After discussing these overarching reasons why rights are insufficient for the oversized role they currently play in privacy regulation, I discuss the common privacy rights and why each falls short of providing significant privacy protection. For each right, I propose broader structural measures that can achieve its underlying goals in a more systematic, rigorous, and less haphazard way.


Scholz on Private Rights of Action in Privacy Law

Lauren Henry Scholz (Florida State University – College of Law) has posted “Private Rights of Action in Privacy Law” (William & Mary Law Review, Forthcoming) on SSRN. Here is the abstract:

Many privacy advocates assume that the key to providing individuals with more privacy protection is strengthening the power government has to directly sanction actors that hurt the privacy interests of citizens. This Article contests the conventional wisdom, arguing that private rights of action are essential for privacy regulation. First, I show how private rights of action make a privacy law regime more effective in general. Private rights of action are the most direct regulatory access point to the private sphere. They leverage private expertise and knowledge, create accountability through discovery, and have expressive value in creating privacy-protective norms. Then, to illustrate the general principle, I provide examples of how private rights of action can improve privacy regulation in a suite of key modern privacy problems. We cannot afford to leave private rights of action out of privacy reform.

Chander & Schwartz on Privacy and/or Trade

Anupam Chander (Georgetown University Law Center) and Paul M. Schwartz (University of California, Berkeley – School of Law) have posted “Privacy and/or Trade” on SSRN. Here is the abstract:

International privacy and trade law developed together, but now are engaged in significant conflict. Current efforts to reconcile the two are likely to fail, and the result for globalization favors the largest international companies able to navigate the regulatory thicket. In a landmark finding, this Article shows that more than sixty countries outside the European Union are now evaluating whether foreign countries have privacy laws that are adequate to receive personal data. This core test for deciding on the permissibility of global data exchanges is currently applied in a nonuniform fashion with ominous results for the data flows that power trade today.

The promise of a global internet, with access for all, including companies from the Global South, is increasingly remote. This Article uncovers the forgotten and fateful history of the international regulation of privacy and trade that led to our current crisis and evaluates possible solutions to the current conflict. It proposes a Global Agreement on Privacy enforced within the trade order, but with external data privacy experts developing the treaty’s substantive norms.

Swire & Kennedy-Mayo on The Effects of Data Localization on Cybersecurity

Peter Swire (Georgia Institute of Technology) and DeBrae Kennedy-Mayo (same) have posted “The Effects of Data Localization on Cybersecurity” on SSRN. Here is the abstract:

This paper is the first systematic examination of the effects of data localization laws on cybersecurity. This paper focuses on the effects of “hard” data localization, where transfer of data is prohibited to other countries. Other “softer” versions of data localization also exist, such as where a country requires a copy of data to be stored or mirrored in the country, but transfer of the data remains lawful. The discussion includes both de jure and de facto effects, including China’s explicit laws, recent enforcement actions in the European Union, and proposed privacy legislation in India. The focus is on effects on cybersecurity defense, rather than offensive cyber measures.

Part I provides background. Part II examines privacy and non-privacy reasons driving localization laws, including examining ways that cybersecurity might either reinforce privacy or exist in tension with it. Part III addresses the research for this paper. In addition to a traditional literature review, we reviewed approximately 200 comments submitted to the European Data Protection Board in late 2020 concerning data transfers. Approximately 25% of the comments discussed data localization or a similar concept.

Part IV provides a new categorization of the effects of data localization on cybersecurity. First, our analysis shows that data localization would threaten an organization’s ability to achieve integrated management of cybersecurity risk. Thirteen of the fourteen ISO 27002 controls, as well as multiple sub-controls, would be negatively affected by data localization. As a specific finding, required localization in two or more nations clearly restricts the ability to conduct integrated cybersecurity management.

Second, the analysis explains how data localization pervasively limits provision of cybersecurity-related services by third parties, a global market of roughly $200 billion currently. Notably, data localization laws supported in the name of cybersecurity often undermine cybersecurity – purchasers in the locality are deprived of best-of-breed cybersecurity services, thereby making them systematically easier targets for attackers. Third, data localization threatens unpaid cooperation on cybersecurity defense. Notably, localization undermines information sharing for cybersecurity purposes, which policy leaders have emphasized as vital to effective cybersecurity.

Finally, until and unless proponents of localization address these concerns, scholars, policymakers, and practitioners have strong reason to consider significant cybersecurity harms in any overall analysis of whether to require localization.

Froomkin, Arencibia & Colangelo on Safety as Privacy

A. Michael Froomkin (University of Miami – School of Law; Yale ISP), Phillip J. Arencibia (Duane Morris LLP), and Zak Colangelo (Lewis Brisbois Bisgaard & Smith LLP) have posted “Safety as Privacy” on SSRN. Here is the abstract:

New technologies, such as internet-connected home devices we have come to call ‘the Internet of Things (IoT)’, connected cars, sensors, drones, internet-connected medical devices, and workplace monitoring of every sort, create privacy gaps that can cause danger to people. In Privacy as Safety, 95 Wash. L. Rev. 141 (2020), two of us sought to emphasize the deep connection between privacy and safety, in order to lay a foundation for arguing that U.S. administrative agencies with a safety mission can and should make privacy protection one of their goals. This article builds on that foundation with a detailed look at the safety missions of several agencies. In each case, we argue that the agency has the discretion, if not necessarily the duty, to demand enhanced privacy practices from those within its jurisdiction, and that the agency should make use of that discretion.

This is the first article in the legal literature to identify the substantial gains to personal privacy that several U.S. agencies tasked with protecting safety could achieve under their existing statutory authority. Examples of agencies with untapped potential include the Federal Trade Commission (FTC), the Consumer Product Safety Commission (CPSC), the Food and Drug Administration (FDA), the National Highway Traffic Safety Administration (NHTSA), the Federal Aviation Administration (FAA), and the Occupational Safety and Health Administration (OSHA). Five of these agencies have an explicit duty to protect the public against threats to safety (or against risk of injury) and thus – as we have argued previously – should protect the public’s privacy when the absence of privacy can create a danger. The FTC’s general authority to fight unfair practices in commerce enables it to regulate commercial practices threatening consumer privacy. The FAA’s duty to ensure air safety could extend beyond airworthiness to regulating spying via drones. The CPSC’s authority to protect against unsafe products authorizes it to regulate products putting consumers’ physical and financial privacy at risk, thus sweeping in many products associated with the IoT. NHTSA’s authority to regulate dangerous practices on the road encompasses authority to require that smart car manufacturers include precautions protecting drivers from misuse of connected car data, whether arising from the carmaker’s intention or from security lapses caused by its inattention. Lastly, OSHA’s authority to require safe work environments encompasses protecting workers from privacy risks that threaten their physical and financial safety on the job.

Arguably, an omnibus federal statute regulating data privacy would be preferable to doubling down on the U.S.’s notoriously sectoral approach to privacy regulation. Here, however, we say only that until the political stars align for some future omnibus proposal, there is value in exploring methods that are within our current means. It may be only second best, but it is also much easier to implement. Thus, we offer reasonable legal constructions of certain extant federal statutes that would justify more extensive privacy regulation in the name of providing enhanced safety, a regime that, we argue, would be a substantial improvement over the status quo yet would not require any new legislation, just a better understanding of certain agencies’ current powers and authorities. Agencies with suitably capacious safety missions should take the opportunity to regulate to protect relevant personal privacy without delay.

Solove & Keats Citron on Standing and Privacy Harms: A Critique of TransUnion v. Ramirez

Daniel J. Solove (George Washington University Law School) and Danielle Keats Citron (University of Virginia School of Law) have posted “Standing and Privacy Harms: A Critique of TransUnion v. Ramirez” (101 Boston University Law Review Online 62 (2021)) on SSRN. Here is the abstract:

Through the standing doctrine, the U.S. Supreme Court has taken a new step toward severely limiting the effective enforcement of privacy laws. The recent Supreme Court decision, TransUnion v. Ramirez (U.S. June 25, 2021), revisits the issue of standing and privacy harms under the Fair Credit Reporting Act (FCRA) that began with Spokeo v. Robins, 136 S. Ct. 1540 (2016). In TransUnion, a group of plaintiffs sued TransUnion under FCRA for falsely labeling them as potential terrorists in their credit reports. The Court concluded that only some plaintiffs had standing – those whose credit reports were disseminated. Plaintiffs whose credit reports weren’t disseminated lacked a “concrete” injury and accordingly lacked standing – even though Congress explicitly granted them a private right of action to sue for violations like this and even though a jury had found that TransUnion was at fault.

In this essay, Professors Daniel J. Solove and Danielle Keats Citron engage in an extensive critique of the TransUnion case. They contend that existing standing doctrine incorrectly requires concrete harm. For most of U.S. history, standing required only an infringement on rights. Moreover, when assessing harm, the Court applies a crabbed and inadequate understanding of privacy harms. Additionally, allowing courts to nullify private rights of action in federal privacy laws is a usurpation of legislative power that upends the compromises and balances that Congress establishes in laws. Private rights of action are essential enforcement mechanisms.

Richards on Why Privacy Matters

Neil M. Richards (Washington University School of Law) has posted “Why Privacy Matters: An Introduction” (Oxford University Press 2021) on SSRN. Here is the abstract:

Everywhere we look, companies and governments are spying on us–seeking information about us and everyone we know. Ad networks monitor our web-surfing to send us “more relevant” ads. The NSA screens our communications for signs of radicalism. Schools track students’ emails to stop school shootings. Cameras guard every street corner and traffic light, and drones fly in our skies. Databases of human information are assembled for purposes of “training” artificial intelligence programs designed to predict everything from traffic patterns to the location of undocumented migrants. We’re even tracking ourselves, using personal electronics like Apple watches, Fitbits, and other gadgets that have made the “quantified self” a realistic possibility. As Facebook’s Mark Zuckerberg once put it, “the Age of Privacy is over.” But Zuckerberg and others who say “privacy is dead” are wrong. In Why Privacy Matters, Neil Richards explains that privacy isn’t dead, but rather up for grabs.

Richards shows how the fight for privacy is a fight for power that will determine what our future will look like, and whether it will remain fair and free. If we want to build a digital society that is consistent with our hard-won commitments to political freedom, individuality, and human flourishing, then we must make a meaningful commitment to privacy. Privacy matters because good privacy rules can promote the essential human values of human identity, political freedom, and consumer protection. If we want to preserve our commitments to these precious yet fragile values, we will need privacy rules. Richards explains why privacy remains so important and offers strategies that can help us protect it from the forces that are working to undermine it. Pithy and forceful, this is essential reading for anyone interested in a topic that sits at the center of so many current problems.

Selinger & Rhee on Normalizing Surveillance

Evan Selinger (Rochester Institute of Technology) and Judy Hyojoo Rhee (Duke University) have posted “Normalizing Surveillance” (Northern European Journal of Philosophy 22, 1 (2021): 49-74) on SSRN. Here is the abstract:

Definitions of privacy change, as do norms for protecting it. Why, then, are privacy scholars and activists currently worried about “normalization”? This essay explains what normalization means in the context of surveillance concerns and clarifies why normalization has significant governance consequences. We emphasize two things. First, the present is a transitional moment in history. AI-infused surveillance tools offer a window into the unprecedented dangers of automated real-time monitoring and analysis. Second, privacy scholars and activists can better integrate supporting evidence to counter skepticism about their most disturbing and speculative claims about normalization. Empirical results in moral psychology support the assertion that widespread surveillance typically will lead people to become favorably disposed toward it. If this causal dynamic is pervasive, it can diminish autonomy and contribute to a slippery slope trajectory that diminishes privacy and civil liberties.

Cooper on Congressional Surveillance

Aaron Cooper (Georgetown University Law Center) has posted “Congressional Surveillance” (70 American University Law Review 1799 (2021)) on SSRN. Here is the abstract:

In recent years, Congress has increasingly used electronic surveillance in high-profile investigations. Reactions to what this Article calls “congressional surveillance” indicate a deep unease among both legal scholars and the broader public about the nature of Congress’s surveillance authority and its normative implications. Despite our ongoing preoccupation with government surveillance, congressional surveillance remains largely unexplored. There is virtually no discussion of how congressional surveillance is treated under key statutory and Fourth Amendment constraints; no consideration of the process or political limits of congressional surveillance; and little scrutiny of congressional surveillance as a tool within the separation of powers.

This Article fills that gap by presenting the first scholarly treatment of congressional surveillance. It argues that to address congressional surveillance, we must first understand its hybrid features of both government surveillance and congressional political power.

Specifically, the Article makes two contributions. First, the Article argues that congressional surveillance operates under fundamentally different constraints than traditional government surveillance. Congressional processes and politics (“process limits”) constrain congressional surveillance more than established statutory and Fourth Amendment mechanisms (“external limits”) or the inherent constraints of congressional authority (“internal limits”).

Second, this Article argues that congressional surveillance is justified as an essential practice within the separation of powers. It offers legitimate benefits to Congress in inter-branch information disputes with the executive and in carrying out basic digital governance. The Article also argues that the Supreme Court’s decision in Trump v. Mazars USA, LLP mistakes the privacy concern that congressional surveillance poses for a threat to the separation of powers. At the same time, this Article rejects the traditional law enforcement approach to protecting individual privacy through judicial gatekeeping. Instead, the Article argues that the treatment of congressional surveillance must account for individual privacy interests while preserving Congress’s ability to assert itself as a co-equal branch—not the Mazars approach, and not a law enforcement approach, but something different.

Hamilton et al. on Developing a Measure of Social, Ethical, and Legal Content for Intelligent Cognitive Assistants

Clovia Hamilton (SUNY Korea), William Swart (East Carolina University), and Gerald M. Stokes (SUNY Korea) have posted “Developing a Measure of Social, Ethical, and Legal Content for Intelligent Cognitive Assistants” (Journal of Strategic Innovation and Sustainability 2021) on SSRN. Here is the abstract:

We address the issue of consumer privacy against the backdrop of the national priority of maintaining global leadership in artificial intelligence, the ongoing research in Artificial Cognitive Assistants, and the explosive growth in the development and application of Voice Activated Personal Assistants (VAPAs) such as Alexa and Siri, spurred on by the needs and opportunities arising out of the COVID-19 global pandemic. We first review the growth and associated legal issues of the use of VAPAs in private homes, banks, healthcare, and education. We then summarize the policy guidelines for the development of VAPAs and classify these into five major categories with associated traits. We follow by developing a relative importance weight for each of the traits and categories, and suggest the establishment of a rating system related to the legal, ethical, functional, and social content policy guidelines established by these organizations. We suggest the establishment of an agency that will use the proposed rating system to inform customers of the implications of adopting a particular VAPA in their sphere.