Solove & Hartzog on Data Vu: Why Breaches Involve the Same Stories Again and Again

Daniel J. Solove (George Washington University Law School) and Woodrow Hartzog (Boston University School of Law; Stanford Law School Center for Internet and Society) have posted “Data Vu: Why Breaches Involve the Same Stories Again and Again” (Scientific American (July 2022)) on SSRN. Here is the abstract:

This short essay discusses why data security law fails to effectively combat data breaches, which continue to increase. With a few exceptions, current laws about data security do not look too far beyond the blast radius of the most recent data breach. Only so much marginal benefit can be had by increasing fines to breached entities. Instead, the law should target a broader set of risky actors, such as producers of insecure software and ad networks that facilitate the distribution of malware. Organizations that have breaches almost always could have done better, but there’s only so much to be gained from beating them up. Laws could focus on holding other actors more accountable, so responsibility is more aptly distributed.

Zuboff on Surveillance Capitalism or Democracy? The Death Match of Institutional Orders and the Politics of Knowledge

Shoshana Zuboff (Harvard Business School; Harvard Kennedy School) has posted “Surveillance Capitalism or Democracy? The Death Match of Institutional Orders and the Politics of Knowledge in Our Information Civilization” (Organization Theory, 3(3), 2022) on SSRN. Here is the abstract:

Surveillance capitalism is what happened when US democracy stood down. Two decades later, it fails any reasonable test of responsible global stewardship of digital information and communications. The abdication of the world’s information spaces to surveillance capitalism has become the meta-crisis of every republic because it obstructs solutions to all other crises. The surveillance capitalist giants (Google, Apple, Facebook, Amazon, Microsoft, and their ecosystems) now constitute a sweeping political-economic institutional order that exerts oligopolistic control over most digital information and communication spaces, systems, and processes.

The commodification of human behavior operationalized in the secret massive-scale extraction of human-generated data is the foundation of surveillance capitalism’s two-decade arc of institutional development. However, when revenue derives from commodification of the human, the classic economic equation is scrambled. Imperative economic operations entail accretions of governance functions and impose substantial social harms. Concentration of economic power produces collateral concentrations of governance and social powers. Oligopoly in the economic realm shades into oligarchy in the societal realm. Society’s ability to respond to these developments is thwarted by category errors. Governance incursions and social harms such as control over AI or rampant disinformation are too frequently seen as distinct crises and siloed, each with its own specialists and prescriptions, rather than understood as organic effects of causal economic operations.

In contrast, this paper explores surveillance capitalism as a unified field of institutional development. Its four already visible stages of development are examined through a two-decade lens on expanding economic operations and their societal effects, including extraction and the wholesale destruction of privacy, the consequences of blindness-by-design in human-to-human communications, the rise of AI dominance and epistemic inequality, novel achievements in remote behavioral actuation such as the Trump 2016 campaign, and Apple-Google’s leverage of digital infrastructure control to subjugate democratic governments desperate to fight a pandemic. Structurally, each stage creates the conditions and constructs the scaffolding for the next, and each builds on what went before. Substantively, each stage is characterized by three vectors of accomplishment: novel economic operations, governance carve-outs, and fresh social harms. These three dimensions weave together across time in a unified architecture of institutional development. Later-stage harms are revealed as effects of the foundational-stage economic operations required for commodification of the human.

Surveillance capitalism’s development is understood in the context of a larger contest with the democratic order—the only competing institutional order that poses an existential threat. The democratic order retains the legitimate authority to contradict, interrupt, and abolish surveillance capitalism’s foundational operations. Its unique advantages include the ability to inspire action and the necessary power to make, impose, and enforce the rule of law. While the liberal democracies have begun to engage with the challenges of regulating today’s privately owned information spaces, I argue that regulation of institutionalized processes that are innately catastrophic for democratic societies cannot produce desired outcomes. The unified field perspective suggests that effective democratic contradiction aimed at eliminating later-stage harms, such as “disinformation,” depends upon the abolition and reinvention of the early-stage economic operations that operationalize the commodification of the human, the source from which such harms originate.

The clash of institutional orders is a death match over the politics of knowledge in the digital century. Surveillance capitalism’s antidemocratic economic imperatives produce a zero-sum dynamic in which the deepening order of surveillance capitalism propagates democratic disorder and deinstitutionalization. Without new public institutions, charters of rights, and legal frameworks purpose-built for a democratic digital century, citizens march naked, easy prey for all who steal and hunt with human data. Only one of these contesting orders will emerge with the authority and power to rule, while the other will drift into deinstitutionalization, its functions absorbed by the victor. Will these contradictions ultimately defeat surveillance capitalism, or will democracy suffer the greater injury? It is possible to have surveillance capitalism, and it is possible to have a democracy. It is not possible to have both.

Blanke on The CCPA, ‘Inferences Drawn,’ and Federal Preemption

Jordan Blanke (Mercer University) has posted “The CCPA, ‘Inferences Drawn,’ and Federal Preemption” (Richmond Journal of Law and Technology, Vol. 29, No. 1 (Forthcoming 2022)) on SSRN. Here is the abstract:

In 2018 California passed an extensive data privacy law. One of its most significant features was the inclusion of “inferences drawn” within its definition of “personal information.” The law was significantly strengthened in 2020 with the expansion of rights for California consumers, new obligations on businesses, including the incorporation of GDPR-like principles of data minimization, purpose limitation, and storage limitation, and the creation of an independent agency to enforce these laws. In 2022 the Attorney General of California issued an Opinion that provided for an extremely broad interpretation of “inferences drawn.” Thereafter the American Data Privacy and Protection Act was introduced in Congress. It does not provide nearly the protection for inferences that California law does, but it threatens to preempt almost all of it. This article argues that, given the importance of California finally being able to regulate inferences drawn, any federal bill must either provide similar protection, exclude California law from preemption, or be opposed.

Abraha on The Role of Article 88 GDPR in Upholding Privacy in the Workplace

Halefom H. Abraha (University of Oxford) has posted “A pragmatic compromise? The role of Article 88 GDPR in upholding privacy in the workplace” on SSRN. Here is the abstract:

The distinct challenges of data processing at work have led to long-standing calls for sector-specific regulation. This leaves the European legislature with a dilemma. While the distinct features of employee data processing give rise to novel issues that cannot adequately be addressed by an omnibus data protection regime, a combination of legal, political, and constitutional factors has hindered efforts towards adopting harmonised employment-specific legislation at the EU level. The ‘opening clause’ in Art. 88 GDPR aims to square this circle: it seeks to ensure adequate and consistent protection of employees while also promoting regulatory diversity, respecting national peculiarities, and protecting Member State autonomy. This paper examines whether the opening clause has delivered on its promises. It argues that while the compromise has delivered on some of its promises in promoting diverse and innovative regulatory approaches, it also runs counter to the fundamental objectives of the GDPR itself by creating further fragmentation, legal uncertainty, and inconsistent implementation, interpretation, and enforcement of data protection rules.

Botero Arcila on The Case for Local Data Sharing Ordinances

Beatriz Botero Arcila (Sciences Po Law; Harvard Berkman Klein) has posted “The Case for Local Data Sharing Ordinances” (William & Mary Bill of Rights Journal) on SSRN. Here is the abstract:

Cities in the US have started to enact data-sharing rules and programs to access some of the data that technology companies operating under their jurisdiction – like short-term rental or ride-hailing companies – collect. This information allows cities to adapt to the challenges and benefits of the digital information economy. It allows them to understand these companies’ impact on congestion, the housing market, the local job market, and even the use of public spaces. It also empowers them to act accordingly by, for example, setting vehicle caps or mandating a tailored minimum pay for gig workers. These companies, however, sometimes argue that sharing this information infringes their users’ privacy rights as well as their own privacy rights, because this information is theirs; it is part of their business records. The question is thus what those rights are, and whether it should and could be possible for local governments to access that information to advance equity and sustainability without harming the legitimate privacy interests of both individuals and companies. This Article argues that within current Fourth Amendment doctrine and privacy law there is space for data-sharing programs. Privacy law, however, is being mobilized to alter the distribution of power and welfare between local governments, companies, and citizens within current digital information capitalism, to extend those rights beyond their fair share, and to preempt permissible data-sharing requests. The Article warns that if the companies succeed in their challenges, privacy law will have helped shield corporate power from regulatory oversight, while still leaving individuals largely unprotected and further subordinating local governments to corporate interests.

Richards on The GDPR as Privacy Pretext and the Problem of Co-Opting Privacy

Neil M. Richards (Washington U Law) has posted “The GDPR as Privacy Pretext and the Problem of Co-Opting Privacy” (73 Hastings Law Journal 1511 (2022)) on SSRN. Here is the abstract:

Privacy and data protection law’s expansion brings with it opportunities for mischief as privacy rules are used pretextually to serve other ends. This Essay examines the problem of such co-option of privacy using a case study of lawsuits in which defendants seek to use the EU’s General Data Protection Regulation (“GDPR”) to frustrate ordinary civil discovery. In a series of cases, European civil defendants have argued that the GDPR requires them to redact all names from otherwise valid discovery requests for relevant evidence produced under a protective order, thereby turning the GDPR from a rule designed to protect the fundamental data protection rights of European Union (EU) citizens into a corporate litigation tool to frustrate and delay the production of evidence of alleged wrongdoing.

This Essay uses the example of pretextual GDPR use to frustrate civil discovery to make three contributions to the privacy literature. First, it identifies the practice of defendants attempting strategically to co-opt the GDPR to serve their own purposes. Second, it offers an explanation of precisely why and how this practice represents not merely an incorrect reading of the GDPR, but more broadly, a significant departure from its purposes—to safeguard the fundamental right of data protection secured by European constitutional and regulatory law. Third, it places the problem of privacy pretexts and the GDPR in the broader context of the co-option of privacy rules more generally, offers a framework for thinking about such efforts, and argues that this problem is only likely to deepen as privacy and data protection rules expand through the ongoing processes of reform.

Botero Arcila & Groza on the EU Data Act

Beatriz Botero Arcila (Sciences Po Law; Harvard Berkman Klein) and Teodora Groza (Sciences Po Law) have posted “Comments to the Data Act from the Law and Technology Group of Sciences Po Law School” on SSRN. Here is the abstract:

These are comments submitted by members of the Law and Technology Group of Sciences Po Law School to the Proposal for a Regulation of the European Parliament and the Council on harmonised rules on fair access to and use of data (“Data Act”), which was open for consultation and feedback from stakeholders from 14 March to 13 May 2022.

We welcome the Commission’s initiative and share the general concern and view that data concentration and barriers to data sharing contribute to the concentration of the digital economy in Europe. Similarly, based on our own research, we share the Commission’s diagnosis that legal and technical barriers prevent different actors from entering into voluntary data-sharing agreements and transactions.

In general, we believe the Data Act is a good initiative that will ease some of the barriers in the European market and facilitate the creation of value from data by different stakeholders, not only those who produce it. In this document, however, we focus on five key clarifications that should be taken into account to further achieve this goal: (1) relieving the user of the burden of the “data-sharing” mechanism, as this mechanism may ask users to act beyond their rational capabilities; (2) the definition of the market as the one for related services fails to unlock the competitive potential of data sharing and might increase concentration in the primary markets for IoT devices; (3) service providers need to nudge users into sharing their data; (4) the difficulty of working with the personal/non-personal data binary suggested by the Act; and (5) the obligation to make data available to public sector bodies sets a bar that may be too hard to meet and may hamper the usefulness of this provision.

Montagnani & Verstraete on What Makes Data Personal

Maria Lillà Montagnani (Bocconi University – Department of Law) and Mark Verstraete (UCLA School of Law) have posted “What Makes Data Personal?” (UC Davis Law Review, Vol. 56, No. 3, Forthcoming 2023) on SSRN. Here is the abstract:

Personal data is an essential concept for information privacy law. Privacy’s boundaries are set by personal data: for a privacy violation to occur, personal data must be involved. And an individual’s right to control information extends only to personal data. However, current theorizing about personal data is woefully incomplete. In light of this incompleteness, this Article offers a new conceptual approach to personal data. To start, this Article argues that personal data is simply a legal construct that describes the set of information or circumstances where an individual should be able to exercise control over a piece of information.

After displacing the mythology about the naturalness of personal data, this Article fashions a new theory of personal data that more adequately tracks when a person should be able to control specific information. Current approaches to personal data rightly examine the relationship between a person and information; however, they misunderstand what relationship is necessary for legitimate control interests. Against the conventional view, this Article suggests that how the information is used is an indispensable part of the analysis of the relationship between a person and data that determines whether the data should be considered personal. In doing so, it employs the philosophical concept of separability as a method for making determinations about which uses of information are connected to a person and, therefore, should trigger individual privacy protections and which are not.

This framework offers a superior foundation to extant theories for capturing the existence and scope of individual interests in data. By doing so, it provides an indispensable contribution for crafting an ideal regime of information governance. Separability enables privacy and data protection laws to better identify when a person’s interests are at stake. And further, separability offers a resilient normative foundation for personal data that grounds interests of control in a philosophical foundation of autonomy and dignity values—which are incorrectly calibrated in existing theories of personal data. Finally, this Article’s reimagination of personal data will allow privacy and data protection laws to more effectively combat modern privacy harms such as manipulation and inferences.

Jarovsky on Transparency by Design: Reducing Informational Vulnerabilities Through UX Design

Luiza Jarovsky (Tel Aviv University, Buchmann Faculty of Law) has posted “Transparency by Design: Reducing Informational Vulnerabilities Through UX Design” on SSRN. Here is the abstract:

Can transparency help us solve the challenges posed by dark patterns and other unfair practices online? Despite the many weaknesses of transparency obligations in the data protection arena, I suggest that a Transparency by Design (TbD) approach can assist us in better achieving data protection goals, especially by empowering data subjects with accessible information, facilitating the exercise of data protection rights, and helping to reduce informational vulnerabilities. TbD proposes that compliance with transparency rules should happen at all levels of design and user interaction, instead of being restricted to Privacy Policies (PPs) or similar legal statements. In a previous work, I discussed how manipulative design can exploit behavioral biases and generate unfairness; here, I show how failing to support data subjects with accessible information, adequate design, and meaningful choices can similarly create an unfair online environment.

This work highlights the shortcomings of transparency rules in the context of the General Data Protection Regulation (GDPR). I demonstrate that, in practice, GDPR obligations do not result in effective transparency for data subjects, increasing unfairness in the data protection context. Consequently, data subjects are most of the time unaware of how, why, and when their data is collected, are uninformed about the risks or broader consequences of their personal data-fueled online activities, do not know their rights regarding their data, and do not have access to meaningful choices.

To address these shortcomings, I propose TbD, so that we, the data subjects, are not only effectively informed of the collection and use of our data, but can also exercise our rights as data subjects, make meaningful privacy choices, and mitigate our informational vulnerabilities.

The main goal of TbD is that data subjects will be served with information that is meaningful and actionable, instead of a standard block of text that acts as a liability document for the controller’s legal department – as currently happens with PPs. Design, manifested through User Experience (UX), is a central tool in this framework, as it should embed TbD’s values and premises and empower data subjects throughout their interaction with the controller.

Lu on Data Privacy, Human Rights, and Algorithmic Opacity

Sylvia Lu (UC Berkeley School of Law) has posted “Data Privacy, Human Rights, and Algorithmic Opacity” (California Law Review, Vol. 110, 2022) on SSRN. Here is the abstract:

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.

The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.