Abraha on The Role of Article 88 GDPR in Upholding Privacy in the Workplace

Halefom H. Abraha (University of Oxford) has posted “A pragmatic compromise? The role of Article 88 GDPR in upholding privacy in the workplace” on SSRN. Here is the abstract:

The distinct challenges of data processing at work have led to long-standing calls for sector-specific regulation. This leaves the European legislature with a dilemma. While the distinct features of employee data processing give rise to novel issues that cannot adequately be addressed by an omnibus data protection regime, a combination of legal, political, and constitutional factors has hindered efforts to adopt harmonised employment-specific legislation at the EU level. The ‘opening clause’ in Art. 88 GDPR aims to square this circle: to ensure adequate and consistent protection of employees while also promoting regulatory diversity, respecting national peculiarities, and protecting Member State autonomy. This paper examines whether the opening clause has delivered on its promises. It argues that while the compromise has delivered on some of its promises by promoting diverse and innovative regulatory approaches, it also runs counter to the fundamental objectives of the GDPR itself by creating further fragmentation, legal uncertainty, and inconsistent implementation, interpretation, and enforcement of data protection rules.

Botero Arcila on The Case for Local Data Sharing Ordinances

Beatriz Botero Arcila (Sciences Po Law; Harvard Berkman Klein) has posted “The Case for Local Data Sharing Ordinances” (William & Mary Bill of Rights Journal) on SSRN. Here is the abstract:

Cities in the US have started to enact data-sharing rules and programs to access some of the data that technology companies operating under their jurisdiction – like short-term rental or ride-hailing companies – collect. This information allows cities to adapt to the challenges and benefits of the digital information economy. It allows them to understand what these companies’ impact is on congestion, the housing market, the local job market, and even the use of public spaces. It also empowers them to act accordingly by, for example, setting vehicle caps or mandating a tailored minimum pay for gig workers. These companies, however, sometimes argue that sharing this information infringes both their users’ privacy rights and their own privacy rights, because this information is theirs; it’s part of their business records. The question is thus what those rights are, and whether it should and could be possible for local governments to access that information to advance equity and sustainability, without harming the legitimate privacy interests of both individuals and companies. This Article argues that within current Fourth Amendment doctrine and privacy law there is space for data-sharing programs. Privacy law, however, is being mobilized to alter the distribution of power and welfare between local governments, companies, and citizens within current digital information capitalism, to extend those rights beyond their fair share and preempt permissible data-sharing requests. The Article warns that if the companies succeed in their challenges, privacy law will have helped shield corporate power from regulatory oversight, while still leaving individuals largely unprotected and further subordinating local governments to corporate interests.

Richards on The GDPR as Privacy Pretext and the Problem of Co-Opting Privacy

Neil M. Richards (Washington U Law) has posted “The GDPR as Privacy Pretext and the Problem of Co-Opting Privacy” (73 Hastings Law Journal 1511 (2022)) on SSRN. Here is the abstract:

Privacy and data protection law’s expansion brings with it opportunities for mischief as privacy rules are used pretextually to serve other ends. This Essay examines the problem of such co-option of privacy using a case study of lawsuits in which defendants seek to use the EU’s General Data Protection Regulation (“GDPR”) to frustrate ordinary civil discovery. In a series of cases, European civil defendants have argued that the GDPR requires them to redact all names from otherwise valid discovery requests for relevant evidence produced under a protective order, thereby turning the GDPR from a rule designed to protect the fundamental data protection rights of European Union (EU) citizens into a corporate litigation tool to frustrate and delay the production of evidence of alleged wrongdoing.

This Essay uses the example of pretextual GDPR use to frustrate civil discovery to make three contributions to the privacy literature. First, it identifies the practice of defendants attempting strategically to co-opt the GDPR to serve their own purposes. Second, it offers an explanation of precisely why and how this practice represents not merely an incorrect reading of the GDPR, but more broadly, a significant departure from its purposes—to safeguard the fundamental right of data protection secured by European constitutional and regulatory law. Third, it places the problem of privacy pretexts and the GDPR in the broader context of the co-option of privacy rules more generally, offers a framework for thinking about such efforts, and argues that this problem is only likely to deepen as privacy and data protection rules expand through the ongoing processes of reform.

Botero Arcila & Groza on the EU Data Act

Beatriz Botero Arcila (Sciences Po Law; Harvard Berkman Klein) and Teodora Groza (Sciences Po Law) have posted “Comments to the Data Act from the Law and Technology Group of Sciences Po Law School” on SSRN. Here is the abstract:

These are comments submitted by members of the Law and Technology Group of Sciences Po Law School to the Proposal for a Regulation of the European Parliament and of the Council on harmonised rules on fair access to and use of data (“Data Act”), which was open for consultation and feedback from stakeholders from 14 March to 13 May 2022.

We welcome the Commission’s initiative and share the general concern and idea that data concentration and barriers to data sharing contribute to the concentration of the digital economy in Europe. Similarly, based on our own research, we share the Commission’s diagnosis that legal and technical barriers prevent different actors from entering into voluntary data-sharing agreements and transactions.

In general, we believe the Data Act is a good initiative that will ease some of the barriers that exist in the European market and facilitate the creation of value from data by different stakeholders, not only those who produce it. In this document, however, we focus on five key clarifications that should be taken into account to further achieve this goal: (1) relieving the user of the burden of the “data-sharing” mechanism, as this mechanism may be asking users to act beyond their rational capabilities; (2) the definition of the market as the one for related services fails to unlock the competitive potential of data sharing and might increase concentration in the primary markets for IoT devices; (3) service providers need to nudge users into sharing their data; (4) the difficulty of working with the personal – non-personal data binary suggested by the Act; and (5) the obligation to make data available to public sector bodies sets a bar that may be too hard to meet and may hamper the usefulness of this provision.

Montagnani & Verstraete on What Makes Data Personal

Maria Lillà Montagnani (Bocconi University – Department of Law) and Mark Verstraete (UCLA School of Law) have posted “What Makes Data Personal?” (UC Davis Law Review, Vol. 56, No. 3, Forthcoming 2023) on SSRN. Here is the abstract:

Personal data is an essential concept for information privacy law. Privacy’s boundaries are set by personal data: for a privacy violation to occur, personal data must be involved. And an individual’s right to control information extends only to personal data. However, current theorizing about personal data is woefully incomplete. In light of this incompleteness, this Article offers a new conceptual approach to personal data. To start, this Article argues that personal data is simply a legal construct that describes the set of information or circumstances where an individual should be able to exercise control over a piece of information.

After displacing the mythology about the naturalness of personal data, this Article fashions a new theory of personal data that more adequately tracks when a person should be able to control specific information. Current approaches to personal data rightly examine the relationship between a person and information; however, they misunderstand what relationship is necessary for legitimate control interests. Against the conventional view, this Article suggests that how the information is used is an indispensable part of the analysis of the relationship between a person and data that determines whether the data should be considered personal. In doing so, it employs the philosophical concept of separability as a method for making determinations about which uses of information are connected to a person and, therefore, should trigger individual privacy protections and which are not.

This framework offers a superior foundation to extant theories for capturing the existence and scope of individual interests in data. By doing so, it provides an indispensable contribution for crafting an ideal regime of information governance. Separability enables privacy and data protection laws to better identify when a person’s interests are at stake. And further, separability offers a resilient normative foundation for personal data that grounds interests of control in a philosophical foundation of autonomy and dignity values—which are incorrectly calibrated in existing theories of personal data. Finally, this Article’s reimagination of personal data will allow privacy and data protection laws to more effectively combat modern privacy harms such as manipulation and inferences.

Jarovsky on Transparency by Design: Reducing Informational Vulnerabilities Through UX Design

Luiza Jarovsky (Tel Aviv University, Buchmann Faculty of Law) has posted “Transparency by Design: Reducing Informational Vulnerabilities Through UX Design” on SSRN. Here is the abstract:

Can transparency help us solve the challenges posed by dark patterns and other unfair practices online? Despite the many weaknesses of transparency obligations in the data protection arena, I suggest that a Transparency by Design (TbD) approach can assist us in better achieving data protection goals, especially by empowering data subjects with accessible information, facilitating the exercise of data protection rights, and helping to reduce informational vulnerabilities. TbD proposes that compliance with transparency rules should happen at all levels of design and user interaction, rather than being restricted to Privacy Policies (PPs) or similar legal statements. In previous work, I discussed how manipulative design can exploit behavioral biases and generate unfairness; here, I show how failing to support data subjects with accessible information, adequate design, and meaningful choices can similarly create an unfair online environment.

This work highlights the shortcomings of transparency rules in the context of the General Data Protection Regulation (GDPR). I demonstrate that, in practice, GDPR obligations do not result in effective transparency for data subjects, increasing unfairness in the data protection context. Consequently, data subjects are, most of the time, unaware of how, why, and when their data is collected; are uninformed about the risks or broader consequences of their personal data-fueled online activities; do not know their rights regarding their data; and do not have access to meaningful choices.

To address these shortcomings, I propose TbD, so that we – the data subjects – are not only effectively informed of the collection and use of our data, but can also exercise our rights as data subjects, make meaningful privacy choices, and mitigate our informational vulnerabilities.

The main goal of TbD is that data subjects will be served with information that is meaningful and actionable, instead of a standard block of text that acts as a liability document for the controller’s legal department – as currently happens with PPs. Design, manifested through User Experience (UX), is a central tool in this framework, as it should embed TbD’s values and premises and empower data subjects throughout their interaction with the controller.

Lu on Data Privacy, Human Rights, and Algorithmic Opacity

Sylvia Lu (UC Berkeley School of Law) has posted “Data Privacy, Human Rights, and Algorithmic Opacity” (California Law Review, Vol. 110, 2022) on SSRN. Here is the abstract:

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.

The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.

Kuner on The Path to Recognition of Data Protection in India: The Role of the GDPR and International Standards

Christopher Kuner (Vrije Universiteit Brussel – LSTS; Maastricht University – Faculty of Law; Centre for European Legal Studies) has posted “The Path to Recognition of Data Protection in India: The Role of the GDPR and International Standards” (National Law Review of India, vol. 33 no. 1 (2021)) on SSRN. Here is the abstract:

By providing rules of the road for data processing, data protection legislation has become a key enabler of the information society. The European Union’s General Data Protection Regulation (GDPR) has been highly influential around the world, and the recent Schrems II judgment of the Court of Justice of the EU, which strengthened restrictions on international data transfers under EU law, has important implications for India as it prepares to adopt data protection legislation. While the Puttaswamy judgment that recognised privacy as a fundamental right represents a great stride forward for privacy protection in India, legislation is necessary to establish the right to data protection in the Indian legal system. The proposed Personal Data Protection Bill does not provide a sufficiently high standard of data protection, particularly in light of surveillance initiatives and legal mandates to collect data under Indian law. India should view the strengthening of its legal framework for data protection not just as a way to receive an EU adequacy decision, but also as having broad societal benefits. In adopting data protection legislation India should align itself both with the GDPR and also more broadly with data protection standards of important international bodies, such as those of the Council of Europe and the OECD.

Aguiar et al. on Facebook Shadow Profiles

Luis Aguiar (University of Zurich – Department of Business Administration) et al. have posted “Facebook Shadow Profiles” on SSRN. Here is the abstract:

Data is often at the core of digital products and services, especially when related to online advertising. This has made data protection and privacy a major policy concern. When surfing the web, consumers leave digital traces that can be used to build user profiles and infer preferences. We quantify the extent to which Facebook can track web behavior outside of its own platform. The network of engagement buttons, placed on third-party websites, lets Facebook follow users as they browse the web. Tracking users outside its core platform enables Facebook to build shadow profiles. For a representative sample of US internet users, 52 percent of websites visited, accounting for 40 percent of browsing time, employ Facebook’s tracking technology. Small differences between Facebook users and non-users are largely explained by differing user activity. The extent of shadow profiling Facebook may engage in is similar on privacy-sensitive domains and across user demographics, documenting the possibility of indiscriminate tracking.

Congiu, Sabatino & Sapi on The Impact of Privacy Regulation on Web Traffic: Evidence From the GDPR

Raffaele Congiu, Lorien Sabatino, and Geza Sapi (European Commission; University of Dusseldorf) have posted “The Impact of Privacy Regulation on Web Traffic: Evidence From the GDPR” on SSRN. Here is the abstract:

We use traffic data from around 5,000 web domains in Europe and the United States to investigate the effect of the European Union’s General Data Protection Regulation (GDPR) on website visits and user behaviour. We document an overall traffic reduction of approximately 15% in the long run and find a measurable reduction of engagement with websites. Traffic from direct visits, organic search, email marketing, social media links, display ads, and referrals dropped significantly, but paid search traffic – mainly Google search ads – was barely affected. We observe an inverted U-shaped relationship between website size and change in visits due to privacy regulation: the smallest and largest websites lost visitors, while medium-sized ones were less affected. Our results are consistent with the view that users care about privacy and may defer visits in response to website data handling policies. Privacy regulation can impact market structure and may increase dependence on large advertising service providers. Enforcement matters as well: the effects were amplified considerably in the long run, following the first significant fine issued eight months after the entry into force of the GDPR.

Recommended.