Jarovsky on Transparency by Design: Reducing Informational Vulnerabilities Through UX Design

Luiza Jarovsky (Tel Aviv University, Buchmann Faculty of Law) has posted “Transparency by Design: Reducing Informational Vulnerabilities Through UX Design” on SSRN. Here is the abstract:

Can transparency help us solve the challenges posed by dark patterns and other unfair practices online? Despite the many weaknesses of transparency obligations in the data protection arena, I suggest that a Transparency by Design (TbD) approach can assist us in better achieving data protection goals, especially by empowering data subjects with accessible information, facilitating the exercise of data protection rights, and helping to reduce informational vulnerabilities. TbD proposes that compliance with transparency rules should happen at all levels of design and user interaction, instead of being restricted to Privacy Policies (PPs) or similar legal statements. In previous work, I discussed how manipulative design can exploit behavioral biases and generate unfairness; here, I show how failing to support data subjects with accessible information, adequate design, and meaningful choices can similarly create an unfair online environment.

This work highlights the shortcomings of transparency rules in the context of the General Data Protection Regulation (GDPR). I demonstrate that, in practice, GDPR obligations do not result in effective transparency for data subjects, increasing unfairness in the data protection context. Consequently, data subjects are usually unaware of how, why, and when their data is collected, are uninformed about the risks or broader consequences of their personal data-fueled online activities, do not know their rights regarding their data, and do not have access to meaningful choices.

To address these shortcomings, I propose TbD, so that we – the data subjects – are not only effectively informed about the collection and use of our data, but can also exercise our rights as data subjects, make meaningful privacy choices, and mitigate our informational vulnerabilities.

The main goal of TbD is that data subjects be served information that is meaningful and actionable, instead of a standard block of text that acts as a liability document for the controller’s legal department – as currently happens with PPs. Design, manifested through User Experience (UX), is a central tool in this framework, as it should embed TbD’s values and premises and empower data subjects throughout their interaction with the controller.

Lu on Data Privacy, Human Rights, and Algorithmic Opacity

Sylvia Lu (UC Berkeley School of Law) has posted “Data Privacy, Human Rights, and Algorithmic Opacity” (California Law Review, Vol. 110, 2022) on SSRN. Here is the abstract:

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.

The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.

Kuner on The Path to Recognition of Data Protection in India: The Role of the GDPR and International Standards

Christopher Kuner (Vrije Universiteit Brussel – LSTS; Maastricht University – Faculty of Law; Centre for European Legal Studies) has posted “The Path to Recognition of Data Protection in India: The Role of the GDPR and International Standards” (National Law Review of India, vol. 33 no. 1 (2021)) on SSRN. Here is the abstract:

By providing rules of the road for data processing, data protection legislation has become a key enabler of the information society. The European Union’s General Data Protection Regulation (GDPR) has been highly influential around the world, and the recent Schrems II judgment of the Court of Justice of the EU, which strengthened restrictions on international data transfers under EU law, has important implications for India as it prepares to adopt data protection legislation. While the Puttaswamy judgment that recognised privacy as a fundamental right represents a great stride forward for privacy protection in India, legislation is necessary to establish the right to data protection in the Indian legal system. The proposed Personal Data Protection Bill does not provide a sufficiently high standard of data protection, particularly in light of surveillance initiatives and legal mandates to collect data under Indian law. India should view the strengthening of its legal framework for data protection not just as a way to receive an EU adequacy decision, but also as having broad societal benefits. In adopting data protection legislation India should align itself both with the GDPR and also more broadly with data protection standards of important international bodies, such as those of the Council of Europe and the OECD.

Aguiar et al. on Facebook Shadow Profiles

Luis Aguiar (University of Zurich – Department of Business Administration) et al. have posted “Facebook Shadow Profiles” on SSRN. Here is the abstract:

Data is often at the core of digital products and services, especially when related to online advertising. This has made data protection and privacy a major policy concern. When surfing the web, consumers leave digital traces that can be used to build user profiles and infer preferences. We quantify the extent to which Facebook can track web behavior outside of its own platform. The network of engagement buttons, placed on third-party websites, lets Facebook follow users as they browse the web. Tracking users outside its core platform enables Facebook to build shadow profiles. For a representative sample of US internet users, 52 percent of websites visited, accounting for 40 percent of browsing time, employ Facebook’s tracking technology. Small differences between Facebook users and non-users are largely explained by differing user activity. The extent of shadow profiling Facebook may engage in is similar on privacy-sensitive domains and across user demographics, documenting the possibility of indiscriminate tracking.

Congiu, Sabatino & Sapi on The Impact of Privacy Regulation on Web Traffic: Evidence From the GDPR

Raffaele Congiu, Lorien Sabatino, and Geza Sapi (European Commission; University of Düsseldorf) have posted “The Impact of Privacy Regulation on Web Traffic: Evidence From the GDPR” on SSRN. Here is the abstract:

We use traffic data from around 5,000 web domains in Europe and the United States to investigate the effect of the European Union’s General Data Protection Regulation (GDPR) on website visits and user behaviour. We document an overall traffic reduction of approximately 15% in the long run and find a measurable reduction of engagement with websites. Traffic from direct visits, organic search, email marketing, social media links, display ads, and referrals dropped significantly, but paid search traffic – mainly Google search ads – was barely affected. We observe an inverted U-shaped relationship between website size and change in visits due to privacy regulation: the smallest and largest websites lost visitors, while medium-sized ones were less affected. Our results are consistent with the view that users care about privacy and may defer visits in response to website data handling policies. Privacy regulation can impact market structure and may increase dependence on large advertising service providers. Enforcement matters as well: the effects were amplified considerably in the long run, following the first significant fine issued eight months after the entry into force of the GDPR.

Recommended.

Turillazzi et al. on The Digital Services Act: An Analysis of Its Ethical, Legal, and Social Implications

Aina Turillazzi (Tilburg University) et al. have posted “The Digital Services Act: An Analysis of Its Ethical, Legal, and Social Implications” on SSRN. Here is the abstract:

In December 2020, the European Commission issued the Digital Services Act (DSA), a legislative proposal for a single market of digital services, focusing on fundamental rights, data privacy, and the protection of stakeholders. The DSA seeks to promote European digital sovereignty, among other goals. This article reviews the literature and related documents on the DSA to map and evaluate its ethical, legal, and social implications. It examines four macro-areas of interest regarding the digital services offered by online platforms. The analysis concludes that, so far, the DSA has led to contrasting interpretations, ranging from stakeholders who expect it to be more demanding of gatekeepers to those who object that the proposed obligations are unjustified. The article contributes to this debate by arguing that a more robust framework, for the benefit of all stakeholders, should be defined.

Richards on The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows

Neil M. Richards (Washington University School of Law; Yale ISP; Stanford Center for Internet and Society) has posted “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows: Testimony of Professor Neil Richards before the United States Senate” on SSRN. Here is the abstract:

This is the prepared testimony and statement for the record, including responses to questions for the record, of Professor Neil Richards before the United States Senate Commerce Committee on December 9, 2020. The testimony explains that while Congress has failed to pass a comprehensive privacy bill despite many opportunities, the judgment of the European Court of Justice in Data Protection Commissioner v. Facebook (commonly known as “Schrems 2”) represents a real opportunity for it to do just that in the near future. The testimony argues first that Congress should not just pass a comprehensive privacy bill, but one that gets it right: one that provides clear but substantive rules for companies and adequate protections and effective remedies for consumers. A law that meets these features will not just protect consumers – it will be good for business as well, by helping enable transatlantic data flows and building the consumer trust that is essential for long-term sustainable economic prosperity for all. Second, it explains what the judgment in Schrems 2 requires, with particular emphasis on factors within the jurisdiction of the Senate Commerce Committee. Third, it offers some ways in which the Committee’s work can solve some of the challenges for data flows and privacy law that the Schrems 2 judgment raises or illustrates. Fourth, it argues that the Committee should pass a strong privacy law (including a duty of loyalty) that builds the consumer trust that is so essential to sustainable and profitable commerce.

Hu on Individuals as Gatekeepers against Data Misuse

Ying Hu (National University of Singapore – Faculty of Law) has posted “Individuals as Gatekeepers against Data Misuse” (28 Michigan Technology Law Review 115 (2021)) on SSRN. Here is the abstract:

This article makes a case for treating individual data subjects as gatekeepers against misuse of personal data. Imposing gatekeeper responsibility on individuals is most useful where (a) the primary wrongdoers engage in data misuse intentionally or recklessly; (b) misuse of personal data is likely to lead to serious harm; and (c) one or more individuals are able to detect and prevent data misuse at a reasonable cost.

As gatekeepers, individuals should have a legal duty to take reasonable measures to prevent data misuse where they are aware of facts indicating that the person seeking personal data from them is highly likely to misuse it or to facilitate its misuse. Recognizing a legal duty to prevent data misuse provides a framework for determining the boundaries of appropriate behavior when dealing with personal data that people have legally acquired. It does not, however, abrogate the need to impose gatekeeping obligations on big technology companies.

In addition, individuals should also owe a social duty to protect the personal data in their possession. Whether individuals have sufficient incentive to protect their personal data in a particular situation depends not only on the cost of the relevant security measures, but also on their expectation of the security decisions made by others who also possess that data. Even a privacy-conscious individual would have little incentive to invest in privacy-protective measures if he believes that his personal data is possessed by a sufficiently large number of persons who do not invest in such measures. On the flip side, an individual’s decision to protect his personal data generates positive externalities—it incentivizes others to invest in security measures. As such, promoting the norm of data security is likely to lead to a self-reinforcing virtuous cycle that helps improve the level of data security in a given community.

Creemers on China’s Emerging Data Protection Framework

Rogier Creemers (Leiden University) has posted “China’s Emerging Data Protection Framework” on SSRN. Here is the abstract:

Over the past five years, the People’s Republic of China has accelerated efforts to establish a legal architecture for data protection. With the promulgation of the Personal Information Protection Law (PIPL) and the Data Security Law (DSL) in the summer of 2021, the first phase of these efforts has been concluded. These laws will have a significant impact on data flows within China, but they also merit foreign attention. They provide a new approach to data protection to be subjected to comparative analysis, and may influence the development of data protection legislation in other states, particularly those with close digital connections to China. Doing so requires a greater understanding of how this legislation is shaped by the Chinese political and economic context.

Drawing on a thorough review of government documents, supplemented by Chinese-language academic sources, this article reviews the evolution of the two pillars of China’s data protection architecture, from the early stage of fragmentation via the promulgation of the Cybersecurity Law in 2016, up to the present day. It finds that the PIPL and its attendant regulations serve primarily to regulate the relationship between large technology companies and consumers, as well as to prevent cybercrime. The PIPL does not create meaningful constraints on data collection and use by the state. Even so, it bears a clear family resemblance to personal data protection regimes elsewhere in the world. In contrast, the DSL is a considerable innovation, attempting to prevent harm to national security and the public interest inflicted through data-enabled means. While implementing structures for this Law remain under construction, it will likely herald a thorough reorganization of the way data is collected, stored, and managed by all kinds of Chinese actors.

Lubin on The Prohibition on Extraterritorial Enforcement Jurisdiction in the Datasphere

Asaf Lubin (Indiana University Maurer School of Law; Berkman Klein Center for Internet & Society; Yale University – Information Society Project; Federmann Cybersecurity Center, Hebrew University of Jerusalem Faculty of Law) has posted “The Prohibition on Extraterritorial Enforcement Jurisdiction in the Datasphere” (Handbook on Extraterritoriality in International Law (Austen L. Parrish and Cedric Ryngaert eds., forthcoming, 2022)) on SSRN. Here is the abstract:

The omnipresent and ever-fluid nature of the datasphere complicates the work of our cyber constables. Our conventional understanding of a sovereign’s right to exclude others—the prohibition on extraterritorial enforcement jurisdiction that was reaffirmed in the famous Lotus case—may start to feel somewhat anachronistic in the face of new emerging technologies for remote searches and seizures. Modern law enforcement agencies are further bolstered by a data ecosystem which centers around powerful corporate intermediaries who may, on occasion, be coopted or coerced to collaborate in incidents of extraterritorial enforcement overreach.

Consider, for example, the following non-exhaustive list of cyber enforcement activities. Which of these techniques might you deem tolerable when employed against a target abroad without the consent or knowledge of the foreign state? Which of these might you consider to be crossing a threshold, and what factual and legal factors might influence your determination?

(1) Data scraping from social media platforms, other websites, and open-access databases located on servers abroad to import information.
(2) Subverting the command-and-control server of an anonymized botnet operating from one of the corners of the “dark web.”
(3) Electronically tracing and restoring cryptocurrency payments that were paid to a foreign criminal cyber gang involved in a crippling ransomware attack.
(4) Compelling a domestically registered company to release certain data concerning a national involved in a domestic crime, where the data is stored abroad.

In this chapter I explore each of these four scenarios. Each scenario ties to a different aspect of the datasphere which frays at the edges of traditional doctrine. These four aspects are: (1) consent, (2) anonymization, (3) piracy, and (4) data un-territoriality. For each of these aspects I try to demonstrate how jurisdictional rules may evolve, as a matter of lex ferenda, to better balance territorial integrity and cyber stability. My analysis thus attempts to provide a preliminary taxonomy of certain categories of cyber policing activity that could serve as a roadmap for future rule-prescribers and rule-appliers. Given the rise in cybercrime in recent years the paper ultimately challenges the normative validity and factual sustainability of the current doctrinal tradeoffs between external sovereignty and cyber stability.