Botero Arcila on The Case for Local Data Sharing Ordinances

Beatriz Botero Arcila (Sciences Po Law; Harvard Berkman Klein) has posted “The Case for Local Data Sharing Ordinances” (William & Mary Bill of Rights Journal) on SSRN. Here is the abstract:

Cities in the US have started to enact data-sharing rules and programs to access some of the data that technology companies operating under their jurisdiction – like short-term rental or ride-hailing companies – collect. This information allows cities to adapt to the challenges and benefits of the digital information economy. It allows them to understand these companies’ impact on congestion, the housing market, the local job market, and even the use of public spaces. It also empowers them to act accordingly by, for example, setting vehicle caps or mandating a tailored minimum pay for gig workers. These companies, however, sometimes argue that sharing this information infringes their users’ privacy rights and their own privacy rights, because this information is theirs; it is part of their business records. The question is thus what those rights are, and whether it should and could be possible for local governments to access that information to advance equity and sustainability, without harming the legitimate privacy interests of both individuals and companies. This Article argues that within current Fourth Amendment doctrine and privacy law there is space for data-sharing programs. Privacy law, however, is being mobilized to alter the distribution of power and welfare between local governments, companies, and citizens within current digital information capitalism, to extend those rights beyond their fair share and preempt permissible data-sharing requests. The Article warns that if the companies succeed in their challenges, privacy law will have helped shield corporate power from regulatory oversight while leaving individuals largely unprotected and further subordinating local governments to corporate interests.

Richards on The GDPR as Privacy Pretext and the Problem of Co-Opting Privacy

Neil M. Richards (Washington U Law) has posted “The GDPR as Privacy Pretext and the Problem of Co-Opting Privacy” (73 Hastings Law Journal 1511 (2022)) on SSRN. Here is the abstract:

Privacy and data protection law’s expansion brings with it opportunities for mischief as privacy rules are used pretextually to serve other ends. This Essay examines the problem of such co-option of privacy using a case study of lawsuits in which defendants seek to use the EU’s General Data Protection Regulation (“GDPR”) to frustrate ordinary civil discovery. In a series of cases, European civil defendants have argued that the GDPR requires them to redact all names from otherwise valid discovery requests for relevant evidence produced under a protective order, thereby turning the GDPR from a rule designed to protect the fundamental data protection rights of European Union (EU) citizens into a corporate litigation tool to frustrate and delay the production of evidence of alleged wrongdoing.

This Essay uses the example of pretextual GDPR use to frustrate civil discovery to make three contributions to the privacy literature. First, it identifies the practice of defendants attempting strategically to co-opt the GDPR to serve their own purposes. Second, it offers an explanation of precisely why and how this practice represents not merely an incorrect reading of the GDPR, but more broadly, a significant departure from its purposes—to safeguard the fundamental right of data protection secured by European constitutional and regulatory law. Third, it places the problem of privacy pretexts and the GDPR in the broader context of the co-option of privacy rules more generally, offers a framework for thinking about such efforts, and argues that this problem is only likely to deepen as privacy and data protection rules expand through the ongoing processes of reform.

Ashley on Teaching Law and Digital Age Legal Practice with an AI and Law Seminar

Kevin Ashley (U Pitt Law) has posted “Teaching Law and Digital Age Legal Practice with an AI and Law Seminar: Justice, Lawyering and Legal Education in the Digital Age” (Chicago-Kent Law Review, Vol. 88, p. 783, 2013) on SSRN. Here is the abstract:

A seminar on Artificial Intelligence (“AI”) and Law can teach law students lessons about legal reasoning and legal practice in the digital age. AI and Law is a subfield of AI/computer science research that focuses on designing computer programs—computational models—that perform legal reasoning. These computational models are used in building tools to assist in legal practice and pedagogy and in studying legal reasoning in order to contribute to cognitive science and jurisprudence. Today, subject to a number of qualifications, computer programs can reason with legal rules, apply legal precedents, and even argue like a legal advocate.

This article provides a guide and examples to prepare law students for the digital age by means of an AI and Law seminar. After introducing the science of Artificial Intelligence and its application to law, the paper presents the syllabus for an up-to-date AI and Law seminar. With the syllabus as a framework, the paper showcases some characteristic AI and Law programs and illustrates the pedagogically important lessons that AI and Law has learned about reasoning with legal rules and cases, about legal argument, and about the digital document technologies that are becoming available, and even the norm, in legal practice.

Davis on Robolawyers and Robojudges

Joshua P. Davis (University of San Francisco – School of Law) has posted “Of Robolawyers and Robojudges” (Hastings Law Journal, 2022) on SSRN. Here is the abstract:

Artificial intelligence (AI) may someday play various roles in litigation, particularly complex litigation. It may be able to provide strategic advice, advocate through legal briefs and in court, help judges assess class action settlements, and propose or impose compromises. It may even write judicial opinions and decide cases. For it to perform those litigation tasks, however, would require two breakthroughs: one involving a form of instrumental reasoning that we might loosely call common sense or more precisely call abduction and the other involving a form of reasoning that we will label purposive, that is, the formation of ends or objectives. This Article predicts that AI will likely make strides at abductive reasoning but not at purposive reasoning. If those predictions prove accurate, it contends, AI will be able to perform sophisticated tasks usually reserved for lawyers, but it should not be trusted to perform similar tasks reserved for judges. In short, we might welcome a role for robolawyers but resist the rise of robojudges.

Grimmelmann & Windawi on Blockchains as Infrastructure and Semicommons

James Grimmelmann (Cornell Law School; Cornell Tech) & A. Jason Windawi (Princeton University – Department of Sociology) have posted “Blockchains as Infrastructure and Semicommons” (William & Mary Law Review (2023, Forthcoming)) on SSRN. Here is the abstract:

Blockchains are not self-executing machines. They are resource systems, designed by people, maintained by people, and governed by people. Their technical protocols help to solve some difficult problems in shared resource management, but behind those protocols there are always communities of people struggling with familiar challenges in governing their provision and use of common infrastructure.

In this Essay, we describe blockchains as shared, distributed transactional ledgers using two frameworks from commons theory. Brett Frischmann’s theory of infrastructure provides an external view, showing how blockchains provide useful, generic infrastructure for recording transactions, and why that infrastructure is most naturally made available on common, non-discriminatory terms. Henry Smith’s theory of semicommons provides an internal view, showing how blockchains intricately combine private resources (such as physical hardware and on-chain assets) with common resources (such as the shared transactional ledger and the blockchain protocol itself). We then detail how blockchains struggle with many of the governance challenges that these frameworks predict, requiring blockchain communities to engage in extensive off-chain governance work to coordinate their uses and achieve consensus. Blockchains function as infrastructure and semicommons not in spite of the human element, but because of it.

Husovec & Roche Laguna on the Digital Services Act: A Short Primer

Martin Husovec (London School of Economics – Law School) and Irene Roche Laguna (European Commission) have posted “Digital Services Act: A Short Primer” (in Principles of the Digital Services Act (Oxford University Press, Forthcoming 2023)) on SSRN. Here is the abstract:

This article provides a short primer on the forthcoming Digital Services Act. The DSA is an EU Regulation aiming to assure fairness, trust, and safety in the digital environment. It preserves and upgrades the liability exemptions for online intermediaries that have existed in the European framework since 2000. It exempts digital infrastructure-layer services, such as internet access providers, and application-layer services, such as social networks and file-hosting services, from liability for third-party content. Simultaneously, the DSA imposes due diligence obligations concerning the design and operation of such services in order to ensure a safe, transparent, and predictable online ecosystem. These due diligence obligations aim to regulate the general design of services, content moderation practices, advertising, and transparency, including the sharing of information. The due diligence obligations focus mainly on process and design rather than on content itself, and usually correspond to the size and social relevance of the various services. Very large online platforms and very large online search engines are subject to the most extensive risk mitigation responsibilities, which are in turn subject to independent auditing.

Kaminski on Technological ‘Disruption’ of the Law’s Imagined Scene

Margot E. Kaminski (U Colorado Law; Yale ISP; U Colorado – Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Technological ‘Disruption’ of the Law’s Imagined Scene: Some Lessons from Lex Informatica” (Berkeley Technology Law Journal, Vol. 36, 2022) on SSRN. Here is the abstract:

In his 1998 article Lex Informatica, Joel Reidenberg observed that technology can be a distinct regulatory force in its own right and claimed that law would arise in response to human needs. Today, law and technology scholarship continues to ask: does technology ever disrupt the law? This Article articulates one particular kind of “legal disruption”: how technology (or really, the social use of technology) can alter the imagined setting around which policy conversations take place—what Jack Balkin and Reva Siegel call the “imagined regulatory scene.” Sociotechnical change can alter the imagined regulatory scene’s architecture, upsetting a policy balance and undermining a particular regulation or regime’s goals. That is, sociotechnical change sometimes disturbs the imagined paradigmatic scenario not by departing from it entirely but by constraining, enabling, or mediating the actors’ behavior that we want the law to constrain or protect. This Article identifies and traces this now common move in recent law and technology literature, drawing on Reidenberg’s influential and prescient work.

Hartzog & Richards on Legislating Data Loyalty

Woodrow Hartzog (Boston U Law; Stanford Center for Internet and Society) and Neil M. Richards (Washington U Law; Yale ISP; Stanford Center for Internet and Society) have posted “Legislating Data Loyalty” (97 Notre Dame Law Review Reflection 356 (2022)) on SSRN. Here is the abstract:

Lawmakers looking to embolden privacy law have begun to consider imposing duties of loyalty on organizations trusted with people’s data and online experiences. The idea behind loyalty is simple: organizations should not process data or design technologies that conflict with the best interests of trusting parties. But the logistics and implementation of data loyalty need to be developed if the concept is going to be capable of moving privacy law beyond its “notice and consent” roots to confront people’s vulnerabilities in their relationship with powerful data collectors.

In this short Essay, we propose a model for legislating data loyalty. Our model takes advantage of loyalty’s strengths—it is well-established in our law, it is flexible, and it can accommodate conflicting values. Our Essay also explains how data loyalty can embolden our existing data privacy rules, address emergent dangers, solve privacy’s problems around consent and harm, and establish an antibetrayal ethos as America’s privacy identity.

We propose that lawmakers use a two-step process to (1) articulate a primary, general duty of loyalty, then (2) articulate “subsidiary” duties that are more specific and sensitive to context. Subsidiary duties regarding collection, personalization, gatekeeping, persuasion, and mediation would target the most opportunistic contexts for self-dealing and result in flexible open-ended duties combined with highly specific rules. In this way, a duty of data loyalty is not just appealing in theory—it can be effectively implemented in practice just like the other duties of loyalty our law has recognized for hundreds of years. Loyalty is thus not only flexible, but it is capable of breathing life into America’s historically tepid privacy frameworks.

Chen on How Equalitarian Regulation of Online Hate Speech Turns Authoritarian: A Chinese Perspective

Ge Chen (Durham Law School) has posted “How Equalitarian Regulation of Online Hate Speech Turns Authoritarian: A Chinese Perspective” ((2022) Journal of Media Law 14(1)) on SSRN. Here is the abstract:

This article reveals how the heterogeneous legal approaches to balancing online hate speech against equality rights in liberal democracies have informed China in its manipulative speech regulation. In an authoritarian constitutional order, the regulation of hate speech is politically relevant only because the hateful topics are related to regime-oriented concerns. The article elaborates on the infrastructure of an emerging authoritarian regulatory patchwork of online hate speech in the global context and identifies China’s unique approach of restricting political content under the aegis of protecting equality rights. Ultimately, both the regulation and non-regulation of online hate speech form a statist approach that deviates from the paradigm protective of equality rights in liberal democracies and serves to fend off open criticism of government policies and public discussion of topics that potentially contravene mainstream political ideologies.

Whittaker on Corporate Capture of AI

Meredith Whittaker (NYU) has posted “The Steep Cost of Capture” on SSRN. Here is the abstract:

In considering how to tackle this onslaught of industrial AI, we must first recognize that the “advances” in AI celebrated over the past decade were not due to fundamental scientific breakthroughs in AI techniques. They were and are primarily the product of significantly concentrated data and compute resources that reside in the hands of a few large tech corporations. Modern AI is fundamentally dependent on corporate resources and business practices, and our increasing reliance on such AI cedes inordinate power over our lives and institutions to a handful of tech firms. It also gives these firms significant influence over both the direction of AI development and the academic institutions wishing to research it.

This means that tech firms are startlingly well positioned to shape what we do—and do not—know about AI and the business behind it, at the same time that their AI products are working to shape our lives and institutions.

Examining the history of the U.S. military’s influence over scientific research during the Cold War, we see parallels to the tech industry’s current influence over AI. This history also offers alarming examples of the way in which U.S. military dominance worked to shape academic knowledge production, and to punish those who dissented.

Today, the tech industry is facing mounting regulatory pressure, and is increasing its efforts to create tech-positive narratives and to silence and sideline critics in much the same way the U.S. military and its allies did in the past. Taken as a whole, we see that the tech industry’s dominance in AI research and knowledge production puts critical researchers and advocates within, and beyond, academia in a treacherous position. This threatens to deprive frontline communities, policymakers, and the public of vital knowledge about the costs and consequences of AI and the industry responsible for it—right at the time that this work is most needed.