James Grimmelmann (Cornell Law School; Cornell Tech) & A. Jason Windawi (Princeton University – Department of Sociology) have posted “Blockchains as Infrastructure and Semicommons” (William & Mary Law Review (2023, Forthcoming)) on SSRN. Here is the abstract:
Blockchains are not self-executing machines. They are resource systems, designed by people, maintained by people, and governed by people. Their technical protocols help to solve some difficult problems in shared resource management, but behind those protocols there are always communities of people struggling with familiar challenges in governing their provision and use of common infrastructure.
In this Essay, we describe blockchains as shared, distributed transactional ledgers using two frameworks from commons theory. Brett Frischmann’s theory of infrastructure provides an external view, showing how blockchains provide useful, generic infrastructure for recording transactions, and why that infrastructure is most naturally made available on common, non-discriminatory terms. Henry Smith’s theory of semicommons provides an internal view, showing how blockchains intricately combine private resources (such as physical hardware and on-chain assets) with common resources (such as the shared transactional ledger and the blockchain protocol itself). We then detail how blockchains struggle with many of the governance challenges that these frameworks predict, requiring blockchain communities to engage in extensive off-chain governance work to coordinate their uses and achieve consensus. Blockchains function as infrastructure and semicommons not in spite of the human element, but because of it.
Karen Yeung (The University of Birmingham) has posted “Constitutional Principles in a Networked Digital Society” on SSRN. Here is the abstract:
This is the text of a keynote address delivered at the International Association of Constitutional Law (IACL) Roundtable, The Impact of Digitization on Constitutional Law, Copenhagen, on 31 January 2022. In this short address, I ask: are our existing constitutional principles fit for purpose in an increasingly datafied, networked digital age? I suggest that our constitutional principles, including our rights discourse, have the potential to adapt to meet the altered conditions of our increasingly digitised and datafied age, but whether they will succeed in doing so remains an open question.
Elad Gil (Hebrew University of Jerusalem – Faculty of Law) has posted “Cyber Checks and Balances” (Cornell International Law Journal, Forthcoming) on SSRN. Here is the abstract:
How does the digital era affect the ability of governments to ‘govern’? On the one hand, global connectivity and data-driven technologies provide governments with powerful new ways to exercise coercion. Digital surveillance, content takedowns (i.e., censorship), forced data ‘localization’, and hacking, to take a few examples, have become widely adopted techniques in the toolkits of many democratic states. These techniques enable encroachments on liberty that only two decades ago would have seemed unthinkable. On the other hand, the exclusive status of the state as “the sovereign” is challenged in cyberspace more than in any other arena by a variety of non-state actors, as well as foreign states. Scholarly accounts accordingly split between two narratives: some scholars view the digital era as the beginning of an era of awesome state power, while others see signs of state decline.
This Article challenges both narratives, arguing that ‘government power’ in cyberspace cannot be theorized as a static concept. Rather, it is determined by a web of interactions with and pressures from forces and actors that, although operating outside the constitutional structure, are akin in their effect to constitutional checks and balances. Aiming to fill a gap in the literature, this Article conceptualizes the cyber checks and balances ecosystem, identifies and analyzes its four principal components—the private sector, the ‘architecture’ of cyberspace, international law, and international politics—and examines the interwoven effects. It demonstrates how cyber checks and balances constrain the government in some ways but empower it in others, sometimes even enabling the government to circumvent legal limitations on its own authority. After mapping this ecosystem, the Article assesses its normative implications. Viewing the balance of power between the state and other forces in cyberspace as a system of checks and balances affords a more accurate and nuanced analysis of governmental exercises of power in the digital domain. More importantly, this Article shows that understanding how this ecosystem is shaping state power can help the traditional forces within the constitutional system—lawmakers, judges, and executive gatekeepers—optimize their checking and balancing, ensuring that government power in cyberspace is exercised effectively yet responsibly.
Céline Castets-Renard (University of Toulouse) has posted “Human Rights and Algorithmic Impact Assessment for Predictive Policing” (Constitutional Challenges in the Algorithmic Society, CUP 2021) on SSRN. Here is the abstract:
Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. For instance, New York, Chicago, and Los Angeles use predictive policing systems built by private actors, such as PredPol, Palantir, and Hunchlab, to assess crime risk and forecast its occurrence, in the hope of mitigating it. Most often, such systems predict the places where crimes are most likely to happen in a given time window (place-based) based on input data, such as the location and timing of previously reported crimes. Other systems analyze who will be involved in a crime as either victim or perpetrator (person-based). Predictions can focus on variables such as places, people, groups, or incidents. The goal is also to better deploy officers in a time of declining budgets and staffing. Such tools are mainly used in the US, but European police forces have expressed an interest in using them to protect the largest cities. Predictive policing systems and pilot projects have already been deployed, such as PredPol, used by the Kent Police in the UK.
However, these predictive systems challenge fundamental rights and guarantees of criminal procedure (Part 2). I will address these issues by taking into account the enactment of ethical norms to reinforce constitutional rights (Part 3), as well as the use of a practical tool, namely Algorithmic Impact Assessment, to mitigate the risks of such systems (Part 4).
Adam M. Gershowitz (William & Mary Law School) has posted “The Tesla Meets the Fourth Amendment” on SSRN. Here is the abstract:
Can police search a smart car’s computer without a warrant? Although the Supreme Court banned warrantless searches of cell phones incident to arrest in Riley v. California, the Court left the door open to warrantless searches under other exceptions to the warrant requirement. This article argues that the Fourth Amendment’s automobile exception currently permits the police to warrantlessly dig into a vehicle’s computer system and extract vast amounts of cell phone data. Just as the police can rip open seats or slash tires to search for drugs under the automobile exception, there is a strong argument that the police can warrantlessly extract data stored in a vehicle’s infotainment system.
When a driver uses Bluetooth to connect their cell phone to a vehicle, the driver transfers text messages, call history, contacts, emails, photos, videos, and even social media information from their phone to the car’s infotainment system. Police departments can then use a sophisticated data extraction device to download all of that cell phone data.
Police in multiple states have already acknowledged extracting rudimentary digital data from cars without a warrant. As Tesla and other smart cars become ubiquitous, police departments will be tempted to use more sophisticated data extraction tools to examine private cell phone data without first obtaining a warrant. Because the Supreme Court moves extremely slowly in addressing the legality of high-tech searches, this article argues that Congress and state legislatures should amend outdated privacy statutes to require police to obtain search warrants before extracting private cell phone data from a vehicle’s computer system.
Eric Alston (Finance Division, University of Colorado Boulder), Wilson Law (Baylor University),
Ilia Murtazashvili (University of Pittsburgh – Graduate School of Public and International Affairs), and Martin B. H. Weiss (University of Pittsburgh – School of Computing and Information) have posted “Blockchain Networks as Constitutional and Competitive Polycentric Orders” on SSRN. Here is the abstract:
Permissionless blockchains have been described as a novel institutional building block for voluntary economic exchange, with unique protocol features such as automated contract execution, high levels of network and process transparency, and uniquely distributed governance. We argue that conventional institutional economics analysis of blockchain networks is incomplete absent a more robust application of descriptive polycentric analysis. Though the distributed governance that permissionless blockchain protocols provide is novel, these networks nonetheless require ongoing coordination among stakeholders and are subject to competitive pressures much like other private organizations pursuing similar goals for a set of users who can choose among providers. We characterize change on blockchain networks as resulting from internal and external sources. The internal sources include constitutional (protocol) design and the related need for collective choice processes to update protocols. In addition to law and regulation, competitive pressure is itself a critical external source of governance. Predominantly through analysis of two leading cryptocurrency networks, Bitcoin and Ethereum, we illustrate how conceptualizing blockchain as a polycentric enterprise enhances our predictive and descriptive understanding of these networks.
Rashida Richardson (Northeastern University School of Law) and Amba Kak (New York University (NYU)) have posted “Suspect Development Systems: Databasing Marginality and Enforcing Discipline” (University of Michigan Journal of Law Reform, Vol. 55, Forthcoming) on SSRN. Here is the abstract:
Algorithmic accountability law — focused on the regulation of data-driven systems like artificial intelligence (AI) or automated decision-making (ADM) tools — is the subject of lively policy debates, heated advocacy, and mainstream media attention. Concerns have moved beyond data protection and individual due process to encompass a broader range of group-level harms, such as discrimination and modes of democratic participation. While this is a welcome and long overdue shift, this discourse has ignored systems, like databases, that are viewed as technically ‘rudimentary’ and often siloed from regulatory scrutiny and public attention. Additionally, burgeoning regulatory proposals like algorithmic impact assessments are not structured to surface important yet often overlooked social, organizational, and political economy contexts that are critical to evaluating the practical functions and outcomes of technological systems.
This article presents a new categorical lens and analytical framework that aims to address and overcome these limitations. “Suspect Development Systems” (SDS) refers to: (1) information technologies used by government and private actors, (2) to manage vague or often immeasurable social risk based on presumed or real social conditions (e.g., violence, corruption, substance abuse), (3) that subject targeted individuals or groups to greater suspicion, differential treatment, and more punitive and exclusionary outcomes. This frame includes some of the most recent and egregious examples of data-driven tools (such as predictive policing or risk assessments) but, critically, it is also inclusive of a broader range of database systems that are currently at the margins of technology policy discourse. By examining the use of various criminal intelligence databases in India, the United Kingdom, and the United States, we develop a framework of five categories of features (technical, legal, political economy, organizational, and social) that together and separately influence how these technologies function in practice, the ways they are used, and the outcomes they produce. We then apply this analytical framework to welfare system databases, universal or ID number databases, and citizenship databases to demonstrate the value of this framework in both identifying and evaluating emergent or under-examined technologies in other sensitive social domains.
Suspect Development Systems is an intervention in legal scholarship and practice as it provides a much-needed definitional and analytical framework for understanding an ever-evolving ecosystem of technologies embedded and employed in modern governance. Our analysis also helps redirect attention toward important yet often under-examined contexts, conditions, and consequences that are pertinent to the development of meaningful legislative or regulatory interventions in the field of algorithmic accountability. The cross-jurisdictional evidence put forth across this Article illuminates the value of examining commonalities between the Global North and South to inform our understanding of how seemingly disparate technologies and contexts are in fact coaxial, which is the basis for building more global solidarity.
Alan Z. Rozenshtein (University of Minnesota Law School) has posted “Cost-Benefit Analysis and the Digital Fourth Amendment” (40 Criminal Justice Ethics (2021 Forthcoming)) (reviewing Ric Simmons, Smart Surveillance: How to Interpret the Fourth Amendment in the Twenty-First Century (2019)) on SSRN. Here is the abstract:
In “Smart Surveillance,” Ric Simmons argues for the application of cost-benefit analysis (CBA) to digital surveillance. This review argues that, although Simmons is right to look to CBA as a tool for applying the Fourth Amendment to new technology, his faith in the courts as the main practitioners of surveillance CBA is misguided. Across a variety of dimensions of institutional competence, the political branches, not the courts, are best placed to make surveillance policy under conditions of technological change.
Amy Cyphert (WVU College of Law) has posted “Reprogramming Recidivism: The First Step Act and Algorithmic Prediction of Risk” (Seton Hall Law Review, Vol. 51, 2020) on SSRN. Here is the abstract:
The First Step Act, a seemingly miraculous bipartisan criminal justice reform bill, was signed into law in late 2018. The Act directed the Attorney General to develop a risk and needs assessment tool that would effectively determine who would be eligible for early release based on an algorithmic prediction of recidivism. The resulting tool—PATTERN—was released in the summer of 2019 and quickly updated in January of 2020. It was immediately put to use in an unexpected manner, helping to determine who was eligible for early release during the COVID-19 pandemic. It is now the latest in a growing list of algorithmic recidivism prediction tools, tools that first came to mainstream notice with critical reporting about the COMPAS sentencing algorithm.
This Article evaluates PATTERN, both in its development as well as its still-evolving implementation. In some ways, the PATTERN algorithm represents tentative steps in the right direction on issues like transparency, public input, and use of dynamic factors. But PATTERN, like many algorithmic decision-making tools, will have a disproportionate impact on Black inmates; it provides fewer opportunities for inmates to reduce their risk score than it claims and is still shrouded in some secrecy due to the government’s decision to dismiss repeated calls to release more information about it. Perhaps most perplexing, it is unclear whether the tool actually advances accuracy with its predictions. This Article concludes that PATTERN is a decent first step, but it still has a long way to go before it is truly reformative.
The Download of the Week is “Internet Federalism” by Tejas N. Narechania (University of California, Berkeley – School of Law) and Erik Stallman (University of California, Berkeley – School of Law). Here is the abstract:
The internet is not a cloud. Rather, it is a series of real cables, wires, radio links, and switches connecting places that employ real people and house real computers. Each of these components of the internet’s infrastructure operates in a distinct locale and is subject to distinct market constraints.
This basic yet significant descriptive insight has important implications for regulatory cooperation and competition between state and federal authorities over broadband internet access. In the latest battles over network neutrality regulation, for example, state and federal regulators have found themselves at odds over the scope of their respective authority. The Federal Communications Commission has suggested that the internet’s “inherently interstate” nature, together with its own disavowal of regulatory power over broadband carriage, preempts state regulatory power. But some states insist that they retain the authority to regulate various aspects of these services, local and beyond.
So where on the internet does state power end and federal power begin? The long history of competition and cooperation between state and federal communications regulators suggests a tradition—incomplete and imperfect in places—of subsidiarity, locating decisional power at the most immediately salient local jurisdiction. Such an approach can—and should—inform the allocation of shared regulatory authority over the internet’s infrastructure and services, too. The internet is not one grand, monolithic, interstate thing, immune from state and local regulation under the dormant commerce clause, doctrines of federal preemption, or related limits on state power. Rather, whether states may (and should) regulate depends on technical specifics and regulatory effects. Hence, where local concerns predominate, local authorities may be our most competent regulators; and where federal concerns predominate, federal authorities should take the regulatory reins. Such a pragmatic approach yields important lessons for regulatory power online. Some aspects of internet service require a local touch. Others do not. But identifying an appropriate site for resolving such policy matters demands a close understanding of the technical, market, and regulatory structure of the communications networks that form the modern internet.