Kristina Irion (University of Amsterdam) et al. have posted “Governing ‘European values’ Inside Data Flows: Interdisciplinary Perspectives” (Internet Policy Review, 10(3)) on SSRN. Here is the abstract:
This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows. Protecting fundamental human rights and critical public interests that undergird European societies in a global digital ecosystem poses complex challenges, especially because the United States and China are leading in novel technologies. We envision a research agenda calling upon different disciplines to further identify and understand European values that can adequately perform under conditions of transnational data flows.
Stephen W. Smith (Stanford Law School Center for Internet and Society) has posted “Clouds on the Horizon: Cross-Border Surveillance Under the US CLOUD Act” on SSRN. Here is the abstract:
The CLOUD Act of 2018 was hailed by proponents as a significant breakthrough in the ability of U.S. law enforcement to obtain electronic data stored abroad. Far less attention has been paid to another law enforcement-friendly aspect of this law–enabling real-time surveillance in a foreign country. This chapter takes a closer look at CLOUD Act provisions that authorize, expressly or (perhaps) implicitly, live monitoring of activities by criminal suspects and others abroad. While wiretaps and pen registers are explicitly covered, two other common and extremely intrusive surveillance techniques–cell phone tracking and remote access computer monitoring (i.e. hacking)–are not mentioned at all. What are we to infer from their omission? That these common techniques are not covered at all? Or that they are covered, but buried under ambiguous verbiage unlikely to attract attention and generate opposition? At this point it is not obvious which is more likely to be the case.
This legal uncertainty is disconcerting for many reasons. First, to the extent the CLOUD Act authorizes U.S. law enforcement to unilaterally engage in real-time surveillance on foreign soil, it may violate the international law principle of territorial sovereignty. Second, U.S. jurisprudence is currently unsettled as applied to new surveillance techniques such as smartphone tracking and computer hacking; as a result, foreign governments might well be disinclined to enter into a CLOUD Act executive agreement with the U.S. permitting such activities on their soil. Finally, the extraterritorial impact of modern electronic surveillance can be dramatic, especially in the case of remote access to foreign servers and devices. Several EU countries have already recognized the special dangers posed by government hacking–to privacy, internet security, and foreign relations–and have developed a panoply of protections to mitigate those risks. By contrast, the U.S. has failed to enact any special substantive and procedural protections against the risks posed by such intrusive surveillance.
The CLOUD Act should be amended to unambiguously exclude coverage of real-time surveillance techniques. Until that is accomplished, any foreign power negotiating a CLOUD Act executive agreement should be aware of the limits and uncertainties of U.S. law concerning these surveillance methods, and insist upon robust legal standards and procedures governing their use.
Eugene Volokh (UCLA – School of Law) has posted “Treating Social Media Platforms Like Common Carriers?” (1 Journal of Free Speech Law 377 (2021)) on SSRN. Here is the abstract:
The rise of massively influential social media platforms—and their growing willingness to exclude certain material that can be central to political debates—raises, more powerfully than ever, the concerns about economic power being leveraged into political power. There is a plausible (though far from open-and-shut) argument that these concerns can justify requiring the platforms not to discriminate based on viewpoint in choosing what material they host, much as telephone companies and package delivery services are barred from such viewpoint discrimination. PruneYard Shopping Center v. Robins, Turner Broadcasting System v. FCC, and Rumsfeld v. FAIR suggest such common-carrier-like mandates would be constitutional. On the other hand, platforms do have the First Amendment right to choose what to affirmatively and selectively recommend to their users.
Benjamin Seymour (Yale Law School) has posted “The New Fintech Federalism” (24 Yale J.L. & Tech. (2022, Forthcoming)) on SSRN. Here is the abstract:
U.S. law has struggled to accommodate the rise of fintech. Instead, the United States has lumbered under a division of regulatory authority between the state and federal governments designed for a financial landscape comprised of banks and large, systemically important shadow banks.
To catch up to the market, state and federal officials have undertaken a diverse array of initiatives. Numerous regulators have relied on the prevailing paradigm of the past century, seeking to extend its already stretched logic into the realm of fintech and exacerbating its many shortcomings in the process. But several regulatory initiatives of the past decade have broken with prior thinking and charted a different path, one that redefines the relative realms of the federal and state governments and promises a legal regime suited to the technological realities of twenty-first century finance.
This emergent paradigm—the New Fintech Federalism—constitutes a radical reversal of the prior division of authority between state and federal actors. Through both cooperative and unilateral initiatives, the states are increasingly adopting an entity-based approach rooted in interstate reciprocity that inures the benefits of jurisdictional competition and reduces the costs of redundant mandates. Meanwhile, by focusing on financial activities, the federal government is pursuing a consumer protection framework less prone to arbitrage and a view of prudential risk suited to the fragmentation of fintech.
This Article is the first to identify the New Fintech Federalism, examining how its disparate set of legal experiments could revolutionize U.S. financial regulation. It also details a statutory intervention that would promote the interests of entrepreneurs and consumer protection advocates alike by codifying this emergent approach. Far from jettisoning federalism, this Article’s proposed legislation would harness the distinctive strengths of the state and federal governments to bolster America’s economic vitality and global competitiveness.
Jon Penney (University of Toronto) has posted “Understanding Chilling Effects” (106 Minnesota Law Review, forthcoming) on SSRN. Here is the abstract:
With digital surveillance and censorship on the rise, the amount of data available online unprecedented, and corporate and governmental actors increasingly employing emerging technologies like artificial intelligence (AI), machine learning, and facial recognition technology (FRT) for surveillance and data analytics, concerns about “chilling effects”, that is, the capacity of these activities to “chill” or deter people from exercising their rights and freedoms, have taken on greater urgency and importance. Yet there remains a clear dearth of systematic theoretical and empirical work on this point. This has left significant gaps in understanding. This article attempts to fill that void, synthesizing theoretical and empirical insights from law, privacy, and a range of social science fields toward a more comprehensive and unified understanding.
I argue that conventional theories, based on fear of legal or privacy harm, are narrow, empirically weak, cannot predict or explain chilling effects in a range of different contexts, and neglect their productive dimensions—how chilling effects shape behavior. Drawing extensively on social science literature, I argue that chilling effects are best understood as a form of social conformity. Chilling effects arise out of contexts of ambiguity and uncertainty—like the ambiguity of public or private sector surveillance—but have deeper psychological foundations as well. In moments of situational uncertainty, people conform to, and comply with, the relevant social norm in that context. Sometimes this means self-censorship, but most often it means more socially conforming speech or conduct. A theory of chilling effects as social conformity has important normative, theoretical, and empirical advantages, including greater explanatory and predictive power, clarifying what chilling effects theory is for and what it produces, as well as providing a basis to navigate competing and differing chilling effect claims. It also has implications, I argue, for constitutional standing as well as the First Amendment chilling effects doctrine.
Kelvin F.K. Low (NUS Law), Wai Yee Wan (City University of Hong Kong), and Ying-Chieh WU (SNU Law) have posted “The Future of Machines: Property and Personhood” (The Cambridge Handbook of Private Law and Artificial Intelligence, Forthcoming) on SSRN. Here is the abstract:
The use of tools was once believed to be a distinguishing feature of human intelligence which allowed us to deny personhood to animals, which like tools, were property rather than persons. As we get increasingly dependent on our increasingly sophisticated tools, the law will need to consider when (if ever) machines cease to be mere tools and become a part of our person. Could they even increase in sophistication to the point when they may be conferred legal personhood? Or will rapidly advancing machine intelligence first strip us of our personhood? Might the law of property prove to be a bulwark against such an outcome?
Jens Ludwig (Georgetown University; NBER) and Sendhil Mullainathan (University of Chicago) have posted “Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System” on SSRN. Here is an excerpt:
One reason for their fragility comes from important econometric problems that are often overlooked in building algorithms. Decades of empirical work by economists show that in almost every data application the data is incomplete, not fully representing either the objectives or the information that decision-makers possess. For example, judges rely on much more information than is available to algorithms, and judges’ goals are often not well-represented by the outcomes provided to algorithms. These problems, familiar to economists, riddle every case where algorithms are being applied. […] Existing regulations provide weak incentives for those building or buying algorithms, and little ability to police these choices.
For a method of providing stronger incentives for those building and buying algorithms, see Frank Fagan & Saul Levmore, Competing Algorithms for Law, 88 U. Chicago Law Rev.
Quinten Steenhuis (Suffolk University Law School) and David Colarusso (Suffolk University Law School) have posted “Digital Curb Cuts: Towards an Inclusive Open Forms Ecosystem” (Akron Law Review, Forthcoming) on SSRN. Here is the abstract:
In this paper we focus on digital curb cuts created during the pandemic: improvements designed to increase accessibility that benefit people beyond the population that they are intended to help. As much as 86% of civil legal needs are unmet, according to a 2017 study by the Legal Services Corporation. Courts and third parties designed many innovations to meet the emergency needs of the pandemic: we argue that these innovations should be extended and enhanced to address this ongoing access to justice crisis. Specifically, we use the Suffolk University Law School’s Document Assembly Line as a case study. The Document Assembly Line rapidly automated more than two dozen court processes, providing pro se litigants remote, user-friendly, step-by-step guidance in areas such as domestic violence protection orders and emergency housing needs and made them available at courtformsonline.org. The successes of this project can extend beyond the pandemic with the adoption of an open-source, open-standards ecosystem centered on document and form automation. We give special attention to the value of integrated electronic filing in serving the needs of litigants, a tool that has been underutilized in the non-profit form automation space because of complexities and the difficulty in obtaining court cooperation.
Daniel Kiat Boon Seng (Director, Centre for Technology, Robotics, AI and the Law, Faculty of Law, National University of Singapore) has posted “Artificial Intelligence and Information Intermediaries” (Artificial Intelligence and Private Law 2021) on SSRN. Here is the abstract:
The explosive growth of the Internet was supported by the Communications Decency Act (CDA) and the Digital Millennium Copyright Act (DMCA). Together, these pieces of legislation have been credited with shielding Internet intermediaries from onerous liabilities and, in doing so, enabling the Internet to flourish. However, the use of machine learning systems by Internet intermediaries in their businesses threatens to upend this delicate legal balance. Would this affect the intermediaries’ CDA and DMCA immunities, or expose them to greater liability for their actions? Drawing on both substantive and empirical research, this paper concludes that automation used by intermediaries largely reinforces their immunities. The consequence is that intermediaries are left with little incentive to exercise their discretion to filter out illicit, harmful and invalid content. These developments brought about by AI are worrisome and require a careful recalibration of the immunity rules in both the CDA and DMCA to ensure the continued relevance of these rules.
Daniel Maggen (Yale Law School) has posted “Predict and Suspect: The Emergence of Artificial Legal Meaning” (North Carolina Journal of Law and Technology, Vol. 23, No. 1, 2021) on SSRN. Here is the abstract:
Recent theoretical writings on the possibility that algorithms would someday be able to create law have deferred algorithmic law-making, and the need to decide on its legitimacy, to some future time in which algorithms would be able to replace human lawmakers. This Article argues that such discussions risk essentializing an anthropomorphic image of the algorithmic lawmaker as a unified decision-maker and divert attention away from algorithmic systems that are already performing functions that together have a profound effect on legal implementation, interpretation, and development. Adding to the rich scholarship on the distortive effects of algorithmic systems, the Article suggests that state-of-the-art algorithms capable of limited legal analysis can have the effect of preventing legal development. Such algorithm-induced ossification, the Article argues, raises questions of legitimacy that are no less consequential than those raised by some futuristic algorithms that can actively create norms.
To demonstrate this point, the Article puts forward a hypothetical example of algorithms performing limited legal analysis to assist healthcare professionals in reporting suspected child maltreatment. Already in use are systems performing risk analysis to aid child protective services in screening maltreatment reports. Drawing on the example of algorithms increasingly used today in social media content moderation, the Article suggests that similar systems could be used for flagging cases that show signs of suspected abuse. Such assistive systems, the Article argues, will likely cement the prevailing legal meaning of maltreatment. As mandated reporters increasingly rely on such systems, the result would be the absence of legal evolution, preventing changes to contentious elements in the legal definition of reportable suspicion, including the scope of acceptable physical disciplining. Together with the familiar effects of existing systems, this hypothetical system could have a profound effect on the path of the law on child maltreatment, equivalent in its significance to the effect autonomous algorithmic adjudication would have.