Quinten Steenhuis (Suffolk University Law School) and David Colarusso (Suffolk University Law School) have posted “Digital Curb Cuts: Towards an Inclusive Open Forms Ecosystem” (Akron Law Review, Forthcoming) on SSRN. Here is the abstract:
In this paper we focus on digital curb cuts created during the pandemic: improvements designed to increase accessibility that benefit people beyond the population they are intended to help. As many as 86% of civil legal needs go unmet, according to a 2017 study by the Legal Services Corporation. Courts and third parties designed many innovations to meet the emergency needs of the pandemic: we argue that these innovations should be extended and enhanced to address this ongoing access to justice crisis. Specifically, we use the Suffolk University Law School’s Document Assembly Line as a case study. The Document Assembly Line rapidly automated more than two dozen court processes, providing pro se litigants remote, user-friendly, step-by-step guidance in areas such as domestic violence protection orders and emergency housing needs, and making them available at courtformsonline.org. The successes of this project can extend beyond the pandemic with the adoption of an open-source, open-standards ecosystem centered on document and form automation. We give special attention to the value of integrated electronic filing in serving the needs of litigants, a tool that has been underutilized in the non-profit form automation space because of its complexities and the difficulty of obtaining court cooperation.
Daniel Kiat Boon Seng (Director, Centre for Technology, Robotics, AI and the Law, Faculty of Law, National University of Singapore) has posted “Artificial Intelligence and Information Intermediaries” (Artificial Intelligence and Private Law 2021) on SSRN. Here is the abstract:
The explosive growth of the Internet was supported by the Communications Decency Act (CDA) and the Digital Millennium Copyright Act (DMCA). Together, these pieces of legislation have been credited with shielding Internet intermediaries from onerous liabilities and, in doing so, enabling the Internet to flourish. However, the use of machine learning systems by Internet intermediaries in their businesses threatens to upend this delicate legal balance. Would this affect the intermediaries’ CDA and DMCA immunities, or expose them to greater liability for their actions? Drawing on both substantive and empirical research, this paper concludes that automation used by intermediaries largely reinforces their immunities. The consequence is that intermediaries are left with little incentive to exercise their discretion to filter out illicit, harmful, and invalid content. These developments brought about by AI are worrisome and require a careful recalibration of the immunity rules in both the CDA and DMCA to ensure the continued relevance of these rules.
Daniel Maggen (Yale Law School) has posted “Predict and Suspect: The Emergence of Artificial Legal Meaning” (North Carolina Journal of Law and Technology, Vol. 23, No. 1, 2021) on SSRN. Here is the abstract:
Recent theoretical writings on the possibility that algorithms would someday be able to create law have deferred algorithmic lawmaking, and the need to decide on its legitimacy, to some future time in which algorithms would be able to replace human lawmakers. This Article argues that such discussions risk essentializing an anthropomorphic image of the algorithmic lawmaker as a unified decision-maker and divert attention away from algorithmic systems that are already performing functions that together have a profound effect on legal implementation, interpretation, and development. Adding to the rich scholarship on the distortive effects of algorithmic systems, the Article suggests that state-of-the-art algorithms capable of limited legal analysis can have the effect of preventing legal development. Such algorithm-induced ossification, the Article argues, raises questions of legitimacy that are no less consequential than those raised by some futuristic algorithms that can actively create norms.
To demonstrate this point, the Article puts forward a hypothetical example of algorithms performing limited legal analysis to assist healthcare professionals in reporting suspected child maltreatment. Already in use are systems performing risk analysis to aid child protective services in screening maltreatment reports. Drawing on the example of algorithms increasingly used today in social media content moderation, the Article suggests that similar systems could be used for flagging cases that show signs of suspected abuse. Such assistive systems, the Article argues, will likely cement the prevailing legal meaning of maltreatment. As mandated reporters increasingly rely on such systems, the result would be the absence of legal evolution, preventing changes to contentious elements in the legal definition of reportable suspicion, including the scope of acceptable physical disciplining. Together with the familiar effects of existing systems, this hypothetical system could have a profound influence on the path of the law on child maltreatment, equivalent in its significance to the effect autonomous algorithmic adjudication would have.
Aaron Cooper (Georgetown University Law Center) has posted “Congressional Surveillance” (American University Law Review, Vol. 70, No. 1799, 2021) on SSRN. Here is the abstract:
In recent years, Congress has increasingly used electronic surveillance in high-profile investigations. Reactions to what this Article calls “congressional surveillance” indicate a deep unease among both legal scholars and the broader public about the nature of Congress’s surveillance authority and its normative implications. Despite our ongoing preoccupation with government surveillance, congressional surveillance remains largely unexplored. There is virtually no discussion of how congressional surveillance is treated under key statutory and Fourth Amendment constraints; no consideration of the process or political limits of congressional surveillance; and little scrutiny of congressional surveillance as a tool within the separation of powers.
This Article fills that gap by presenting the first scholarly treatment of congressional surveillance. It argues that to address congressional surveillance, we must first understand its hybrid features of both government surveillance and congressional political power.
Specifically, the Article makes two contributions. First, the Article argues that congressional surveillance operates under fundamentally different constraints than traditional government surveillance. Congressional processes and politics (“process limits”) constrain congressional surveillance more than established statutory and Fourth Amendment mechanisms (“external limits”) or the inherent constraints of congressional authority (“internal limits”).
Second, this Article argues that congressional surveillance is justified as an essential practice within the separation of powers. It offers legitimate benefits to Congress in inter-branch information disputes with the executive and in carrying out basic digital governance. The Article also argues that the Supreme Court’s decision in Trump v. Mazars USA, LLP mistakes the privacy concern that congressional surveillance poses for a threat to the separation of powers. At the same time, this Article rejects the traditional law enforcement approach to protecting individual privacy through judicial gatekeeping. Instead, the Article argues that the treatment of congressional surveillance must account for individual privacy interests while preserving Congress’s ability to assert itself as a co-equal branch: not the Mazars approach, and not a law enforcement approach, but something different.
Daphne Keller (Stanford Cyber Policy Center) has posted “U.S. International Trade Commission Testimony” on SSRN. Here is the abstract:
This testimony responds to a Congressional inquiry on the subject of “foreign censorship.” It focuses on laws regulating Internet platforms and, because of the ITC’s trade focus, also prioritizes issues with potential economic impact.
The initial June 24, 2021 filing addresses
(1) competing concepts of “censorship” in platform regulation,
(2) the role of informal government pressure or “jawboning” on platforms’ global speech rules,
(3) recent developments in non-U.S. governments’ formal claims of extraterritorial jurisdiction to regulate speech,
(4) emerging jurisdictional “hardball” practices, including “hostage” provisions in national laws regulating platforms, and
(5) potential economic and competitive impact of recent and pending platform law developments.
The post-hearing submission, responding to questions raised in the July 1, 2021 hearing, addresses
(1) developments in India,
(2) state action in intermediary liability laws,
(3) studies attempting to quantify or otherwise empirically assess the economic impact of intermediary liability laws,
(4) threats to end-to-end encryption with both economic and speech-related consequences, with a list of experts and sources on the topic.
Clovia Hamilton (SUNY Korea), William Swart (East Carolina University), and Gerald M. Stokes (SUNY Korea) have posted “Developing a Measure of Social, Ethical, and Legal Content for Intelligent Cognitive Assistants” (Journal of Strategic Innovation and Sustainability 2021) on SSRN. Here is the abstract:
We address the issue of consumer privacy against the backdrop of the national priority of maintaining global leadership in artificial intelligence, the ongoing research in Artificial Cognitive Assistants, and the explosive growth in the development and application of Voice Activated Personal Assistants (VAPAs) such as Alexa and Siri, spurred on by the needs and opportunities arising out of the COVID-19 global pandemic. We first review the growth of, and legal issues associated with, VAPAs in private homes, banks, healthcare, and education. We then summarize the policy guidelines for the development of VAPAs and classify these into five major categories with associated traits. We follow by developing a relative importance weight for each of the traits and categories, and suggest the establishment of a rating system related to the legal, ethical, functional, and social content policy guidelines established by these organizations. We suggest the establishment of an agency that will use the proposed rating system to inform customers of the implications of adopting a particular VAPA in their sphere.