Calo on Modeling Through

Ryan Calo (U Washington School of Law) has posted “Modeling Through” (Duke Law Journal, Vol. 72, Forthcoming 2021) on SSRN. Here is the abstract:

Theorists of justice have long imagined a decision-maker capable of acting wisely in every circumstance. Policymakers seldom live up to this ideal. They face well-understood limits, including an inability to anticipate the societal impacts of state intervention along a range of dimensions and values. Policymakers cannot see around corners or address societal problems at their roots. When it comes to regulation and policy-setting, policymakers are often forced, in the memorable words of political economist Charles Lindblom, to “muddle through” as best they can.

Powerful new affordances, from supercomputing to artificial intelligence, have arisen in the decades since Lindblom’s 1959 article that stand to enhance policymaking. Computer-aided modeling holds promise in delivering on the broader goals of forecasting and system analysis developed in the 1970s, arming policymakers with the means to anticipate the impacts of state intervention along several lines—to model, instead of muddle. A few policymakers have already dipped a toe into these waters; others are being told that the water is warm.

The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill-equipped to address. Concerns include privacy, brittleness, and automation bias, of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm, but there are sharks in it.

These tensions are not new. And there is danger in hewing to the status quo. (We should still pursue renewable energy even though wind turbines as presently configured waste energy and kill wildlife.) As modeling through gains traction, however, policymakers, constituents, and academic critics must remain vigilant. This being early days, American society is uniquely positioned to shape the transition from muddling to modeling.

Recommended.

Di Porto on Artificial Intelligence and Competition Law: A Computational Analysis of the DMA and DSA

Fabiana Di Porto (University of Salento; LUISS; Hebrew University) has posted “Artificial Intelligence and Competition Law. A Computational Analysis of the DMA and DSA” (Concurrences, 3 (2021)) on SSRN. Here is the abstract:

This Article investigates whether all stakeholder groups share the same understanding and use of the relevant terms and concepts of the DSA and DMA. Leveraging the power of computational text analysis, we find significant differences in the employment of terms like “gatekeepers,” “self-preferencing,” “collusion,” and others in the position papers of the consultation process that informed the drafting of the two latest Commission proposals. Added to that, sentiment analysis shows that in some cases these differences also come with dissimilar attitudes. While this may not be surprising for new concepts such as gatekeepers or self-preferencing, the same is not true for other terms, like “self-regulatory,” which not only is used differently by stakeholders but is also viewed more favorably by medium and big companies and organizations than by small ones. We conclude by sketching out how different computational text analysis tools could be combined to provide many helpful insights for both rulemakers and legal scholars.
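The abstract describes the method only at a high level. As a rough, hypothetical illustration of the kind of computational text analysis involved—comparing how often different stakeholder groups use a term and attaching a simple lexicon-based sentiment score—consider the sketch below; the corpus, lexicon, and group labels are invented for illustration and are not drawn from the paper.

```python
from collections import Counter
import re

# Illustrative only: a tiny invented corpus of position papers, grouped by stakeholder type.
position_papers = {
    "large_platforms": ["Self-regulatory codes of conduct are effective and flexible tools."],
    "smes": ["Self-regulatory approaches have failed; gatekeepers require binding obligations."],
}

# Minimal sentiment lexicon; a real study would use a validated lexicon or a trained model.
POSITIVE = {"effective", "flexible", "helpful"}
NEGATIVE = {"failed", "harmful", "unfair"}

def tokenize(text):
    return re.findall(r"[a-z][a-z\-]*", text.lower())

def term_share(texts, term):
    """Share of a group's tokens that match the term of interest."""
    tokens = [tok for t in texts for tok in tokenize(t)]
    return tokens.count(term) / max(len(tokens), 1)

def sentiment(texts):
    """Crude polarity score: (positive hits - negative hits) / total tokens."""
    counts = Counter(tok for t in texts for tok in tokenize(t))
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return (pos - neg) / max(sum(counts.values()), 1)

for group, texts in position_papers.items():
    print(group, round(term_share(texts, "self-regulatory"), 3), round(sentiment(texts), 3))
```

A study like Di Porto’s would, of course, run such comparisons over the full set of consultation position papers with validated sentiment tools rather than a toy lexicon.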

Schwemer, Tomada & Pasini on Legal AI Systems in the EU’s proposed Artificial Intelligence Act

Sebastian Felix Schwemer (University of Copenhagen), Letizia Tomada (University of Copenhagen), and Tommaso Pasini (University of Copenhagen) have posted “Legal AI Systems in the EU’s proposed Artificial Intelligence Act” on SSRN. Here is the abstract:

In this paper we examine how human-machine interaction in the legal sector is suggested to be regulated in the EU’s recently proposed Artificial Intelligence Act. First, we provide a brief background and overview of the proposal. Then we turn towards the assessment of high-risk AI systems for legal tasks as well as the obligations for such AI systems in terms of human-machine interaction. We argue that whereas the proposed definition of AI system is broad, the concrete high-risk area of ‘administration of justice and democratic processes’, despite coming with considerable legal uncertainty, is narrow and unlikely to extend into many uses of legal AI and IA systems. Nonetheless, these regulatory developments may be of great relevance for current and future legal AI and IA systems.

Bruner on Artificially Intelligent Boards and the Future of Delaware Corporate Law

Christopher M. Bruner (University of Georgia School of Law) has posted “Artificially Intelligent Boards and the Future of Delaware Corporate Law” on SSRN. Here is the abstract:

The prospects for Artificial Intelligence (AI) to impact the development of Delaware corporate law are at once over- and under-stated. As a general matter, claims to the effect that AI systems might ultimately displace human directors not only exaggerate the foreseeable technological potential of these systems, but also tend to ignore doctrinal and institutional impediments intrinsic to Delaware’s competitive model – notably, heavy reliance on nuanced and context-specific applications of the fiduciary duty of loyalty by a true court of equity. At the same time, however, there are specific applications of AI systems that might not merely be accommodated by Delaware corporate law, but perhaps eventually required. Such an outcome would appear most plausible in the oversight context, where fiduciary loyalty has been interpreted to require good faith effort to adopt a reasonable compliance monitoring system, an approach driven by an implicit cost-benefit analysis that could lean decisively in favor of AI-based approaches in the foreseeable future.
This article discusses the prospects for AI to impact Delaware corporate law in both general and specific respects and evaluates their significance. Section II describes the current state of the technology and argues that AI systems are unlikely to develop to the point that they could displace the full range of functions performed by human boards in the foreseeable future. Section III, then, argues that even if the technology were to achieve more impressive results in the near-term than I anticipate, acceptance of non-human directors would likely be blunted by doctrinal and institutional structures that place equity at the very heart of Delaware corporate law. Section IV, however, suggests that there are nevertheless discrete areas within Delaware corporate law where reliance by human directors upon AI systems for assistance in board decision-making might not merely be accommodated, but eventually required. This appears particularly plausible in the oversight context, where fiduciary loyalty has become intrinsically linked with adoption of compliance monitoring systems that are themselves increasingly likely to incorporate AI technologies. Section V briefly concludes.

Case on Google, Big Data, & Antitrust

Megan Case (University of Maryland School of Law) has posted “Google, Big Data, & Antitrust” (Delaware Journal of Corporate Law (Vol. 46, Issue No. 2)) on SSRN. Here is the abstract:

Google occupies a powerful position within the United States economy, a position which many have begun to consider too powerful. Google’s power is derived almost entirely from how it uses the billions of pieces of information it collects on its users—a collection of information known as big data. Since last October, five separate antitrust lawsuits have been filed against Google by multiple states and the Department of Justice. Several sweeping antitrust reform measures have also been proposed in Congress to target big tech companies.

This Article discusses the unique antitrust challenges posed by companies like Google and argues that those challenges can be addressed without a massive overhaul of antitrust law. In doing so, it builds on recent legal scholarship that advocates for more vigorous antitrust scrutiny of mergers and acquisitions and more aggressive treatment of exclusionary conduct by dominant firms. While this Article echoes such recommendations, it develops an added focus on the manner in which dominant firms use big data to properly diagnose the unique anticompetitive concerns raised by companies like Google. In order to successfully keep antitrust enforcement abreast of the challenges of our growing digital economy, antitrust authorities must begin to emphasize the central role big data plays in today’s digital arena. This approach yields an important normative insight: the sweeping legislative reforms proposed under the guise of protecting competition in the digital age could have an opposite and chilling effect on competition and innovation.

Hamilton on Platform-Enabled Crimes

Rebecca J. Hamilton (American University – Washington College of Law) has posted “Platform-Enabled Crimes” (B.C. L. Rev. (forthcoming 2022)) on SSRN. Here is the abstract:

Online intermediaries are omnipresent. Each day, across the globe, the corporations that run these platforms execute policies and practices that serve their profit model, typically by sustaining user engagement. Sometimes, these seemingly banal business activities enable principal perpetrators to commit crimes; yet online intermediaries are almost never held to account for their complicity in the resulting harms.

This Article introduces the term and concept of platform-enabled crimes into the legal literature to draw attention to the way that the ordinary business activities of online intermediaries can enable the commission of crime. It then singles out a subset of platform-enabled crimes—those where a social media company has facilitated international crimes—for the purpose of understanding and addressing the accountability gap associated with them.

Adopting a survivor-centered methodology, and using Facebook’s complicity in the Rohingya genocide in Myanmar as a case study, this Article begins the work of addressing the accountability deficit for platform-enabled crimes. It advances a menu of options to be pursued in parallel, including amending domestic legislation, strengthening transnational cooperation between international and domestic prosecutors for criminal and civil corporate liability cases, and pursuing de-monopolizing regulatory action. I conclude by acknowledging that the advent of platform-enabled crimes is not something that any single body of law is equipped to respond to. However, by pursuing a plurality of options to address this previously overlooked form of criminal facilitation, we can make a vast improvement on the status quo.

Zambrano, Guha & Henderson on Vulnerabilities in Discovery Tech

Diego Zambrano (Stanford), Neel Guha (Stanford), and Peter Henderson (Stanford) have posted “Vulnerabilities in Discovery Tech” (Harvard Journal of Law & Technology, 2022 Forthcoming) on SSRN. Here is the abstract:

Recent technological advances are changing the litigation landscape, especially in the context of discovery. For nearly two decades, technologies have reinvented document searches in complex litigation, normalizing the use of machine learning algorithms under the umbrella of “Technology Assisted Review” (TAR). But the latest technological developments are placing discovery beyond the reach of attorney understanding and firmly in the realm of computer science and engineering. As lawyers struggle to keep up, a creeping sense of anxiety is spreading in the legal profession about a lack of transparency and the potential for discovery abuse. Judges, attorneys, bar associations, and scholars warn that lawyers need to closely supervise the technical aspects of TAR and avoid the dangers of sabotage, intentional hacking, or abuse. But none of these commentators have defined with precision what the risks entail, furnished a clear outline of potential dangers, or defined the appropriate boundaries of debate.

This Article provides the first systematic assessment of the potential for abuse in technology-assisted discovery. The Article offers three contributions. First, our most basic aim is to provide a technical but accessible assessment of vulnerabilities in the TAR process. To do so, we use the latest computer science research to identify and catalogue the different ways that TAR can go awry, either due to intentional abuse or mistakes. Second, with a better understanding of how discovery can be subverted, we then map potential remedies and reassess current debates in a more helpful light. The upshot is that abuse of technology-assisted discovery is possible but preventable if the right review processes are in place. Finally, we propose reforms to improve the system in the short and medium term, with an emphasis on improved metrics that can more fully measure the quality of TAR. By exploring the technical background of discovery abuse, the Article demystifies the engineering substrate of modern discovery. Undertaking this study shows that lawyers can safeguard technology-assisted discovery without surrendering professional jurisdiction to engineers.
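The abstract gestures at “improved metrics that can more fully measure the quality of TAR” without specifying them. By way of background rather than a reconstruction of the authors’ proposal, the sketch below computes the quality measures most commonly discussed in the TAR literature—recall, precision, and elusion—over invented document labels.

```python
# Illustrative sketch of common TAR quality metrics (recall, precision, elusion).
# The document IDs and labels are invented; a real validation protocol relies on a
# sampled, human-coded control set rather than full ground truth.

def tar_metrics(produced, relevant, corpus_size):
    """produced/relevant are sets of document IDs; corpus_size is the total reviewed population."""
    true_pos = len(produced & relevant)
    recall = true_pos / len(relevant) if relevant else 1.0
    precision = true_pos / len(produced) if produced else 0.0
    # Elusion: share of the discard pile (documents not produced) that is actually relevant.
    not_produced = corpus_size - len(produced)
    missed = len(relevant - produced)
    elusion = missed / not_produced if not_produced else 0.0
    return {"recall": recall, "precision": precision, "elusion": elusion}

produced = {1, 2, 3, 4}        # documents the TAR process marked responsive
relevant = {2, 3, 4, 5, 6}     # documents that are in fact responsive
print(tar_metrics(produced, relevant, corpus_size=10))
```

Metrics like these matter for the Article’s argument because they are the main way a requesting party or court can test whether a TAR process has been subverted without re-reviewing the entire corpus.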

Sunstein on Governing by Algorithm? No Noise and (Potentially) Less Bias

Cass R. Sunstein (Harvard Law School) has posted “Governing by Algorithm? No Noise and (Potentially) Less Bias” on SSRN. Here is the abstract:

As intuitive statisticians, human beings suffer from identifiable biases, cognitive and otherwise. Human beings can also be “noisy,” in the sense that their judgments show unwanted variability. As a result, public institutions, including those that consist of administrative prosecutors and adjudicators, can be biased, noisy, or both. Both bias and noise produce errors. Algorithms eliminate noise, and that is important; to the extent that they do so, they prevent unequal treatment and reduce errors. In addition, algorithms do not use mental short-cuts; they rely on statistical predictors, which means that they can counteract or even eliminate cognitive biases. At the same time, the use of algorithms by administrative agencies raises many legitimate questions and doubts. Among other things, they can encode or perpetuate discrimination, perhaps because their inputs are based on discrimination, perhaps because what they are asked to predict is infected by discrimination. But if the goal is to eliminate discrimination, properly constructed algorithms nonetheless have a great deal of promise for administrative agencies.
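For readers unfamiliar with the bias/noise distinction Sunstein draws, a small simulation may help: biased judgments are systematically off in one direction, noisy judgments vary from case to case, and a deterministic algorithm, whatever its bias, produces zero noise. The numbers below are invented purely for illustration.

```python
import random
import statistics

random.seed(0)

# Illustrative only: suppose the "correct" penalty in every identical case is 100.
TRUE_VALUE = 100

def human_judgment():
    # Biased (systematically high by 10) and noisy (varies from case to case).
    return TRUE_VALUE + 10 + random.gauss(0, 15)

def algorithmic_judgment():
    # Deterministic: possibly biased, but identical cases receive identical outcomes (no noise).
    return TRUE_VALUE + 5

human = [human_judgment() for _ in range(1000)]
algo = [algorithmic_judgment() for _ in range(1000)]

print("human mean error:", round(statistics.mean(human) - TRUE_VALUE, 1),
      "| noise (sd):", round(statistics.stdev(human), 1))
print("algorithm mean error:", round(statistics.mean(algo) - TRUE_VALUE, 1),
      "| noise (sd):", round(statistics.stdev(algo), 1))
```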

Lessig on the First Amendment and Replicants

Lawrence Lessig (Harvard Law School) has posted “The First Amendment Does Not Protect Replicants” (Social Media and Democracy (Lee Bollinger & Geoffrey Stone, eds., Oxford 2022), Forthcoming) on SSRN. Here is the abstract:

As the semantic capability of computer systems increases, the law should resolve clearly whether the First Amendment protects machine speech. This essay argues it should not be read to reach sufficiently sophisticated — “replicant” — speech.

Rauch on Customized Speech and the First Amendment

Daniel Rauch (Yale Law School) has posted “Customized Speech and the First Amendment” (Harvard Journal of Law & Technology, Vol. 35, 2022 Forthcoming) on SSRN. Here is the abstract:

Customized Speech — speech targeted or tailored based on knowledge of one’s audience — is pervasive. It permeates our relationships, our culture, and, especially, our politics. Until recently, customization drew relatively little attention. Cambridge Analytica changed that. Since 2016, a consensus has decried Speech Customization as causing political manipulation, disunity, and destabilization. On this account, machine learning, social networks and Big Data make political Customized Speech a threat we constitutionally can, and normatively should, curtail.

That view is mistaken. In this Article, I offer the first systematic analysis of Customized Speech and the First Amendment. I reach two provocative results: Doctrinally, the First Amendment robustly protects Speech Customization. And normatively, even amidst Big Data, this protection can help society and democracy.

Doctrinally, the use of audience information to customize speech is, itself, core protected speech. Further, audience-information collection, while less protected, may still only be regulated by carefully drawn, content-neutral, generally applicable laws. And unless and until the state affirmatively enacts such laws (as, overwhelmingly, it has not), it may not curtail speakers’ otherwise-lawful use of such information in political Speech Customization.

What does this mean for democratic government? Today, Customized Speech raises fears about democratic discourse, hyper-partisan factions, and citizen autonomy. But these are less daunting than the consensus suggests, and are offset by key benefits: modern Customized Speech activates the apathetic, empowers the marginalized, and checks government overreach. Accordingly, many current proposals to restrict such Customized Speech — from disclosure requirements to outright bans — are neither constitutionally viable nor normatively required.