Tippett, Alexander, and Branting on Predicting Judicial Decisions from Legal Briefs

Elizabeth Chika Tippett (University of Oregon School of Law), Charlotte Alexander (Georgia State University – Institute for Insight; Georgia State University College of Law), and L. Karl Branting (University of Wyoming) have posted “Does Lawyering Matter? Predicting Judicial Decisions from Legal Briefs, and What That Means for Access to Justice” (Texas Law Review, Forthcoming) to SSRN. Here is the abstract:

This study uses linguistic analysis and machine learning techniques to predict summary judgment outcomes from the text of the parties’ briefs. We test the predictive power of textual characteristics, stylistic features, and citation usage, and find that citations to precedent – their frequency, their patterns, and their popularity in other briefs – are the most predictive of a summary judgment win. This suggests that good lawyering may boil down to good legal research. However, good legal research is expensive, and the primacy of citations in our models raises concerns about access to justice. Here, our citation-based models also suggest promising solutions. We propose a freely available, computationally-enabled citation identification and brief bank tool, which would extend to all litigants the benefits of good lawyering and open up access to justice.

Recommended.
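
For readers who want a concrete sense of what a citation-based predictive model looks like, here is a minimal sketch in Python. It is an illustration only: the reporter-citation regex, the toy briefs, and the two frequency features are hypothetical stand-ins, and the authors' actual models (and their textual and stylistic features) are far richer.

    # A toy citation-feature classifier for summary judgment outcomes.
    # Everything here (regex, data, feature choice) is an illustrative
    # assumption, not the authors' actual pipeline.
    import re

    from sklearn.linear_model import LogisticRegression

    # Crude pattern for federal reporter citations such as "477 U.S. 242".
    CITATION = re.compile(r"\d+\s+(?:U\.S\.|F\.(?:2d|3d)|S\.\s*Ct\.)\s+\d+")

    def citation_features(brief_text):
        """Two frequency features: raw citation count, citations per 100 words."""
        cites = CITATION.findall(brief_text)
        words = max(len(brief_text.split()), 1)
        return [len(cites), 100.0 * len(cites) / words]

    # Toy training data: (brief text, 1 = summary judgment win, 0 = loss).
    briefs = [
        ("See Anderson v. Liberty Lobby, 477 U.S. 242; Celotex, 477 U.S. 317.", 1),
        ("The motion should be denied for the reasons stated above.", 0),
    ]
    X = [citation_features(text) for text, _ in briefs]
    y = [label for _, label in briefs]

    model = LogisticRegression().fit(X, y)
    print(model.predict([citation_features("Cf. 477 U.S. 242; 106 S. Ct. 2505.")]))

Even in this toy form, the abstract's point comes through: the predictive signal lives in the citations, and assembling good citations is exactly the costly legal research that raises the access-to-justice concern.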

Hirsch et al. on Business Data Ethics: Emerging Trends in the Governance of Advanced Analytics and AI

Dennis D. Hirsch (Ohio State University (OSU) – Michael E. Moritz College of Law; Capital University Law School) and others have posted “Business Data Ethics: Emerging Trends in the Governance of Advanced Analytics and AI” on SSRN. Here is the abstract:

Advanced analytics and artificial intelligence are powerful technologies that, along with their benefits, create new threats to privacy, equality, fairness and transparency. Existing law does not yet protect sufficiently against these threats. This has led some organizations to pursue what they call “data ethics” or “AI ethics” in an attempt to bring advanced analytics and AI more into line with societal values and so legitimate their growing use of these technologies.

To date, much of the scholarship on data ethics has sought either to define the ethical principles to which organizations should aspire, or to map out the laws and regulations needed to push organizations towards these ethical goals. While these two lines of inquiry are important, the literature is missing a critical third dimension: empirical work on how organizations are actually governing the threats that their use of advanced analytics and AI can generate. Good regulatory design requires such knowledge. Yet, while there have been important studies of how organizations manage privacy “on the ground” (Bamberger and Mulligan 2015), there has been little such work on the governance of advanced analytics and AI.

This report begins to fill this gap. Focusing on private sector organizations, the authors interviewed corporate privacy managers deemed by their peers to be leaders in the governance of advanced analytics and AI, as well as the lawyers, consultants and thought leaders who advise them on this topic. They also surveyed a wider range of privacy managers. The study sought to answer three fundamental questions about business data ethics management: (1) How do leading companies conceptualize the threats that their use of advanced analytics and AI poses for individuals, groups and the broader society? (2) If it is true that the law does not yet require companies to reduce these risks, then why are they pursuing data ethics? and (3) How are companies pursuing data ethics? What substantive benchmarks, management processes and technological solutions do they use towards this end?

The authors previously shared their preliminary findings on SSRN. This final report provides a much fuller picture. The report should provide legislators and policymakers with an empirical foundation for their efforts to regulate advanced analytics and AI, while giving interested organizations ideas on how to improve their data ethics management.

Chesterman on Weapons of Mass Disruption: Artificial Intelligence and International Law

Simon Chesterman (National University of Singapore (NUS) – Faculty of Law) has posted “Weapons of Mass Disruption: Artificial Intelligence and International Law” (Cambridge International Law Journal (forthcoming)) on SSRN. Here is the abstract:

The answers each political community finds to the law reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction — indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, though different national approaches to regulation will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires active involvement of states. To coordinate those activities and enforce global ‘red lines’, this paper posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy, while deterring or containing its weaponization and other harmful effects.

Bambauer on Cybersecurity for Idiots

Derek E. Bambauer (University of Arizona – James E. Rogers College of Law) has posted “Cybersecurity for Idiots” (106 Minnesota Law Review Headnotes __ (2021 Forthcoming)) on SSRN. Here is the abstract:

Cybersecurity remains a critical issue facing regulators, particularly with the advent of the Internet of Things. General-purpose security regulators such as the Federal Trade Commission continually struggle with limited resources and information in their oversight. This Essay contends that a new approach to cybersecurity modeled on the negligence per se doctrine in tort law will significantly improve cybersecurity and reduce regulatory burdens. It introduces a taxonomy of regulators based upon the scope of their oversight and the pace of technological change in industries within their purview. Then, the Essay describes negligence per se for cybersecurity, which establishes a floor for security precautions that draws upon extant security standards. By focusing on the worst offenders, this framework improves notice to regulated entities, reduces information asymmetries, and traverses objections from legal scholars about the cost and efficacy of cybersecurity mandates. The Essay concludes by offering an emerging case study for its approach: regulation of quasi-medical devices by the Food and Drug Administration. As consumer devices increasingly offer functionality for both medical and non-medical purposes, the FDA will partly transition to a general-purpose regulator of information technology, and the negligence per se model can help the agency balance security precautions with promoting innovation.