Martínez, Mollica & Gibson on How Poor Writing, not Specialized Concepts, Drives Processing Difficulty in Legal Language

Eric Martínez (MIT), Frank Mollica (Edinburgh), and Edward Gibson (MIT) have posted “Poor Writing, not Specialized Concepts, Drives Processing Difficulty in Legal Language” (Cognition 2022) on SSRN. Here is the abstract:

Despite their ever-increasing presence in everyday life, contracts remain notoriously inaccessible to laypeople. Why? Here, a corpus analysis (n≈225 million words) revealed that, relative to nine other baseline genres of written and spoken English, contracts contain startlingly high proportions of certain difficult-to-process features: low-frequency jargon, center-embedded clauses (leading to long-distance syntactic dependencies), passive voice structures, and non-standard capitalization. An experiment (N=184) further revealed that excerpts containing these features were recalled and comprehended at lower rates than excerpts without these features, even for experienced readers, and that center-embedded clauses inhibited recall more so than other features. These findings (a) undermine the specialized concepts account of legal theory, according to which law is a system built upon expert knowledge of technical concepts; (b) suggest such processing difficulties result largely from working-memory limitations imposed by long-distance syntactic dependencies (i.e., poor writing) as opposed to a mere lack of specialized legal knowledge; and (c) suggest that editing out problematic features of legal texts would be tractable and beneficial for society at large.
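For readers curious what corpus measures of this kind look like in practice, here is a minimal, illustrative Python sketch. It is not the authors' pipeline: it approximates two of the features named in the abstract (passive voice and non-standard all-caps capitalization) with naive regular expressions, and computes a crude low-frequency-word rate against a caller-supplied list of common words. The tiny word list below is a hypothetical stand-in for a real frequency lexicon, and center-embedding detection, which requires a syntactic parser, is omitted.

    import re
    from typing import Set

    # Naive passive-voice heuristic: a form of "be" followed by a word
    # ending in -ed/-en. It both over- and under-counts (e.g., it misses
    # irregular participles such as "held").
    PASSIVE_RE = re.compile(
        r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b", re.IGNORECASE
    )

    # Non-standard capitalization: runs of two or more all-caps words,
    # as in contract boilerplate like "THE PARTIES HERETO AGREE".
    ALLCAPS_RE = re.compile(r"\b[A-Z]{2,}(?:\s+[A-Z]{2,})+\b")

    WORD_RE = re.compile(r"[A-Za-z]+")

    def feature_rates(text: str, common_words: Set[str]) -> dict:
        """Per-1,000-word rates for three surface features of the text."""
        words = WORD_RE.findall(text)
        n = len(words) or 1
        low_freq = sum(1 for w in words if w.lower() not in common_words)
        return {
            "passive_per_1k": 1000 * len(PASSIVE_RE.findall(text)) / n,
            "allcaps_runs_per_1k": 1000 * len(ALLCAPS_RE.findall(text)) / n,
            "low_freq_words_per_1k": 1000 * low_freq / n,
        }

    if __name__ == "__main__":
        # Hypothetical stand-in for a real frequency list (e.g., top-5,000 words).
        common = {"the", "of", "and", "to", "a", "in", "is", "was", "by",
                  "that", "be", "any", "shall", "given"}
        sample = ("THE PARTIES HERETO AGREE that the obligor shall be indemnified "
                  "and that notice was given by the aforementioned party.")
        print(feature_rates(sample, common))

On a real corpus one would substitute a genuine frequency lexicon and a parser-based measure of center embedding; the point is only that the features the authors count are concrete, surface-detectable properties of the text.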

Tucker on Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder

Emily Tucker (Center on Privacy & Technology at Georgetown Law) has posted “Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder” (New York University Review of Law & Social Change, Vol. 46, No. 1, 2022) on SSRN. Here is the abstract:

In the many debates about whether and how algorithmic technologies should be used in law enforcement, all sides seem to share one assumption: that, in the struggle for justice and equity in our systems of governance, the subjectivity of human judgment is something to be overcome. While there is significant disagreement about the extent to which, for example, a machine-generated risk assessment might ever be unpolluted by the problematic biases of its human creators and users, no one in the scholarly literature has so far suggested that if such a thing were achievable, it would be undesirable.

This essay argues that it only becomes possible for policing to be something other than mere brutality when the activities of policing are themselves a way of deliberating about what policing is and should be, and that algorithms are definitionally opposed to such deliberation. An algorithmic process, whether carried out by a human brain or by a computer, can only operate at all if the terms that govern its operations have fixed definitions. Fixed definitions may be useful or necessary for human endeavors that can be reduced to techniques of measurement and calculation, like getting bread to rise or designing a sturdy foundation for a building. But the fixed definitions that underlie policing algorithms (what counts as transgression, which transgressions warrant state intervention, etc.) relate to an ancient, fundamental, and enduring political question, one that cannot be expressed by equation or recipe: the question of justice. The question of justice is not one to which we can ever give a final answer, but one that must be the subject of ongoing ethical deliberation within human communities.

Recommended.

Lee on Investor Protection on Crowdfunding Platforms

Joseph Lee (School of Law, University of Manchester) has posted “Investor Protection on Crowdfunding Platforms” (The EU Crowdfunding Regulation, OUP) on SSRN. Here is the abstract:

This paper discusses the protection of investors on crowdfunding platforms under the Crowdfunding Regulation. Although many provisions in the regulation protect investors, this paper concentrates specifically on those included under the heading of Chapter IV ‘Investor protection’ of the Crowdfunding Regulation.

This paper focuses on how investor protection can contribute to the objectives of crowdfunding and, in particular, how the provisions of the Crowdfunding Regulation serve this purpose. To this end, Section 2 discusses the investor-focused objectives of crowdfunding and the role that technology can play in realising these objectives. Section 3 considers the meaning of investor protection within the scope of the Crowdfunding Regulation and identifies areas where the current regime might be extended in the future. Section 4 discusses the categorisation of investors and its relevance for investor protection. Major provisions pertinent to investor protection are subsequently discussed in Sections 5 to 9, including the information to be provided to clients, default rate disclosure, the entry knowledge test and the simulation of ability to bear loss, the pre-contractual reflection period, and the key investment information sheet. These sections also contain reflections that place the topics discussed in a broader context. Section 10 concludes.

Alarie & Griffin on Using Machine Learning to Crack the Tax Code

Benjamin Alarie (University of Toronto – Faculty of Law) and Bettina Xue Griffin (Blue J Legal) have posted “Using Machine Learning to Crack the Tax Code” (Tax Notes Federal, January 31, 2022, p. 661) on SSRN. Here is the abstract:

In this article, we provide general observations about how tax practitioners are beginning to learn how to leverage the insights of machine learning to “crack the tax code.” We also examine how tax practitioners are using machine learning to quantify risks for their clients and ensure that tax advice can properly withstand scrutiny from the IRS and the courts. The goal is to guide tax experts in their tax planning and to help them devise the most effective ways to resolve tax disputes, leveraging new tools and technologies.

Tiamiyu on The Impending Battle for the Soul of Online Dispute Resolution

Oladeji Tiamiyu (Harvard Law School) has posted “The Impending Battle for the Soul of Online Dispute Resolution” (Cardozo J. Conflict Resol. 21) on SSRN. Here is the abstract:

Legal professionals and disputants are increasingly recognizing the value of online dispute resolution (“ODR”). While the coronavirus pandemic forced many to resolve disputes exclusively online, potentially resulting in long-term changed preferences for different stakeholders, the pre-pandemic trend has involved a dramatic increase in technological tools that can be used for resolving disputes, particularly with facilitative technologies, artificial intelligence, and blockchains. Though this has the added benefit of increasing optionality in the dispute resolution process, these novel technologies come with their own limitations and also raise challenging ethical considerations for how ODR should be designed and implemented. In considering whether the pandemic’s tectonic shifts will have a permanent impact, this piece has important implications for the future of the legal profession, as greater reliance on ODR technologies may change what it means to be a judge, lawyer, and disputant. The impending battle for the soul of ODR raises important considerations for fairness, access to justice, and effective dispute resolution—principles that will continue to be ever-present in the field.

Moerland & Kafrouni on Online Shopping with Artificial Intelligence: What Role to play for Trade Marks?

Anke Moerland (Maastricht University – Department of International and European Law) and Christie Kafrouni have posted “Online Shopping with Artificial Intelligence: What Role to play for Trade Marks?” on SSRN. Here is the abstract:

The debate on how artificial intelligence (AI) influences intellectual property protection has so far focussed mainly on its effects on patent and copyright protection. Little attention has been paid to the effects of AI technology on trade mark law. In particular, what has not yet been sufficiently investigated is whether trade marks still fulfil their role in a world in which consumers are assisted by AI technology when purchasing in the online marketplace. To what extent do we still need trade marks to avoid consumer confusion? Or do the other functions of trade marks justify their continued protection? Given that intellectual property rights have a market-distorting effect, it is in society’s interest to question whether trade mark protection is still justified.

Roberts et al. on Governing Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical Outcomes

Huw Roberts (University of Oxford – Oxford Internet Institute) et al. have posted “Governing Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical Outcomes” on SSRN. Here is the abstract:

In this article, we compare the artificial intelligence (AI) strategies of China and the European Union (EU), assessing the key similarities and differences regarding the high-level aims of each government’s strategy, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterise China’s strategy by its current primary focus on fostering innovation, and the EU’s by its focus on promoting ethical outcomes. Building on this comparative analysis, we consider where China’s AI strategy could learn from, and improve upon, the EU’s ethics-first approach to AI governance. We outline three recommendations: (i) agree within government as to where responsibility for the ethical governance of AI should lie, (ii) explicate high-level principles in an ethical manner, and (iii) define and regulate high-risk applications of AI. Adopting these recommendations would enable the Chinese government better to fulfil its stated aim of governing AI ethically.