Holcomb on The Moral Case for Adopting a U.S. Right to be Forgotten

Lindsay Holcomb (University of Pennsylvania Law School) has posted “The Moral Case for Adopting a U.S. Right to be Forgotten” (4 J. L. & Tech. at Tx. 151 (2021)) on SSRN. Here is the abstract:

This article challenges the notion that the right to be forgotten stands in direct opposition to the American values of free expression and the public’s right to know, arguing that such a right has roots in American moral culture as well as in the jurisprudence of the right to rehabilitation, and it ultimately suggests adopting a form of the right to be forgotten in the U.S. Part I reviews the origins of the European right to be forgotten, focusing on the Google Spain decision and relevant articles of the General Data Protection Regulation (GDPR). Part II argues that the U.S. has long supported a rehabilitative notion of privacy, which provides sturdy ground on which the right to be forgotten could stand in the U.S. Part III addresses First Amendment criticisms of the right. Part IV assesses how the right to be forgotten might be operationalized in the U.S. This article concludes with a discussion of the moral benefits of a right to be forgotten, particularly in how a more forgiving society can in fact increase speech and democratic participation.

Linford & Nelson on Trademark Fame and Corpus Linguistics

Kyra Nelson (J. Reuben Clark Law School) & Jake Linford (Florida State University – College of Law) have posted “Trademark Fame and Corpus Linguistics” on SSRN. Here is the abstract:

Trademark law recognizes and embraces an inherent homonymy in commercial communication: The same word can mean different things in different commercial contexts. Thus, legal protection might extend to two or more owners who use the same symbol (like Delta) to indicate different sources of disparate goods or services (airlines, faucets). Generally, only those uses that threaten to confuse consumers – the use of a similar symbol on identical or related goods – are subject to legal sanction.

There are exceptions to this homonymous structure of trademark law. The law extends special protection to famous trademarks, not only against confusing use, but also against dilution: non-confusing use that blurs or tarnishes the distinctiveness of the famous mark. The result of protection against blurring is that the law treats the famous mark as if it were legally monosemous, i.e., as if the sole proper use of the term in the commercial context is to designate goods and services from the famous mark’s owner.

Protection against dilution extends only to famous marks, but courts and scholars apply differing standards for assessing fame. Nonetheless, the trend over time has been to treat fame as a threshold requiring both sufficient renown – the famous mark must be a household name – and relatively singular use approaching if not quite reaching monosemy.

This article argues that corpus linguistic analysis can provide evidence of whether a mark is sufficiently prominent and singular to qualify for anti-dilution protection. Corpus linguistics detects language patterns and meaning from analyzing actual language use. This article uses data from three large, publicly accessible databases (corpora) to investigate whether litigated trademarks are both prominent and unique. Courts and parties can consider frequency evidence to establish or refute prominence, and contextual evidence like concordance and collocation to establish relative singularity.

Corpus evidence has some advantages over standard methods of assessing fame. It is much cheaper to generate than survey evidence and may be equally probative. Corpus analysis can help right-size dilution litigation: A litigant could estimate the prominence and singularity of an allegedly famous mark using corpus evidence prior to discovery and better predict whether the mark should qualify for anti-dilution protection. Judges should be able to rely on the results of corpus analysis with reasonable confidence. Additionally, corpus evidence can show use of a mark over time, providing courts with tools to assess when a mark first became famous, a question that a survey generated for litigation cannot readily answer.
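The basic moves the abstract describes, counting how often a mark appears in a corpus and examining the words it co-occurs with, are easy to illustrate. The Python sketch below is a hypothetical toy example, not the authors’ actual data or methodology: it uses a tiny hard-coded mini-corpus to compute a raw frequency for a candidate mark and its nearby collocates, the kind of evidence said to bear on prominence and relative singularity.

```python
# Toy illustration of corpus-based fame evidence: frequency and collocation.
# The "corpus" here is a small hard-coded sample; real analysis would query a
# large corpus (e.g., the publicly accessible corpora the article discusses).
from collections import Counter
import re

corpus = [
    "I booked a delta flight to Atlanta and the delta lounge was crowded.",
    "The plumber replaced the delta faucet in the kitchen.",
    "Sediment builds up where the river delta meets the sea.",
    "Delta announced new routes and delta passengers earned miles.",
]

def tokenize(text: str) -> list[str]:
    """Lowercase the text and keep only alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

tokens = [tok for sentence in corpus for tok in tokenize(sentence)]

# Frequency evidence: how often does the candidate mark appear, normalized
# per million tokens so results are comparable across corpora of any size?
mark = "delta"
freq = tokens.count(mark)
rate_per_million = freq / len(tokens) * 1_000_000
print(f"'{mark}' appears {freq} times ({rate_per_million:.0f} per million tokens)")

# Collocation evidence: which words co-occur within a +/- 3 token window?
window = 3
collocates = Counter()
for i, tok in enumerate(tokens):
    if tok == mark:
        neighbors = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
        collocates.update(neighbors)

# Diverse collocates (flight, faucet, river) suggest homonymy rather than
# the relative singularity that anti-dilution protection presupposes.
print(collocates.most_common(5))
```

In this toy run, the collocates for “delta” span airlines, faucets, and rivers, which is the pattern of homonymy that would cut against treating the mark as monosemous.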

Recommended.

Coglianese & Lampmann on Contracting for Algorithmic Accountability

Cary Coglianese (University of Pennsylvania Law School) and Erik Lampmann (University of Pennsylvania Law School) have posted “Contracting for Algorithmic Accountability” (Administrative Law Review Accord, vol. 6, p. 175, 2021) on SSRN. Here is the abstract:

As local, state, and federal governments increase their reliance on artificial intelligence (AI) decision-making tools designed and operated by private contractors, so too do public concerns increase over the accountability and transparency of such AI tools. But current calls to respond to these concerns by banning governments from using AI will only deny society the benefits that prudent use of such technology can provide. In this Article, we argue that government agencies should pursue a more nuanced and effective approach to governing the governmental use of AI by structuring their procurement contracts for AI tools and services in ways that promote responsible use of algorithms. By contracting for algorithmic accountability, government agencies can act immediately, without any need for new legislation, to reassure the public that governmental use of machine-learning algorithms will be deployed responsibly. Furthermore, unlike with the adoption of legislation, a contracting approach to AI governance can be tailored to meet the needs of specific agencies and particular uses. Contracting can also provide a means for government to foster improved deployment of AI in the private sector, as vendors that serve government agencies may shift their practices more generally to foster responsible AI practices with their private sector clients. As a result, we argue that government procurement officers and agency officials should consider several key governance issues in their contract negotiations with AI vendors. Perhaps the most fundamental issue relates to vendors’ claims to trade secret protection—an issue that we show can be readily addressed during the procurement process. Government contracts can be designed to balance legitimate protection of proprietary information with the vital public need for transparency about the design and operation of algorithmic systems used by government agencies. We further urge consideration in government contracting of other key governance issues, including data privacy and security, the use of algorithmic impact statements or audits, and the role for public participation in the development of AI systems. In an era of increasing governmental reliance on artificial intelligence, public contracting can serve as an important and tractable governance strategy to promote the responsible use of algorithmic tools.

McCarl on The Limits of Law and AI

Ryan McCarl (UCLA School of Law) has posted “The Limits of Law and AI” (University of Cincinnati Law Review, 2022) on SSRN. Here is the abstract:

For thirty years, scholars in the field of law and artificial intelligence (AI) have explored the extent to which tasks performed by lawyers and judges can be assisted by computers. This article describes the medium-term outlook for AI technologies and explains the obstacles to making legal work computable. I argue that while AI-based software is likely to improve legal research and support human decisionmaking, it is unlikely to replace traditional legal work or otherwise transform the practice of law.

Jessop on Supervising the Tech Giants

Julian Jessop (Institute of Economic Affairs) has posted “Supervising the Tech Giants” (Institute of Economic Affairs Current Controversies No. 56) on SSRN. Here is the abstract:

The rise of the ‘tech giants’ is, of course, a significant commercial threat to more traditional media, but it also raises some potentially important issues of public policy. These companies have variously been accused of facilitating the spread of ‘fake news’ and extremist material, dodging taxes, and exploiting their market dominance. In reality, ‘fake news’ is nothing new, nor is it as influential as many assume. Most people rely on multiple sources for information. Television and newspapers are still trusted far more than online platforms. The market is also coming up with its own checks and balances, such as fact-checking services. The internet may have provided more channels for ‘fake news’, but new technology has also made it easier to find the truth. The UK newspaper industry itself shows how self-regulation can be effective, especially when supported by the backstops of existing criminal and civil law. The internet is not the regulation-free zone that some suppose. But, in any event, the tech companies have a strong economic interest in protecting their brands and being responsive to the demands of their customers and advertisers. It may be worth considering some ways in which these pressures could be strengthened, such as obliging new platforms to publish a code of practice like those adopted by newspapers. However, most already do, and the rest will surely follow. The taxation of tech giants raises many issues relevant to any multinational company. It seems reasonable to expect firms to explain clearly what tax they pay. But an additional levy on the activities of tech companies would be inconsistent with the general principles of fair and efficient taxation.

Robertson & Hoffman on Professional Speech at Scale

Cassandra Burke Robertson (Case Western Reserve University School of Law) and Sharona Hoffman (Case Western Reserve University School of Law) have posted “Professional Speech at Scale” (UC Davis Law Review, Forthcoming) on SSRN. Here is the abstract:

Regulatory actions affecting professional speech are facing new challenges from all sides. On one side, the Supreme Court has grown increasingly protective of professionals’ free speech rights, and it has subjected regulations affecting that speech to heightened levels of scrutiny that call into question traditional regulatory practices in both law and medicine. On the other side, technological developments, including the growth of massive digital platforms and the introduction of artificial intelligence programs, have created brand new problems of regulatory scale. Professional speech is now able to reach a wide audience faster than ever before, creating risks that misinformation will cause public harm long before regulatory processes can gear up to address it.

This article examines how these two trends interact in the fields of health-care regulation and legal practice. It looks at how these forces work together both to create new regulatory problems and to shape the potential government responses to those problems. It analyzes the Supreme Court’s developing caselaw on professional speech and predicts how the Court’s jurisprudence is likely to shape current legal challenges in law and medicine. The Article further examines the regulatory challenges posed by the change in scale generated by massive digital platforms and the introduction of artificial intelligence. It concludes by recommending ways in which government regulators can meet the new challenges posed by technological development without infringing on protected speech. The crux of our proposal is that incremental change in the traditional state regulatory process is insufficient to meet the challenges posed by changes in technological scale. Instead, it is time to ask bigger questions about the underlying goals and first principles of professional regulation.

Tosoni on The Right To Object to Automated Individual Decisions

Luca Tosoni (University of Oslo) has posted “The Right To Object to Automated Individual Decisions: Resolving the Ambiguity of Article 22(1) of the General Data Protection Regulation” (11 International Data Privacy Law (2021)) on SSRN. Here is the abstract:

This article provides a critical analysis of Article 22(1) of the European Union’s General Data Protection Regulation (‘GDPR’). In particular, the article examines whether, as a matter of lex lata, the enigmatic ‘right not to be subject to a decision based solely on automated processing’ provided for in Article 22(1) should be interpreted as a general prohibition or as a right to be exercised by the data subject. These two possible interpretations offer very different protection to the interests of data subjects and controllers: if the basic rule in Article 22(1) were interpreted as a prohibition, controllers would essentially not be allowed to make individual decisions based solely on automated processing unless one of the specific exceptions in Article 22(2) applies; conversely, if the rule in Article 22(1) were interpreted as a right to be actively exercised, the use of automated individual decisions would normally be restricted under the GDPR only where the data subject has expressly objected to it. Thus, resolving the ambiguity of Article 22(1) is critical to understanding the scope left for automated decision-making under EU data protection law. Based on a textual, contextual, systematic and teleological interpretation of Article 22(1), the article concludes that the provision is better characterized as conferring upon data subjects a right that they may exercise at their discretion, rather than establishing a general ban on individual decisions based solely on automated processing.

Colonna on Legal Implications of Using AI as an Exam Invigilator

Liane Colonna (Stockholm University – Faculty of Law) has posted “Legal Implications of Using AI as an Exam Invigilator” on SSRN. Here is the abstract:

This article considers the legal implications of using artificial intelligence (AI) for remote proctoring of online exams, in particular to validate students’ identities and to flag suspicious activity during the exam so as to discourage academic misconduct like plagiarism, unauthorized collaboration and the sharing of test questions or answers. The emphasis is on AI-based facial recognition technologies (FRT) that can be used to authenticate remote users during the online exam as well as to identify dubious behavior throughout the examination. The central question explored is whether these systems are necessary and lawful under European human rights law.

The first part of the paper explores the use of AI-based remote proctoring technologies in higher education from both the institutional and the student perspectives. It emphasizes how universities are shifting from a reliance on systems that include human oversight, like proctors overseeing the examinations from remote locations, towards more algorithmically driven practices that rely on processing biometric data. The second part of the paper examines how the use of AI-based remote proctoring technologies in higher education impacts the fundamental rights of students, focusing on the rights to privacy, data protection, and non-discrimination. Next, it provides a brief overview of the legal frameworks that exist to limit the use of this technology. Finally, the paper closely examines the issue of the legality of processing in an effort to unpack and understand the complex legal and ethical issues that arise in this context.

Recommended.

Shackleton on Robocalypse Now? Why we Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence

J. R. Shackleton (Institute of Economic Affairs (IEA), Westminster Business School, University of Buckingham) has posted “Robocalypse Now? Why we Shouldn’t Panic about Automation, Algorithms and Artificial Intelligence” (Institute of Economic Affairs Current Controversies No. 61) on SSRN. Here is the abstract:

It is claimed that robots, algorithms and artificial intelligence are going to destroy jobs on an unprecedented scale. These developments, unlike past bouts of technical change, threaten rapidly to affect even highly-skilled work and lead to mass unemployment and/or dramatic falls in wages and living standards, while accentuating inequality. As a result, we are threatened with the ‘end of work’, and should introduce radical new policies such as a robot tax and a universal basic income. However the claims being made of massive job loss are based on highly contentious technological assumptions and are contested by economists who point to flaws in the methodology. In any case, ‘technological determinism’ ignores the engineering, economic, social and regulatory barriers to adoption of many theoretically possible innovations. And even successful innovations are likely to take longer to materialise than optimists hope and pessimists fear. Moreover, history strongly suggests that jobs destroyed by technical change will be replaced by new jobs complementary to these technologies – or else in unrelated areas as spending power is released by falling prices. Current evidence on new types of job opportunity supports this suggestion. The UK labour market is currently in a healthy state and there is little evidence that technology is having a strongly negative effect on total employment. The problem at the moment may be a shortage of key types of labour rather than a shortage of work. The proposal for a robot tax is ill-judged. Defining what is a robot is next to impossible, and concerns over slow productivity growth anyway suggest we should be investing more in automation rather than less. Even if a workable robot tax could be devised, it would essentially duplicate the effects, and problems, of corporation tax. Universal basic income is a concept with a long history. Despite its appeal, it would be costly to introduce, could have negative effects on work incentives, and would give governments dangerous powers. Politicians already seem tempted to move in the direction of these untested policies. They would be foolish to do so. If technological change were to create major problems in the future, there are less problematic policies available to mitigate its effects – such as reducing taxes on employment income, or substantially deregulating the labour market.

Hellman on Personal Responsibility in an Unjust World

Deborah Hellman (University of Virginia School of Law) has posted “Personal Responsibility in an Unjust World: A Reply to Eidelson” (The American Journal of Law and Equality (Forthcoming)) on SSRN. Here is the abstract:

In this reply to Benjamin Eidelson’s Patterned Inequality, Compounding Injustice and Algorithmic Prediction, I argue that moral unease about algorithmic prediction is not fully explained by the importance of dismantling what Eidelson terms “patterned inequality.” Eidelson is surely correct that patterns of inequality that track socially salient traits like race are harmful and that this harm provides an important reason not to entrench these structures of disadvantage. We disagree, however, about whether this account fully explains the moral unease about algorithmic prediction. In his piece, Eidelson challenges my claim that individual actors also have reason to avoid compounding prior injustice. In this reply, I answer his challenges.