Daniel J. Solove (George Washington University Law School) and Danielle Keats Citron (University of Virginia School of Law) have posted “Standing and Privacy Harms: A Critique of TransUnion v. Ramirez”
(101 Boston University Law Review Online 62 (2021)). Here is the abstract:
Through the standing doctrine, the U.S. Supreme Court has taken a new step toward severely limiting the effective enforcement of privacy laws. The recent Supreme Court decision, TransUnion LLC v. Ramirez (U.S. June 25, 2021), revisits the issue of standing and privacy harms under the Fair Credit Reporting Act (FCRA) that began with Spokeo v. Robins, 136 S. Ct. 1540 (2016). In TransUnion, a group of plaintiffs sued TransUnion under FCRA for falsely labeling them as potential terrorists in their credit reports. The Court concluded that only some plaintiffs had standing – those whose credit reports were disseminated. Plaintiffs whose credit reports weren’t disseminated lacked a “concrete” injury and accordingly lacked standing – even though Congress explicitly granted them a private right of action to sue for violations like this and even though a jury had found that TransUnion was at fault.
In this essay, Professors Daniel J. Solove and Danielle Keats Citron engage in an extensive critique of the TransUnion case. They contend that existing standing doctrine incorrectly requires concrete harm; for most of U.S. history, standing required only an infringement of rights. Moreover, when it does assess harm, the Court relies on a crabbed and inadequate understanding of privacy harms. Additionally, allowing courts to nullify private rights of action in federal privacy laws is a usurpation of legislative power that upends the compromises and balances that Congress establishes in laws. Private rights of action are essential enforcement mechanisms.
Akshaya Kamalnath (ANU College of Law) and Umakanth Varottil (NUS Law; European Corporate Governance Institute) have posted “A Disclosure-Based Approach to Regulating AI in Corporate Governance” on SSRN. Here is the abstract:
The use of technology, including artificial intelligence (AI), in corporate governance has been expanding: corporations have begun to use AI systems for governance functions such as effecting board appointments, enabling board monitoring by processing large amounts of data, and even supporting whistleblowing, all of which address the agency problems present in modern corporations. On the other hand, the use of AI in corporate governance also presents significant risks. These include privacy and security issues, the ‘black box problem’ (the lack of transparency in AI decision-making), and the undue power conferred on those who control decisions about the deployment of specific AI technologies.
In this paper, we explore the possibility of deploying a disclosure-based approach as a regulatory tool to address the risks emanating from the use of AI in corporate governance. Specifically, we examine whether existing securities laws mandate corporate boards to disclose whether they rely on AI in their decision-making process. Not only could such disclosure obligations ensure adequate transparency for the various corporate constituents, but they may also incentivize boards to pay sufficient regard to the limitations or risks of AI in corporate governance. At the same time, such a requirement will not constrain companies from experimenting with the potential uses of AI in corporate governance. Normatively, and given the likelihood of greater use of AI in corporate governance moving forward, we also explore the merits of devising a specific disclosure regime targeting the intersection between AI and corporate governance.
Charlotte Tschider (Loyola University Chicago School of Law) has posted “Legal Opacity: Artificial Intelligence’s Sticky Wicket” (Iowa Law Review, Vol. 106, 2021) on SSRN. Here is the abstract:
Proponents of artificial intelligence (“AI”) transparency have carefully illustrated the many ways in which transparency may be beneficial to prevent safety and unfairness issues, to promote innovation, and to effectively provide recovery or support due process in lawsuits. However, impediments to transparency goals, described as opacity, or the “black-box” nature of AI, present significant issues for promoting these goals.
An undertheorized form of opacity is legal opacity, in which competitive and often discretionary legal choices, coupled with regulatory barriers, create opacity. Although legal opacity is not unique to AI, the combination of technical opacity in AI systems with legal opacity amounts to a nearly insurmountable barrier to transparency goals. Types of legal opacity, including trade secrecy status, contractual provisions that promote confidentiality and data-ownership restrictions, and privacy law, independently and cumulatively make the black box substantially more opaque.
The degree to which legal opacity should be limited or disincentivized depends on the sector and on the transparency goals of specific AI technologies, which may dramatically affect people’s lives or may simply be introduced for convenience. This Response proposes a contextual approach to transparency: legal opacity may be limited where the individual or patient benefits, where data sharing and technology disclosure can be incentivized, or, in protected form, where transparency and explanation are necessary.
Graham Greenleaf (University of New South Wales) has posted “China’s Completed Personal Information Protection Law: Rights Plus Cyber-security” ((2021) 172 Privacy Laws & Business International Report 20-23) on SSRN. Here is the abstract:
On 20 August 2021 the Standing Committee of China’s National People’s Congress (SC-NPC, not the NPC itself) enacted the Personal Information Protection Law (PIPL), the culmination of over a decade of incremental legislative reform. Businesses were required to adjust rapidly to the law’s starting date of 1 November 2021. Since the SC-NPC released the first draft of the PIPL in October 2020, the law has been revised through a succession of drafts. One purpose of this article is to detail these changes. The other purpose is to place the PIPL in the context of China’s near-complete cyber-security laws, of which it is part.
Of the 74 sections in the final Law, half have had non-trivial amendments since the first draft. Some of the amendments are significant, although none involve fundamental changes to the direction of the first draft. Significant amendments include: tightened controls over automated decision-making; an added right of data portability; the possibility of litigation by ‘privacy NGOs’; special obligations on providers of platform services; extra-territoriality that is potentially extra-vague; required local representatives within the PRC; and widened forms of data localisation.
The article also argues that the PIPL’s data export conditions are not ‘just Chinese adequacy’ but something considerably different, which seems to open the way for China to negotiate mutual data export agreements, whether multilateral or bilateral.
The PIPL also plays a role in China’s emerging cyber-security structure, other parts of which include the Cybersecurity Law (CSL) of 2016, the Data Security Law (DSL) of 2021, and other subordinate legislation in China’s broader array of laws.
Yujun Huang (University of Washington; Macau University of Science and Technology) has posted “Comparative Study: How Metaverse Connect with China Laws” on SSRN. Here is the abstract:
This paper is divided into three parts. The first part introduces the background, concept, and development of the “metaverse”. The second part describes the disputes over rights and obligations that may arise in metaverse scenarios, including disputes over civil rights such as identity rights, personality rights, property rights, and intellectual property rights, as well as tort liability, and then explains and analyzes the Chinese laws that may govern these disputes. The third part presents proposals for resolving disputes that may arise in the metaverse.