Bagby & Houser on Artificial Intelligence: The Critical Infrastructures

John W. Bagby (Pennsylvania State University) and Kimberly Houser (University of North Texas) have posted “Artificial Intelligence: The Critical Infrastructures” on SSRN. Here is the abstract:

Artificial Intelligence (AI) innovation is most strongly impacted by AI Critical Infrastructures. These are the conditions, capacities, assets, and inputs that create an environment conducive to the advancement of AI technologies. Close inspection of AI’s generalized architecture reveals a supply chain that implies six AI critical infrastructures. There are at least seven necessary steps or processes contained in a generalized AI architecture. These steps are: (1) occurrences, events, facts, or conditions transpire, enabling the creation of potentially useful data; (2) these data are logged through capture and (increasingly computer- and telecommunications-enabled) initial storage; (3) such data are aggregated, often by numerous data repositories or AI operators; (4) human intelligence performs iterative analysis derived from the deployment of algorithms; (5) initial machine learning occurs; (6) near-constant feedback loops are deployed by many AI applications, adapting the underlying model as new data are incorporated; and (7) based on insights resulting from AI, decision-making occurs, either automatically by computer or by human intervention. Successful Machine Learning requires an ample supply of the six broad AI critical infrastructures: (i) strategic insight/vision, largely expressed as regional and/or national Industrial Policy, which is paramount in impacting all five other AI critical infrastructures; (ii) human intellect, needed to foster a deep bench drawn from a competent AI Workforce; (iii) R&D Investment in AI; (iv) AI Hardware, both Computing Power and Connectivity (ICT); (v) a bountiful and ever-growing supply of Accessible Data; and (vi) market receptivity, as sustainable demand for AI knowledge to monetize successful AI innovation. This article provides an initial foundation for a comparative analysis of the three world economies (regions) seemingly best positioned to make substantial AI advancements.
Predictably, significant differences among the political and cultural drivers in these three regions are likely to impact the needed commitment to AI critical infrastructures: China (Asia) vs. the United States (North America) vs. the European Union (EU). The harsh reality of AI innovation is that delays in commitment and deployment of AI critical infrastructures will relegate the losing region(s) to become, at best, a chronic AI customer rather than a major successful AI supplier.

Coglianese on Moving Toward Personalized Law

Cary Coglianese (University of Pennsylvania Carey Law School) has posted “Moving Toward Personalized Law” (University of Chicago Law Review Online, Forthcoming) on SSRN. Here is the abstract:

Rules operate as a tool of governance by making generalizations, thereby cutting down on government officials’ need to make individual determinations. But because they are generalizations, rules can result in inefficient or perverse outcomes due to their over- and under-inclusiveness. With the aid of advances in machine-learning algorithms, however, it is becoming increasingly possible to imagine governments shifting away from a predominant reliance on general rules and instead moving toward increased reliance on precise individual determinations—or on “personalized law,” to use the term Omri Ben-Shahar and Ariel Porat use in the title of their 2021 book. Among the various technological, organizational, and political hurdles that stand in the way of a personalized system of law, I elaborate three obstacles that I refer to as the challenges of completeness, consensus, and currency. I then offer two solutions—custom and competence—that could bring about public acceptance of personalized law. Although I do not envision that these solutions can be complete ones, in the sense that they cannot prevent all problems with personalized law, a system of personalized law need not be perfect to be normatively appealing. All that personalized law must be is better than the imperfect rule-based system in place today.

Liu on Exporting the First Amendment through Trade: the Global ‘Constitutional Moment’ for Online Platform Liability

Han-Wei Liu (Monash University) has posted “Exporting the First Amendment through Trade: the Global ‘Constitutional Moment’ for Online Platform Liability” (Georgetown Journal of International Law, Vol. 53, No. 1, 2022) on SSRN. Here is the abstract:

In the recent United States-Mexico-Canada Agreement and the U.S.-Japan Digital Trade Agreement, the U.S. adopts a new clause that mirrors Section 230 of the Communications Decency Act of 1996, shielding online intermediaries from liability for third-party content. For policymakers, the seemingly innocuous “Interactive Computer Services” title creates a fundamental challenge in balancing free speech against competing interests in the digital age. This Article argues against globally normalizing this clause through its diffusion in trade deals. Internally, as the Biden Administration has offered a clean slate to discuss reforms to the controversial regime, it is unwise for U.S. trade negotiators to export the same clause in future negotiations. Externally, it is problematic for other partners to accept this clause, born from American values deeply rooted in the First Amendment. Each country is entitled to achieve the fundamental right of free speech through its own economic, social, and political pathways, toward an optimal balance—and rebalance—against other interests. The clause should be dropped from future trade negotiations while policymakers worldwide grapple with the challenges posed by online platforms and reconfigure their regulatory frameworks in the digital era.

Elzweig & Trautman on When Does a Nonfungible Token (NFT) Become a Security?

Brian Elzweig (University of West Florida) and Lawrence J. Trautman (Prairie View A&M University – College of Business) have posted “When Does a Nonfungible Token (NFT) Become a Security?” on SSRN. Here is the abstract:

Non-fungible tokens (NFTs) gained prominence in the news cycle during March 2021, when $69 million was paid in a cryptocurrency known as ether for a single piece of unique digital art titled “Everydays – The First 5000 Days.” Regulation of NFTs is complicated by the fact that the technology encompasses so many varied applications. Therefore, it is the particular use of a given NFT that will determine its appropriate regulatory regime, since it may take the form of a collectible, data associated with a physical item, a financial instrument, or a permanent record associated with a person, such as a marriage license or property deed. Just as in the case of digital art in the form of NFTs, our laws and regulations are in a constant struggle to keep pace with the rapid introduction and diffusion of technological changes. Unlike digital currencies and cryptocurrencies, which are fungible, NFTs are not. The effective regulation of U.S. securities markets has a significant impact on capital formation, job creation, economic security, and growth of both the American and global economies. In recent years, the advent of the Internet has created novel regulatory challenges for the SEC.

The focus of our article is how and when an NFT becomes a security for purposes of U.S. securities law. We proceed in six parts. First, we briefly explain the evolution of the digital world and the emergence of virtual economies within it. Second, we describe blockchain technology and the growth in virtual currencies. Third, we explain NFTs, along with some examples of their various uses. Fourth, we discuss when a nonfungible token is a security. Fifth, we explore SEC interpretations of when a crypto-asset is a regulatable security. And last, we conclude. Given the importance of U.S. securities markets in fostering job creation and global economic growth, we believe this work contributes to the understanding of this new technology and is of considerable interest to securities issuers, investors, and the regulatory community.

Zhao on Initial Coin Offerings and Extraterritorial Application of U.S. Securities Laws

Freya (Fangheng) Zhao (Georgetown University Law Center) has posted “Initial Coin Offerings and Extraterritorial Application of U.S. Securities Laws” (139 Banking L. J. 174 (2022)) on SSRN. Here is the abstract:

Cryptocurrency transactions have grown exponentially since Satoshi Nakamoto published the Bitcoin White Paper on Halloween 2008. As of November 2021, the cryptocurrency market has transformed into an ecosystem with 14,710 tokens and $2.6 trillion market capitalization. The rise of initial coin offerings (“ICOs”) has been a major driver of the boom. Thousands of ICOs have raised billions of dollars since MasterCoin conducted the first reported ICO in 2013. Amid the boom, the Securities and Exchange Commission (“SEC”) has been grappling with how to apply the U.S. securities laws “extraterritorially” to regulate ICOs, which are usually cross-border due to the inherently international nature of their underlying blockchain technology. The increasingly aggressive regulatory actions from the SEC have caused a massive flight of ICOs to offshore havens. In the first quarter of 2019, 86 ICOs specifically excluded U.S. investors.

However, these efforts to avoid the jurisdiction of the U.S. securities laws have mostly turned out to be futile. The SEC is not shy about reaching beyond the U.S. water’s edge to regulate offshore ICOs, as evidenced by its investigation of the DAO and the enforcement actions against PlexCorps, Telegram, and Ripple. Class actions brought by investors against Tezos also suggested that the presumption against extraterritoriality is no panacea to prevent the application of the U.S. securities laws over offshore ICOs.

This article examines the extraterritorial application of the U.S. securities laws to regulate offshore ICOs. The first part of the article offers a brief introduction to the jurisprudence governing the extraterritoriality of the U.S. securities laws. The second part analyzes the application of the U.S. securities laws in proceedings against ICO issuers like PlexCorps, Tezos, Telegram, and Ripple. The third part of this article summarizes the current legal framework of applying the U.S. securities laws to offshore ICOs and concludes that such a patchwork approach is a stopgap solution – expedient but imperfect. This article ends by discussing potential future directions, including congressional legislation, international cooperation, a deferential substituted compliance approach, and adapting ICOs to existing registration exemptions.

Gal on Limiting Algorithmic Cartels

Michael Gal (University of Haifa – Faculty of Law) has posted “Limiting Algorithmic Cartels” (Berkeley Technology Law Journal, 2023 Forthcoming) on SSRN. Here is the abstract:

Recent studies have proven that pricing algorithms can autonomously learn to coordinate prices and set them at supra-competitive levels. The growing use of such algorithms mandates the creation of solutions that limit the negative welfare effects of algorithmic coordination. Unfortunately, to date, no good means exist to limit such conduct. While this challenge has recently prompted scholars from around the world to propose different solutions, many suggestions are inefficient or impractical, and some might even strengthen coordination.

This challenge requires thinking outside the box. Accordingly, this article suggests four (partial) solutions. The first is market-based, and entails using consumer algorithms to counteract at least some of the negative effects of algorithmic coordination. By creating buyer power, such algorithms can also enable offline transactions, eliminating the online transparency that strengthens coordination. The second suggestion is to change merger review so as to limit mergers that are likely to increase algorithmic coordination. The next two are more radical, yet can capture more cases of such conduct. The third involves the introduction of a disruptive algorithm, which would disrupt algorithmic coordination by creating noise on the supply side. The final suggestion entails freezing the price of one competitor, in line with prior proposals by Edlin and others to address predatory pricing. The advantages and risks of each solution are discussed. As antitrust agencies around the world are just starting to experiment with different ways to limit algorithmic coordination, there is no better time to explore how best to achieve this important task.

Scholz on Private Rights of Action in Privacy Law

Lauren Henry Scholz (Florida State University – College of Law) has posted “Private Rights of Action in Privacy Law” (William & Mary Law Review, Forthcoming) on SSRN. Here is the abstract:

Many privacy advocates assume that the key to providing individuals with more privacy protection is strengthening the power government has to directly sanction actors that hurt the privacy interests of citizens. This Article contests the conventional wisdom, arguing that private rights of action are essential for privacy regulation. First, I show how private rights of action make a privacy law regime more effective in general. Private rights of action are the most direct regulatory access point to the private sphere. They leverage private expertise and knowledge, create accountability through discovery, and have expressive value in creating privacy-protective norms. Then, to illustrate the general principle, I provide examples of how private rights of action can improve privacy regulation in a suite of key modern privacy problems. We cannot afford to leave private rights of action out of privacy reform.

Abiri & Huang on The People’s (Republic) Algorithms

Gilad Abiri (Peking University School of Transnational Law; Yale Law School) and Xinyu Huang (Yale Law School) have posted “The People’s (Republic) Algorithms” (Notre Dame Journal of International and Comparative Law, Forthcoming) on SSRN. Here is the abstract:

Recommendation algorithms, such as those behind social media feeds and search engine results, are the prism through which we acquire information in our digital age. Critics ascribe many social and political woes—such as the prevalence of misinformation and political division—to the fact that we view our world through the personalized and atomized prism of recommendation artificial intelligence. The way the great powers of the internet—the United States, the European Union, and China—choose to regulate recommendation algorithms will undoubtedly have a serious impact on our lives and political well-being.

On December 31, 2021, the Cyberspace Administration of China, a governmental internet watchdog, published a bombshell regulation directed at recommendation algorithms. The regulation, which went into effect in March 2022, exponentially increases the control and autonomy of Chinese netizens over their digital lives. At the same time, it will greatly increase the control the Chinese government has over these algorithms. In this timely essay, we analyze the content of the regulation and situate it in its historical and political context.

Martínez, Mollica & Gibson on How Poor Writing, not Specialized Concepts, Drives Processing Difficulty in Legal Language

Eric Martínez (MIT), Frank Mollica (Edinburgh), and Edward Gibson (MIT) have posted “Poor Writing, not Specialized Concepts, Drives Processing Difficulty in Legal Language” (Cognition 2022) on SSRN. Here is the abstract:

Despite their ever-increasing presence in everyday life, contracts remain notoriously inaccessible to laypeople. Why? Here, a corpus analysis (n≈225 million words) revealed that contracts contain startlingly high proportions of certain difficult-to-process features–including low-frequency jargon, center-embedded clauses (leading to long-distance syntactic dependencies), passive voice structures, and non-standard capitalization–relative to nine other baseline genres of written and spoken English. An experiment (N=184) further revealed that excerpts containing these features were recalled and comprehended at lower rates than excerpts without these features, even for experienced readers, and that center-embedded clauses inhibited recall more so than other features. These findings (a) undermine the specialized concepts account of legal theory, according to which law is a system built upon expert knowledge of technical concepts; (b) suggest such processing difficulties result largely from working-memory limitations imposed by long-distance syntactic dependencies (i.e., poor writing) as opposed to a mere lack of specialized legal knowledge; and (c) suggest editing out problematic features of legal texts would be tractable and beneficial for society at large.

Tucker on Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder

Emily Tucker (Center on Privacy & Technology at Georgetown Law) has posted “Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder” (New York University Review of Law & Social Change, Vol. 46, No. 1, 2022) on SSRN. Here is the abstract:

In the many debates about whether and how algorithmic technologies should be used in law enforcement, all sides seem to share one assumption: that, in the struggle for justice and equity in our systems of governance, the subjectivity of human judgment is something to be overcome. While there is significant disagreement about the extent to which, for example, a machine-generated risk assessment might ever be unpolluted by the problematic biases of its human creators and users, no one in the scholarly literature has so far suggested that if such a thing were achievable, it would be undesirable.

This essay argues that it only becomes possible for policing to be something other than mere brutality when the activities of policing are themselves a way of deliberating about what policing is and should be, and that algorithms are definitionally opposed to such deliberation. An algorithmic process, whether carried out by a human brain or by a computer, can only operate at all if the terms that govern its operations have fixed definitions. Fixed definitions may be useful or necessary for human endeavors—like getting bread to rise or designing a sturdy foundation for a building—which can be reduced to techniques of measurement and calculation. But the fixed definitions that underlie policing algorithms (what counts as transgression, which transgressions warrant state intervention, etc.) relate to an ancient, fundamental, and enduring political question, one that cannot be expressed by equation or recipe: the question of justice. The question of justice is not one to which we can ever give a final answer, but one that must be the subject of ongoing ethical deliberation within human communities.