John W. Bagby (Pennsylvania State University) and Kimberly Houser (University of North Texas) have posted “Artificial Intelligence: The Critical Infrastructures” on SSRN. Here is the abstract:
Artificial Intelligence (AI) innovation is most strongly impacted by AI Critical Infrastructures: the conditions, capacities, assets, and inputs that create an environment conducive to the advancement of AI technologies. Close inspection of AI’s generalized architecture reveals a supply chain that implies six AI critical infrastructures. There are at least seven necessary steps or processes contained in a generalized AI architecture. These steps are: (1) occurrences, events, facts, or conditions transpire, enabling the creation of potentially useful data; (2) these data are logged through capture and (increasingly computer- and telecommunications-enabled) initial storage; (3) such data are aggregated, often by numerous data repositories or AI operators; (4) human intelligence performs iterative analysis derived from the deployment of algorithms; (5) initial machine learning occurs; (6) near-constant feedback loops, deployed by many AI applications, adapt the underlying model as new data are incorporated; and (7) based on insights resulting from AI, decision-making occurs, whether automatically by computer or through human intervention. Successful machine learning requires an ample supply of the six broad AI critical infrastructures: (i) strategic insight/vision, largely expressed as regional and/or national Industrial Policy, which is paramount in impacting all five other AI critical infrastructures; (ii) human intellect, needed to foster a deep bench in a competent AI Workforce; (iii) R&D Investment in AI; (iv) AI Hardware, both Computing Power and Connectivity (ICT); (v) a bountiful and ever-growing supply of Accessible Data; and (vi) market receptivity, as sustainable demand for AI knowledge to monetize successful AI innovation. This article provides an initial foundation for a comparative analysis of the three world economies (regions) seemingly best positioned to make substantial AI advancements.
Predictably, significant differences among the political and cultural drivers in these three regions, China (Asia), the United States (North America), and the European Union (EU), are likely to affect the needed commitment to AI critical infrastructures. The harsh reality of AI innovation is that delays in commitment and deployment of AI critical infrastructures will relegate the losing region(s) to become, at best, a chronic AI customer rather than a major successful AI supplier.
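The abstract's seven-step generalized architecture can be sketched as a toy data pipeline; every function, class, and parameter below is a hypothetical illustration invented for this sketch, not anything taken from the article:

```python
# Illustrative sketch of the seven-step generalized AI architecture:
# events -> capture -> aggregation -> analysis -> learning -> feedback -> decision.
# All names and numbers here are hypothetical.
import random

def occur(n):                        # (1) events transpire, producing potentially useful data
    return [random.gauss(10, 2) for _ in range(n)]

def capture(events):                 # (2) data are logged through capture and initial storage
    return list(events)

def aggregate(*stores):              # (3) data are pooled across repositories or operators
    pooled = []
    for s in stores:
        pooled.extend(s)
    return pooled

class RunningMeanModel:
    """(4)-(6) iterative analysis, initial learning, and a feedback loop
    that adapts the model as new data are incorporated."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, batch):
        for x in batch:
            self.n += 1
            self.mean += (x - self.mean) / self.n
    def predict(self):
        return self.mean

def decide(model, threshold):        # (7) decision-making based on AI-derived insight
    return "act" if model.predict() > threshold else "wait"

random.seed(0)
store_a = capture(occur(100))
store_b = capture(occur(100))
model = RunningMeanModel()
model.update(aggregate(store_a, store_b))   # initial learning on aggregated data
model.update(capture(occur(50)))            # feedback loop: new data adapt the model
print(decide(model, threshold=5.0))
```

The point of the toy is structural: each of the article's six critical infrastructures supplies an input to one or more of these stages (data, compute, workforce, and so on).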
Cary Coglianese (University of Pennsylvania Carey Law School) has posted “Moving Toward Personalized Law” (University of Chicago Law Review Online, Forthcoming) on SSRN. Here is the abstract:
Rules operate as a tool of governance by making generalizations, thereby cutting down on government officials’ need to make individual determinations. But because they are generalizations, rules can result in inefficient or perverse outcomes due to their over- and under-inclusiveness. With the aid of advances in machine-learning algorithms, however, it is becoming increasingly possible to imagine governments shifting away from a predominant reliance on general rules and instead moving toward increased reliance on precise individual determinations—or on “personalized law,” to use the term Omri Ben-Shahar and Ariel Porat use in the title of their 2021 book. Among the various technological, organizational, and political hurdles that stand in the way of a personalized system of law, I elaborate three obstacles that I refer to as the challenges of completeness, consensus, and currency. I then offer two solutions—custom and competence—that could bring about public acceptance of personalized law. Although I do not envision that these solutions can be complete ones, in the sense that they cannot prevent all problems with personalized law, a system of personalized law need not be perfect to be normatively appealing. All that personalized law must be is better than the imperfect rule-based system in place today.
Han-Wei Liu (Monash University) has posted “Exporting the First Amendment through Trade: the Global ‘Constitutional Moment’ for Online Platform Liability” (Georgetown Journal of International Law, Vol. 53, No. 1, 2022) on SSRN. Here is the abstract:
In the recent United States-Mexico-Canada Agreement and the U.S.-Japan Digital Trade Agreement, the U.S. adopts a new clause that mirrors Section 230 of the Communications Decency Act of 1996, shielding online intermediaries from liability for third-party content. For policymakers, the seemingly innocuous “Interactive Computer Services” title creates a fundamental challenge in balancing free speech against competing interests in the digital age. This Article argues against globally normalizing this clause through its diffusion in trade deals. Internally, as the Biden Administration has offered a clean slate to discuss reforms to the controversial regime, it is unwise for U.S. trade negotiators to export the same clause in future negotiations. Externally, it is problematic for other partners to accept this clause, born from American values deeply rooted in the First Amendment. Each country is entitled to achieve the fundamental right of free speech through its own economic, social, and political pathways, toward an optimal balance, and rebalance, against other interests. The clause should be dropped from future trade negotiations while policymakers worldwide grapple with the challenges posed by online platforms and reconfigure their regulatory frameworks in the digital era.
Brian Elzweig (University of West Florida) and Lawrence J. Trautman (Prairie View A&M University – College of Business) have posted “When Does a Nonfungible Token (NFT) Become a Security?” on SSRN. Here is the abstract:
Non-fungible tokens (NFTs) gained prominence in the news cycle during March 2021 when $69 million was paid in a cryptocurrency known as ether for a single piece of unique digital art titled “Everydays – The First 5000 Days.” Regulation of NFTs is complicated by the fact that the technology encompasses so many varied applications. Therefore, it is the particular use of a given NFT that will determine its appropriate regulatory regime, since it may take the form of a collectible, data associated with a physical item, a financial instrument, or a permanent record associated with a person, such as a marriage license or a property deed. Just as in the case of digital art in the form of NFTs, our laws and regulations are in a constant struggle to keep pace with the rapid introduction and diffusion of technological changes. Unlike digital currencies or cryptocurrencies, which are fungible, NFTs are not. The effective regulation of U.S. securities markets has a significant impact on capital formation, job creation, economic security, and growth of both the American and global economies. In recent years, the advent of the Internet has created novel regulatory challenges for the SEC.
The focus of our article is how and when an NFT becomes a security for purposes of U.S. securities law. We proceed in six parts. First, we briefly explain the evolution of the digital world and the emergence of virtual economies within it. Second, we describe blockchain technology and the growth in virtual currencies. Third, we explain NFTs along with some examples of their various uses. Fourth, we discuss when a nonfungible token is a security. Fifth, we explore SEC interpretations of when a crypto-asset is a regulatable security. And last, we conclude. Given the importance of U.S. securities markets in fostering job creation and global economic growth, we believe this work contributes to the understanding of this new technology and is of considerable interest to securities issuers, investors, and the regulatory community.
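Under SEC practice, the security question the authors examine turns on the Howey investment-contract test: an investment of money, in a common enterprise, with a reasonable expectation of profits derived from the efforts of others. A toy sketch of that four-prong framework follows; the example offerings are invented, and real analysis is fact-intensive and not reducible to booleans:

```python
# Toy sketch of the four-prong Howey test (SEC v. W.J. Howey Co., 1946),
# which the SEC applies to crypto-assets. Purely illustrative; the example
# offerings below are hypothetical.
from dataclasses import dataclass

@dataclass
class Offering:
    investment_of_money: bool      # prong 1
    common_enterprise: bool        # prong 2
    expectation_of_profits: bool   # prong 3
    from_efforts_of_others: bool   # prong 4

def is_investment_contract(o: Offering) -> bool:
    # An asset is an investment contract, hence a security, only if all four prongs are met.
    return (o.investment_of_money and o.common_enterprise
            and o.expectation_of_profits and o.from_efforts_of_others)

# An NFT bought purely as a collectible fails the profit-expectation prongs...
collectible = Offering(True, False, False, False)
# ...while an NFT marketed as an investment in a promoter's venture may satisfy all four.
marketed_as_investment = Offering(True, True, True, True)
print(is_investment_contract(collectible), is_investment_contract(marketed_as_investment))
```

The sketch makes the article's central point concrete: the same token technology can fall on either side of the line depending on how a particular offering is structured and marketed.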
Freya (Fangheng) Zhao (Georgetown University Law Center) has posted “Initial Coin Offerings and Extraterritorial Application of U.S. Securities Laws” (139 Banking L.J. 174 (2022)) on SSRN. Here is the abstract:
Cryptocurrency transactions have grown exponentially since Satoshi Nakamoto published the Bitcoin White Paper on Halloween 2008. As of November 2021, the cryptocurrency market has transformed into an ecosystem with 14,710 tokens and $2.6 trillion in market capitalization. The rise of initial coin offerings (“ICOs”) has been a major driver of the boom. Thousands of ICOs have raised billions of dollars since MasterCoin conducted the first reported ICO in 2013. Amid the boom, the Securities and Exchange Commission (“SEC”) has been grappling with how to apply the U.S. securities laws “extraterritorially” to regulate ICOs, which are usually cross-border due to the inherently international nature of their underlying blockchain technology. The increasingly aggressive regulatory actions from the SEC have caused a massive flight of ICOs to offshore havens. In the first quarter of 2019, 86 ICOs specifically excluded U.S. investors.
However, these efforts to avoid the jurisdiction of the U.S. securities laws have mostly turned out to be futile. The SEC is not shy about reaching beyond the U.S. water’s edge to regulate offshore ICOs, as evidenced by its investigation of the DAO and the enforcement actions against PlexCorps, Block.one, Telegram, and Ripple. Class actions brought by investors against Tezos also suggested that the presumption against extraterritoriality is no panacea to prevent the application of the U.S. securities laws over offshore ICOs.
This article examines the extraterritorial application of the U.S. securities laws to regulate offshore ICOs. The first part of the article offers a brief introduction to the jurisprudence governing the extraterritoriality of the U.S. securities laws. The second part analyzes the application of the U.S. securities laws in proceedings against ICO issuers like PlexCorps, Tezos, Block.one, Telegram, and Ripple. The third part of this article summarizes the current legal framework of applying the U.S. securities laws to offshore ICOs and concludes that such a patchwork approach is a stopgap solution: expedient but imperfect. This article ends by discussing potential future directions, including congressional legislation, international cooperation, a deferential substituted compliance approach, and adapting ICOs to existing registration exemptions.
Michael Gal (University of Haifa – Faculty of Law) has posted “Limiting Algorithmic Cartels” (Berkeley Technology Law Journal, 2023 Forthcoming) on SSRN. Here is the abstract:
Recent studies have proven that pricing algorithms can autonomously learn to coordinate prices and set them at supra-competitive levels. The growing use of such algorithms mandates the creation of solutions that limit the negative welfare effects of algorithmic coordination. Unfortunately, to date, no good means exist to limit such conduct. While this challenge has recently prompted scholars from around the world to propose different solutions, many suggestions are inefficient or impractical, and some might even strengthen coordination.
This challenge requires thinking outside the box. Accordingly, this article suggests four (partial) solutions. The first is market-based, and entails using consumer algorithms to counteract at least some of the negative effects of algorithmic coordination. By creating buyer power, such algorithms can also enable offline transactions, eliminating the online transparency that strengthens coordination. The second suggestion is to change merger review so as to limit mergers that are likely to increase algorithmic coordination. The next two are more radical, yet can capture more cases of such conduct. The third involves the introduction of a disruptive algorithm, which would disrupt algorithmic coordination by creating noise on the supply side. The final suggestion entails freezing the price of one competitor, in line with prior proposals by Edlin and others to address predatory pricing. The advantages and risks of each solution are discussed. As antitrust agencies around the world are just starting to experiment with different ways to limit algorithmic coordination, there is no better time to explore how best to achieve this important task.
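The coordination dynamic the article describes, and its third (disruptive-algorithm) proposal, can be illustrated with a deliberately simple deterministic toy; the pricing rules and every parameter below are invented for illustration and are not the learning algorithms studied in the literature:

```python
# Toy model: two pricing bots that each match and slightly raise the rival's
# last price ratchet up to a supra-competitive ceiling without any agreement;
# a "disruptive" bot that undercuts then drags prices back down.
# All prices and step sizes are hypothetical.
COMPETITIVE, MONOPOLY, STEP = 1.0, 2.0, 0.1

def coordinating(rival_price):
    # Match and slightly raise the rival's last price, up to the monopoly level.
    return min(MONOPOLY, rival_price + STEP)

def disruptive(rival_price):
    # Inject supply-side "noise" by undercutting the rival, down to the competitive level.
    return max(COMPETITIVE, rival_price - 2 * STEP)

p1 = p2 = COMPETITIVE
for _ in range(20):                  # both firms run coordinating algorithms
    p1, p2 = coordinating(p2), coordinating(p1)
supra = p1                           # prices have ratcheted up to the ceiling

for _ in range(20):                  # firm 2 is replaced by a disruptive algorithm
    p1, p2 = coordinating(p2), disruptive(p1)
print(supra, min(p1, p2))
```

The toy captures why simple price-matching dynamics are hard to reach with conspiracy doctrine (no communication ever occurs) and how a single non-cooperating algorithm can unravel the coordinated price.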