Danny Friedmann (Peking University School of Transnational Law) has posted “Digital Single Market, First Stop to The Metaverse: Counterlife of Copyright Protection Wanted” (Law & Econ of the Digital Transformation, Klaus Mathis and Avishalom Tor, eds. (Springer, 2022 Forthcoming)) on SSRN. Here is the abstract:
Building upon Niva Elkin-Koren’s “fair use by design” concept, this chapter explores how artificial intelligence can be used to implement copyright exceptions and limitations.
Section 2 gives a brief overview of the evolution of the copyright acquis in the US and EU with regard to platforms, and discusses the implications of strict liability in an era of massive online use and infringement. That liability has already pushed platforms, and a fortiori will push platforms in the metaverse, toward automatic, scalable solutions, which in turn will increase the need for sufficient safeguards for unauthorized but legal use that falls under an exception or limitation.
Section 3 introduces the implications of the metaverse for intermediary liability for copyright infringement and the need for “breathing space” for users and experimentation.
Section 4 explores the safeguards for legitimate use of content, which includes exceptions and limitations.
Building on Elkin-Koren’s “fair use by design” concept, Section 5 sets out the prerequisites for designing algorithmic exceptions or limitations, asks whether automated content recognition tools should be classified as “high-risk AI” under the proposed Artificial Intelligence Act, and considers incentives against over-blocking.
Pietro De Giovanni (Luiss University) has posted “Blockchain Technology Applications in Businesses and Organizations” on SSRN. Here is the abstract:
Blockchain technology has the ability to disrupt industries and transform business models since all intermediaries and stakeholders can now interact with little friction and at a fraction of the current transaction costs. Using blockchain technology, firms can adopt new applications and processes by pursuing transparency and control, low bureaucracy, trustless relationships, high standards of responsibility, and sustainability. As a result, businesses and organizations can successfully implement blockchain to grant transparency to consumers and end-users; remove challenges linked to pollution, fraud, human-rights abuses, and other inefficiencies; and guarantee traceability of goods and services by univocally identifying the provenance, quantity, and quality of inputs along with their treatment and origin. Blockchain Technology Applications in Businesses and Organizations reveals the true advantages that blockchain entails for firms by creating transparent and digital transactions, resolving conflicts and exceptions, and providing incentive-based mechanisms and smart contracts. This book seeks to create a clear understanding of blockchain’s applications such that business leaders can see and evaluate its real advantages. Blockchain is then analyzed not from the typical perspective of financial tools using cryptocurrencies and bitcoins but from the perspective of the business advantages for businesses and organizations. Specifically, the book highlights the advantages of blockchain across different segments and industries by analyzing specific aspects like procurement, manufacturing, contracts, inventory, logistics, operations, sustainability, technology, and innovation. It is an essential reference source for managers, executives, IT specialists, students, operations managers, supply chain managers, project managers, technology managers, academicians, and researchers.
Rory Van Loo (Boston University – School of Law; Yale ISP) has posted “Privacy Pretexts” (Cornell Law Review, Forthcoming) on SSRN. Here is the abstract:
Data privacy’s ethos lies in protecting the individual from institutions. Increasingly, however, institutions are deploying privacy arguments in ways that harm individuals. Platforms like Amazon, Facebook, and Google wall off information from competitors in the name of privacy. Financial institutions under investigation justify withholding files from the Consumer Financial Protection Bureau by saying they must protect sensitive customer data. In these and other ways, the private sector is exploiting privacy to avoid competition and accountability. This Article highlights the breadth of privacy pretexts and uncovers their moral structure. Like most pretexts, there is an element of truth to the claims. But left unchallenged, they will pave a path contrary to privacy’s ethos by blocking individuals’ data allies—the digital helpers, competitors, and regulators who need access to personal data to advance people’s interests. Addressing this move requires recognizing and overcoming deep tensions in the field of privacy. Although data privacy’s roots are in guarding against access, its future depends on promoting allied access.
Hossein Nabilou (University of Amsterdam Law School; UNIDROIT) has posted “The Law and Macroeconomics of Custody and Asset Segregation Rules: Defining the Perimeters of Crypto-Banking” on SSRN. Here is the abstract:
Custody – simply defined as holding securities or funds on behalf of third parties – is one of the key institutions that defines and distinguishes major financial institutions in the financial system. However, custody rules in financial law have traditionally been studied as a microprudential tool for investor protection purposes, while their macroeconomic impact has largely been overlooked. Inspired by the literature on asset custody and its impact on the institutional design of traditional financial markets, institutions, and infrastructures, this paper studies the potential impact of defining custody rules in the cryptoasset markets on the future development of the cryptoasset ecosystem. In traditional finance, a survey of relevant regulations applicable to financial institutions shows that custody rules and client asset (segregation) rules apply to all financial institutions, other than commercial banks’ core business activity (i.e., deposit-taking). The most salient impact of exempting deposit contracts from custody and client asset rules has been the emergence of a business model for banks that treat their clients’ funds as their own and use them for their own accounts. Commingling clients’ funds with those of the bank is a critical defining feature of the banking industry that differentiates it from non-bank financial institutions as well as non-financial firms, and positions banks at the heart of monetary systems. Custody and asset segregation rules can play the same important role in the future development of the cryptoasset industry. To delineate the scope of crypto-banking and differentiate it from other types of cryptoasset services, such as exchange and custodial services, it is crucial to start from the custody and asset segregation rules. This paper advocates for a presumption of custody when a client does not self-custody their cryptoassets, giving (or sharing) the control of the assets to a third party.
It argues that such a presumption not only would serve the objectives of investor protection but also could prevent excessive credit creation in the cryptoasset ecosystem and the potential risk spillovers to the conventional financial markets and the real economy.
Sonia Katyal (UC Berkeley School of Law) has posted “Democracy and Distrust in an Era of Artificial Intelligence” (Daedalus, Journal of the American Academy of Arts & Sciences 2022) on SSRN. Here is the abstract:
Our legal system has historically operated under the general view that courts should defer to the legislature. There is one significant exception to this view: cases in which it appears that the political process has failed to recognize the rights or interests of minorities. This basic approach provides much of the foundational justification for the role of judicial review in protecting minorities from discrimination by the legislature. Today, the rise of AI decision-making poses a similar challenge to democracy’s basic framework. As I argue in this essay, three trends in AI (privatization, prediction, and automation) have combined to pose similar risks to minorities. In this essay, I outline what a theory of judicial review would look like in an era of artificial intelligence, analyzing both the limitations and the possibilities of judicial review of AI. Here, I draw on cases in which AI decision-making has been challenged in courts to show how concepts of due process and equal protection can be recuperated in a modern AI era, and even integrated into AI, to provide for better oversight and accountability.
Christophe Carugati (Université Paris II) has posted “The Implementation of the Digital Markets Act with National Antitrust Laws” on SSRN. Here is the abstract:
The Commission’s December 2020 proposal for a Digital Markets Act (DMA) resulted in a compromise text with the Council and the Parliament on March 24, 2022. While the text, which will impose ex-ante obligations and prohibition rules on large online platforms acting as “gatekeepers” before any wrongdoing occurs, is due to enter into force in October 2022, the same platforms are already under investigation in Germany under a DMA-like competition law that also imposes prohibition rules ex-ante. Other countries in Europe, including Italy, are considering following Germany and implementing new competition rules to adapt to the digital economy. How should the DMA be implemented alongside national competition laws? This question is crucial because inconsistency will inevitably hamper the effectiveness of both the DMA and national competition laws. The paper addresses this question by studying the DMA and the German implementation framework. Section I explains how legislators envisage the implementation of the DMA alongside national competition laws. Section II then considers the implementation of DMA-like national competition rules by focusing the analysis on Germany, which already enforced its new legislation against Google in January 2022. Section III designs a cooperation model between the DMA and national competition laws. Section IV concludes.
Sylvia Lu (UC Berkeley School of Law) has posted “Data Privacy, Human Rights, and Algorithmic Opacity” (California Law Review, Vol. 110, 2022) on SSRN. Here is the abstract:
Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.
The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.
Margot E. Kaminski (University of Colorado Law School; Yale ISP) and Jennifer M. Urban (UC Berkeley School of Law) have posted “The Right to Contest AI” (Columbia Law Review, Vol. 121, 2021) on SSRN. Here is the abstract:
Artificial intelligence (AI) is increasingly used to make important decisions, from university admissions selections to loan determinations to the distribution of COVID-19 vaccines. These uses of AI raise a host of concerns about discrimination, accuracy, fairness, and accountability.
In the United States, recent proposals for regulating AI focus largely on ex ante and systemic governance. This Article argues instead—or really, in addition—for an individual right to contest AI decisions, modeled on due process but adapted for the digital age. The European Union, in fact, recognizes such a right, and a growing number of institutions around the world now call for its establishment. This Article argues that despite considerable differences between the United States and other countries, establishing the right to contest AI decisions here would be in keeping with a long tradition of due process theory.
This Article then fills a gap in the literature, establishing a theoretical scaffolding for discussing what a right to contest should look like in practice. This Article establishes four contestation archetypes that should serve as the bases of discussions of contestation both for the right to contest AI and in other policy contexts. The contestation archetypes vary along two axes: from contestation rules to standards and from emphasizing procedure to establishing substantive rights. This Article then discusses four processes that illustrate these archetypes in practice, including the first in-depth consideration of the GDPR’s right to contestation for a U.S. audience. Finally, this Article integrates findings from these investigations to develop normative and practical guidance for establishing a right to contest AI.
Simon Chin (Yale Law School) has posted “Introducing Independence to the Foreign Intelligence Surveillance Court” (131 Yale L.J. 655 (2021)) on SSRN. Here is the abstract:
The Foreign Intelligence Surveillance Court (FISC), which reviews government applications to conduct surveillance for foreign intelligence purposes, is an anomaly among Article III courts. Created by the Foreign Intelligence Surveillance Act (FISA) in 1978, the FISC ordinarily sits ex parte, with the government as the sole party to the proceedings. The court’s operations and decisions are shrouded in secrecy, even as they potentially implicate the privacy and civil liberties interests of all Americans. After Edward Snowden disclosed the astonishing details of two National Security Agency mass surveillance programs that had been approved by the FISC, Congress responded with the USA FREEDOM Act of 2015. The bill’s reforms included the creation of a FISA amicus panel: a group of five security-cleared, part-time outside attorneys available to participate in FISC proceedings at the court’s discretion. Policy makers hoped to introduce an independent voice to the FISC that could challenge the government’s positions and represent the civil liberties interests of the American people. With the FBI’s investigation of Trump campaign advisor Carter Page in 2016 and 2017 raising new concerns about the FISC’s one-sided proceedings, it is now imperative to assess the FISA amicus provision: how it has functioned in practice since 2015, what effects it has had on foreign intelligence collection, and whether it has achieved the objectives that motivated its creation.
To conduct this assessment and overcome the challenges of studying a secret court, this Note draws upon the first systematic set of interviews conducted with six of the current and former FISA amici. This Note also includes interviews with two former FISA judges and three former senior government attorneys intimately involved in the FISA process. Using these interviews, as well as declassified FISA material, this Note presents an insiders’ view of FISC proceedings and amicus participation at the court. The Note arrives at three main insights about the amicus panel. First, amicus participation at the FISC has not substantially interfered with the collection of timely foreign intelligence information. Second, the available record suggests that amici have had a limited impact on privacy and civil liberties. Third, there are significant structural limitations to what incremental reforms to the existing amicus panel can accomplish. Instead, this Note supports the creation of an office of the FISA special advocate—a permanent presence at the FISC to serve as a genuine adversary to the government. While Congress considered and rejected a FISA special advocate in 2015, this Note reenvisions the original proposal with substantive and procedural modifications to reflect the lessons of the past six years, as well as with a novel duty: oversight of approved FISA applications. This Note’s proposal would address both the limitations of the FISA amicus panel that have become manifest in practice and the new Carter Page-related concerns about individual surveillance.
Lawrence J. White (NYU Stern School of Business) has posted “The Dead Hand of Cellophane and the Federal Google and Facebook Antitrust Cases: Market Delineation Will Be Crucial” (The Antitrust Bulletin, Forthcoming) on SSRN. Here is the abstract:
The DOJ and FTC monopolization cases against Google and Facebook, respectively, represent the most important federal non-merger antitrust initiatives since (at least) the 1990s. As in any monopolization case, market delineation will be a central feature of both cases – as it was in the du Pont Cellophane case of 65 years ago. Without a delineated market, how can one determine whether a company has engaged in monopolization? Unfortunately, there is currently no accepted market delineation paradigm that can help the courts address this issue for monopolization cases. And this void generally cannot be filled by the market delineation paradigm that is embedded in the DOJ-FTC “Horizontal Merger Guidelines”: although that paradigm has had almost 40 years of usage and is now well established and accepted for merger analysis, it generally has no applicability to market delineation in monopolization cases.
This article expands on this argument and shows the potential difficulties that are likely to arise in this area of market delineation and the consequent problems for both cases.
This article also points the way toward a paradigm that offers a sensible approach to dealing with these difficulties.