Schrepel on Smart Contracts and the Digital Single Market Through the Lens of a ‘Law + Technology’ Approach

Thibault Schrepel (University Paris 1 Panthéon-Sorbonne; VU University Amsterdam; Stanford University’s Codex Center; Sciences Po) has posted “Smart Contracts and the Digital Single Market Through the Lens of a ‘Law + Technology’ Approach” on SSRN. Here is the abstract:

The deployment of smart contracts within the European zone could make economic transactions more fluid. It also risks fragmenting the Digital Single Market (“DSM”). This conundrum calls for a constructive response to preserve both the benefits brought by smart contracts and a strong DSM.

Against this background, this report adopts a “law + technology” approach. It suggests combining law and technology to develop solutions that encourage the evolution of smart contracts (rather than hindering it) in a direction that preserves and reinforces the DSM.

Lim on B2B Artificial Intelligence Transactions: A Framework for Assessing Commercial Liability

Ernest Lim (National University of Singapore – Faculty of Law) has posted “B2B Artificial Intelligence Transactions: A Framework for Assessing Commercial Liability” on SSRN. Here is the abstract:

Business to business (“B2B”) artificial intelligence (“AI”) transactions raise challenging private law liability issues because of the distinctive nature of AI systems and particularly the new relational dynamics between AI solutions providers and procurers. This article advances a three-stage framework comprising data management, system development and implementation, and external threat management. The purpose is to unpack AI design and development processes involving the relational dynamics of providers and procurers in order to understand the parties’ respective responsibilities. Applying this framework to English commercial law, this article analyses the potential liability of AI solutions providers and procurers under the Supply of Goods and Services Act and the Sale of Goods Act. The assumption that only AI solutions providers will be subject to liability, or that no party will be liable due to the “autonomous” nature of AI systems, is rejected.

Voss on Data Protection Issues for Smart Contracts

W. Gregory Voss (TBS Business School) has posted “Data Protection Issues for Smart Contracts” (Smart Contracts: Technological, Business and Legal Perspectives (Marcelo Corrales, Mark Fenwick & Stefan Wrbka, eds., 2021) on SSRN. Here is the abstract:

Smart contracts offer promise for facilitating and streamlining transactions in many areas of business and government. However, they may also be subject to the provisions of relevant data protection laws such as the European Union’s General Data Protection Regulation (GDPR) if personal data is processed. Initially, this chapter discusses the data protection/data privacy distinction in the context of differing legal models. However, the focus of analysis is the GDPR, as the most significant and influential data protection legislation at this time, owing in part to its omnibus nature and extraterritorial scope, and its application to smart contracts.

By their very nature, smart contracts raise difficulties for the classification of the various actors involved, which will have an impact on their responsibilities under the law and their potential liability for violations. The analysis in this chapter turns on the role of the data controller in the context of smart contracts, and this chapter reviews the definition of that term and of ‘joint controller’ in light of supervisory authority guidance. In doing so, the significance of the classification is highlighted, especially in the case of the GDPR.

Furthermore, certain rights granted to data subjects under the GDPR may be difficult to provide in the context of smart contracts, such as the right to be forgotten/right to erasure, the right to rectification and the right not to be subject to a decision based solely on automated processing. This chapter addresses such issues, together with relevant supervisory authority advice, such as the use of encryption to render data practically inaccessible, approximating the result of erasure as nearly as possible. Along the way, the important distinction between anonymized data and personal data is explained, together with its practical implications, and requirements for data integrity and confidentiality (security) are detailed.
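The encryption-based workaround the chapter describes is often called “crypto-shredding”: personal data is stored on the immutable ledger only in encrypted form, and destroying the off-chain key renders the data practically unrecoverable, approximating erasure. A minimal toy sketch of the idea (not production cryptography; the SHA-256 counter-mode keystream merely stands in for a real cipher):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (SHA-256 in counter mode)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; decryption is the same operation."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

# Personal data goes on-chain only as ciphertext; the key stays off-chain.
key = secrets.token_bytes(32)
record = encrypt(key, b"data subject: Jane Doe")

# While the key exists, the data is recoverable.
assert encrypt(key, record) == b"data subject: Jane Doe"

# "Erasure" = destroying the off-chain key. The immutable ciphertext remains
# on the ledger, but the personal data it encodes is no longer accessible.
key = None
```

Whether key destruction legally satisfies Article 17 GDPR remains contested, which is precisely the kind of supervisory-authority question the chapter takes up.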

In addition, the GDPR requirement of privacy by design and by default must be respected when that legislation applies. Data protection principles such as purpose limitation and data minimisation in the case of smart contracts are also scrutinized in this chapter. Data protection and privacy must be considered when smart contracts are designed, and this chapter will help the reader understand the contours of this requirement. Even for jurisdictions outside of the European Union, privacy by design will be of interest as a best practice.

Finally, problems related to cross-border data transfers in the case of public blockchains are debated, prior to this chapter setting out key elements to allow for a GDPR-compliant blockchain and other concluding remarks.

Lehr on Smart Contracts, Real-Virtual World Convergence and Economic Implications

William Lehr (MIT) has posted “Smart Contracts, Real-Virtual World Convergence and Economic Implications” on SSRN. Here is the abstract:

Smart Contracts (SCs) are usually defined as contracts instantiated in computer-executable code that automatically executes all or parts of an agreement with the assistance of blockchain’s distributed trust technology. This is principally a technical description and results in an overly narrow focus. The goal of this paper is to provide an overview of the rapidly evolving multidisciplinary literature on Smart Contracts and to offer a synthesis perspective on their economic implications. This necessitates casting a wider net that ties SCs to the literature on the economics of AI and the earlier Industrial Organization literature to support speculation about the role of SCs in the evolution of AI and the organization of economic activity. Accomplishing this goal builds on a repurposing of the Internet hourglass model that puts SCs at the narrow waist between the real (non-digital) and virtual (digital) realms, serving as the connecting glue or portal by which AIs may play a larger role in controlling the organization of economic activity.
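The “automatically executes all or parts of an agreement” idea can be made concrete with a toy escrow: once the agreed condition is met, the payout term executes itself with no third party deciding. A minimal sketch in plain Python standing in for on-chain code (all names hypothetical; a real deployment would take the delivery signal from an oracle or a signed message):

```python
from dataclasses import dataclass

@dataclass
class Escrow:
    """Toy smart contract: funds release automatically once delivery is confirmed."""
    buyer: str
    seller: str
    amount: int
    deposited: int = 0
    delivered: bool = False
    paid_out: bool = False

    def deposit(self, amount: int) -> None:
        self.deposited += amount

    def confirm_delivery(self) -> None:
        # In a real deployment this signal would come from an oracle or the
        # buyer's signed message; here it is a plain method call.
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        # The contractual term executes itself: payout happens exactly when
        # the coded condition holds, with no intermediary exercising discretion.
        if self.delivered and self.deposited >= self.amount and not self.paid_out:
            self.paid_out = True

contract = Escrow(buyer="alice", seller="bob", amount=100)
contract.deposit(100)
contract.confirm_delivery()
assert contract.paid_out  # payment released automatically
```

Lehr’s point is that this technical picture is only the starting place; the economic question is what happens when such self-executing terms sit at the interface between digital systems and real-world activity.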

Wagner & Eidenmueller on Digital Dispute Resolution

Gerhard Wagner (Humboldt University School of Law) & Horst Eidenmueller (University of Oxford – Faculty of Law) have posted “Digital Dispute Resolution” on SSRN. Here is the abstract:

This essay identifies and analyses key developments and regulatory challenges of “Digital Dispute Resolution”. We discuss digital enforcement and smart contracts, internal complaint handling mechanisms, external online dispute resolution and courts in a digital world. Dispute resolution innovations originate primarily in the private sector. New service providers have high-powered incentives and face fewer institutional restrictions than the courts. We demonstrate that with smart contracts, digital enforcement and internal complaint handling, a new era of dispute resolution by contract without a neutral third party dawns. This development takes the idea of a “privatization of dispute resolution” to its extreme. It promises huge efficiency gains for the disputing parties. At the same time, risks of an extremely unequal distribution of these gains, to the detriment of less vigilant parties, and of undermining the rule of law loom large. The key regulatory challenge will be to control the enormous power of large, sophisticated commercial actors, especially platforms. We suggest regulatory tools to address this problem.

Reyes on Creating Cryptolaw for the Uniform Commercial Code

Carla Reyes (Southern Methodist University – Dedman School of Law) has posted “Creating Cryptolaw for the Uniform Commercial Code” (Washington and Lee Law Review, Forthcoming) on SSRN. Here is the abstract:

A contract generally only binds its parties. Security agreements, which create a security interest in specific personal property, stand out as a glaring exception to this rule. Under certain conditions, security interests not only bind the creditor and debtor, but also third-party creditors seeking to lend against the same collateral. To receive this extraordinary benefit, creditors must put the world on notice, usually by filing a financing statement with the state in which the debtor is located. Unfortunately, the Uniform Commercial Code (U.C.C.) Article 9 filing system fails to provide actual notice to interested parties and introduces risk of heavy financial losses.

To solve this problem, this Article introduces a smart contract-based U.C.C.-1 form built using Lexon, an innovative new programming language that enables the development of smart contracts in English. The proposed “Lexon U.C.C. Financing Statement” does much more than merely replicate the financing statement in digital form; it also performs several U.C.C. rules so that, for the first time, the filing system works as intended. In demonstrating that such a system remains compatible with existing law, the Lexon U.C.C. Financing Statement also reveals important lessons about the interaction of technology and commercial law.

This Article brings cryptolaw to the U.C.C. in three sections. Section I examines the failure of the U.C.C. Article 9 filing system to achieve actual notice and argues that blockchain technology and smart contracts can help the system function as intended. Section II introduces the Lexon U.C.C. Financing Statement, demonstrating how the computer code implements U.C.C. provisions. Section II also examines the goals that influenced the design of the Lexon U.C.C. Financing Statement, discusses the new programming language used to build it, and argues that the prototype could be used now, under existing law. Section III proposes five innovations for the Article 9 filing system enabled by the Lexon U.C.C. Financing Statement. Section III then considers the broader implications of the project for commercial law, legal research around smart contracts, and the interplay between technology neutral law and a lawyer’s increasingly important duty of technological competence. Ultimately, by providing the computer code needed to build the Lexon U.C.C. Financing Statement, this Article demonstrates not only that crypto-legal structures are possible, but that they can simplify the law and make it more accessible.
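One way a filing can “perform” an Article 9 rule rather than merely record it: U.C.C. § 9-515 makes a financing statement generally effective for five years after filing unless a continuation statement is filed in the six months before lapse. A toy sketch of that self-enforcing behavior, in Python rather than Lexon (field names and the day-count arithmetic are illustrative simplifications, not the statute’s exact anniversary rule):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FinancingStatement:
    """Toy sketch of a self-enforcing U.C.C.-1 record (fields hypothetical)."""
    debtor: str
    secured_party: str
    collateral: str
    filed_on: date
    lapse_on: date = None  # computed below, not supplied by the filer

    def __post_init__(self):
        # § 9-515(a): a financing statement is generally effective for
        # five years after filing (approximated here as 5 * 365 days).
        self.lapse_on = self.filed_on + timedelta(days=5 * 365)

    def is_effective(self, today: date) -> bool:
        # Lapse happens automatically: no one has to purge a stale record.
        return today < self.lapse_on

    def continue_filing(self, today: date) -> None:
        # § 9-515(d): a continuation statement may be filed within six months
        # before lapse; it extends effectiveness for another five years.
        window_opens = self.lapse_on - timedelta(days=182)
        if window_opens <= today < self.lapse_on:
            self.lapse_on += timedelta(days=5 * 365)

stmt = FinancingStatement("Debtor LLC", "Bank", "equipment", date(2021, 1, 1))
assert stmt.is_effective(date(2023, 1, 1))
assert not stmt.is_effective(date(2026, 6, 1))  # lapsed automatically
```

The Lexon prototype described in the Article goes much further, but the sketch shows the design move: rules like lapse and continuation become behavior of the record itself rather than duties of filing-office staff.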


Ebers on Liability for AI and EU Consumer Law

Martin Ebers (Humboldt University of Berlin – Faculty of Law; University of Tartu, School of Law) has posted “Liability for Artificial Intelligence and EU Consumer Law” (Journal of Intellectual Property, Information Technology and Electronic Commerce Law) on SSRN. Here is the abstract:

The new Directives on Digital Contracts – the Digital Content and Services Directive (DCSD) 2019/770 and the Sale of Goods Directive (SGD) 2019/771 – are often seen as important steps in adapting European private law to the requirements of the digital economy. However, neither directive contains special rules for new technologies such as Artificial Intelligence (AI). In light of this issue, the following paper discusses whether existing EU consumer law is equipped to deal with situations in which AI systems are either used for internal purposes by companies or offered to consumers as the main subject matter of the contract. This analysis will reveal a number of gaps in current EU consumer law and briefly discuss upcoming legislation.

Coglianese & Lampmann on Contracting for Algorithmic Accountability

Cary Coglianese (University of Pennsylvania Law School) and Erik Lampmann (University of Pennsylvania Law School) have posted “Contracting for Algorithmic Accountability” (Administrative Law Review Accord, vol. 6, p. 175, 2021) on SSRN. Here is the abstract:

As local, state, and federal governments increase their reliance on artificial intelligence (AI) decision-making tools designed and operated by private contractors, so too do public concerns increase over the accountability and transparency of such AI tools. But current calls to respond to these concerns by banning governments from using AI will only deny society the benefits that prudent use of such technology can provide. In this Article, we argue that government agencies should pursue a more nuanced and effective approach to governing the governmental use of AI by structuring their procurement contracts for AI tools and services in ways that promote responsible use of algorithms. By contracting for algorithmic accountability, government agencies can act immediately, without any need for new legislation, to reassure the public that governmental use of machine-learning algorithms will be deployed responsibly. Furthermore, unlike with the adoption of legislation, a contracting approach to AI governance can be tailored to meet the needs of specific agencies and particular uses. Contracting can also provide a means for government to foster improved deployment of AI in the private sector, as vendors that serve government agencies may shift their practices more generally to foster responsible AI practices with their private sector clients. As a result, we argue that government procurement officers and agency officials should consider several key governance issues in their contract negotiations with AI vendors. Perhaps the most fundamental issue relates to vendors’ claims to trade secret protection—an issue that we show can be readily addressed during the procurement process. Government contracts can be designed to balance legitimate protection of proprietary information with the vital public need for transparency about the design and operation of algorithmic systems used by government agencies. 
We further urge consideration in government contracting of other key governance issues, including data privacy and security, the use of algorithmic impact statements or audits, and the role for public participation in the development of AI systems. In an era of increasing governmental reliance on artificial intelligence, public contracting can serve as an important and tractable governance strategy to promote the responsible use of algorithmic tools.

Ebrahim on Algorithms in Business, Merchant-Consumer Interactions, & Regulation

Tabrez Ebrahim (California Western School of Law) has posted “Algorithms in Business, Merchant-Consumer Interactions, & Regulation” (West Virginia Law Review, Vol. 123, 2021) on SSRN. Here is the abstract:

The shift towards the use of algorithms in business has transformed merchant–consumer interactions. Products and services are increasingly tailored for consumers through algorithms that collect and analyze vast amounts of data from interconnected devices, digital platforms, and social networks. While traditionally merchants and marketers have utilized market segmentation, customer demographic profiles, and statistical approaches, the exponential increase in consumer data and computing power enables them to develop and implement algorithmic techniques that change consumer markets and society as a whole. Algorithms enable targeting of consumers more effectively, in real-time, and with high predictive accuracy in pricing and profiling strategies. In so doing, algorithms raise new theoretical considerations on information asymmetry and power imbalances in merchant–consumer interactions and multiply existing biases and discrimination or create new ones in society. Against this backdrop of the concentration of algorithmic decision-making in merchants, the traditional understanding of consumer protection is overdue for change, and normative debate about fairness, accountability, and transparency and interpretive considerations for non-discrimination is necessary. The theory that notice and choice in data protection laws and consumer protection laws are sufficient in an algorithmic era is inadequate, and countervailing consumer empowerment is necessary to balance the power between merchants and consumers. While legislative activity and regulation have conceivably increased consumer empowerment, such measures may provide a limited or unclear response in the face of the transformative nature of algorithms. Instead, policy makers should consider responsible algorithmic code and other proposals as potentially effective responses in the analysis of socio-economic dimensions of algorithms in business.

Kolt on Predicting Consumer Contracts

Noam Kolt (University of Toronto) has posted “Predicting Consumer Contracts” (Berkeley Technology Law Journal, Vol. 37, 2022 Forthcoming) on SSRN. Here is the abstract:

This Article empirically examines whether a computational language model can read and understand consumer contracts. Language models are able to perform a wide range of complex tasks by predicting the next word in a sequence. In the legal domain, language models can summarize laws, draft case documents, and translate legalese into plain English. However, the ability of language models to inform consumers of their contractual rights and obligations has not been explored in detail.
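The core mechanism, next-word prediction, can be illustrated with a toy bigram model: count which word follows which in a corpus and predict the most frequent continuation. GPT-3 does the same task with a neural network trained on hundreds of billions of words rather than a frequency table (the tiny corpus below is invented for illustration):

```python
from collections import Counter, defaultdict

corpus = ("the contract is binding . the contract is void . "
          "the agreement is binding .").split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation observed in the corpus."""
    return following[word].most_common(1)[0][0]

assert predict_next("the") == "contract"  # seen twice, vs. "agreement" once
assert predict_next("is") == "binding"    # seen twice, vs. "void" once
```

A model that only ever predicts likely continuations can nonetheless answer questions, summarize, and translate when those tasks are framed as text to be continued, which is what makes the contract-reading case study possible.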

To showcase the opportunities and challenges of using language models to read consumer contracts, this Article studies the performance of GPT-3, a powerful language model released in June 2020. The case study employs a novel dataset comprising questions relating to the terms of service of popular U.S. websites. Although the results are not definitive, they offer several important insights. First, owing to its immense training data, the model can exploit subtle informational cues embedded in questions. Second, the model performed poorly on contractual provisions that favor the rights and interests of consumers, suggesting that it may contain an anti-consumer bias. Third, the model is brittle in unexpected ways. Performance was highly sensitive to the wording of questions, but surprisingly indifferent to variations in contractual language.

While language models could potentially empower consumers, they could also provide misleading legal advice and entrench harmful biases. Leveraging the benefits of language models in reading consumer contracts and confronting the challenges they pose requires a combination of engineering and governance. Policymakers, together with developers and users of language models, should begin exploring technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.