Mark Shope (National Yang Ming Chiao Tung University; Indiana University Robert H. McKinney School of Law) has posted “The Bill of Lading on the Blockchain: An Analysis of its Compatibility with International Rules on Commercial Transactions” (Minnesota Journal of Law, Science & Technology, Vol. 22, 2021) on SSRN. Here is the abstract:
This article examines the legal compatibility of a blockchain bill of lading under the following UNCITRAL works: the Model Law on Electronic Commerce, the Model Law on Electronic Signatures, the Convention on the Use of Electronic Communications in International Contracts, the Rotterdam Rules, and the Model Law on Electronic Transferable Records. The bill of lading has been around for centuries, shaping the cross-border sales landscape while at the same time being shaped by it. Blockchain technology is providing an opportunity to assess how various industries are conducting business, including the cross-border sales landscape. The compatibility of blockchain with bills of lading may seem unusual, since the former may be perceived as a new, disruptive technology originally used to trade cryptocurrency and the latter may be perceived as a centuries old, outdated solution that has resisted change. This article attempts to show that these two systems can in fact be compatible with each other and be compatible with international rules on commercial transactions, specifically as they relate to the bill of lading. Blockchain could be the technology that will put an end to the drawbacks of paper bills of lading, and the bill of lading system, if fully adopted, could be the application that develops blockchain technology to its full potential in the shipping industry.
Noga Blickstein Shchory (University of Haifa, Faculty of Law) and Michal Gal (University of Haifa – Faculty of Law) have posted “Market Power Parasites: Abusing the Power of Digital Intermediaries to Harm Competition” (Harvard Journal of Law & Technology, Vol. 35, 2021) on SSRN. Here is the abstract:
Some digital information intermediaries, such as Google and Facebook, enjoy significant and durable market power. Concerns regarding the anti-competitive effects of such power have largely focused on conduct engaged in by the infomediaries themselves, and have led to several recent, well-publicized regulatory actions in the US and elsewhere. This article adds a new dimension to these concerns: the abuse of such power by other market players, which lack market power themselves, in a way which significantly harms the competitive process and undermines the integrity of the relevant information market. We call such abusers “market power parasites.”
We provide three examples of parasitic conduct in online information markets: (1) black hat search engine optimization, (2) click fraud, and (3) fraudulent ratings and reviews. In each of these examples the manipulating parasite utilizes the infomediary’s market power to potentially turn an otherwise limited fraud into a manipulation of market dynamics, with significant anti-competitive effects.
This separation between power and conduct in the case of market power parasites creates an unwarranted lacuna which is not addressed by existing laws aimed at preventing abuses of market power. Antitrust law does not capture such parasites because it only prohibits unilateral anti-competitive conduct if such conduct is engaged in by a monopolist. At the same time, fraud torts require proof of specific reliance and are therefore limited to a particular wrong, disregarding the broader competitive concerns resulting from parasitic conduct.
To bridge this gap, we suggest a fraud-on-the-online-information-markets rule, akin to the fraud-on-the-market rule in securities law. We propose to eliminate the rigid fraud tort requirement to prove reliance, and replace it with a presumption of reliance that will apply once the plaintiff proves harm to the integrity of an online infomediary. Our proposal strengthens competitors’ cause of action, releasing them from the arguably ill-fitting need to prove specific reliance, thereby increasing enforcement against the anti-competitive acts of market power parasites which harm the integrity of information in digital markets.
Mirko Bagaric (Director of the Evidence-Based Sentencing and Criminal Justice Project, Swinburne University Law School), Jennifer Svilar, Melissa Bull (Queensland University of Technology), Dan Hunter (Queensland University of Technology), and Nigel Stobbs (Queensland University of Technology – Faculty of Law) have posted “The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent Artificial Intelligence” (American Criminal Law Review, Vol. 59, No. 1, Forthcoming) on SSRN. Here is the abstract:
Algorithms are increasingly used in the criminal justice system for a range of important matters, including determining the sentence that should be imposed on offenders; whether offenders should be released early from prison; and the locations where police should patrol. The use of algorithms in this domain has been severely criticized on a number of grounds, including that they are inaccurate and discriminate against minority groups. Algorithms are used widely in relation to many other social endeavors, including flying planes and assessing eligibility for loans and insurance. In fact, most people regularly use algorithms in their day-to-day lives. Google Maps is an algorithm, as are Siri, weather forecasts, and automatic pilots. The criminal justice system is one of the few human activities which has not embraced the use of algorithms. This Article explains why the criticisms that have been leveled against the use of algorithms in the criminal justice domain are flawed. The manner in which algorithms operate is generally misunderstood. Algorithms are not autonomous machine applications or processes. Instead, they are always designed by humans and hence their capability and efficacy are, like all human processes, contingent upon the quality and accuracy of the design process and manner in which they are implemented. Algorithms can replicate all of the high-level human processing but have the advantage that they process vast sums of information far more quickly than humans. Thus, well-designed algorithms overcome all of the criticisms levelled against them. Moreover, because algorithms do not have feelings, the accuracy of their decision-making is far more objective, transparent, and predictable than that of humans. They are the best means to overcome the pervasive bias and discrimination that exists in all parts of the deeply flawed criminal justice system.
Anthony Man-cho So (The Chinese University of Hong Kong (CUHK)) has posted “Technical Elements of Machine Learning for Intellectual Property Law” (Artificial Intelligence and Intellectual Property, 2020) on SSRN. Here is the abstract:
Recent advances in artificial intelligence (AI) technologies have transformed our lives in profound ways. Indeed, AI has not only enabled machines to see (e.g., face recognition), hear (e.g., music retrieval), speak (e.g., speech synthesis), and read (e.g., text processing), but also, so it seems, given machines the ability to think (e.g., board game-playing) and create (e.g., artwork generation). This chapter introduces the key technical elements of machine learning (ML), which is a rapidly growing sub-field in AI and drives many of the aforementioned applications. The goal is to elucidate the ways human efforts are involved in the development of ML solutions, so as to facilitate legal discussions on intellectual property issues.
Ryan Abbott (University of Surrey School of Law; University of California, Los Angeles – David Geffen School of Medicine) has posted an excerpt from his book “The Reasonable Robot: Artificial Intelligence and the Law” on SSRN. Here is the abstract:
AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law.