Download of the Week

The Download of the Week is “Report on Civil Liability for Misuse of Private Information” by Jack Tsen-Ta Lee and Phang Hsiao Chung (Simon Constantine (ed), Singapore: Law Reform Committee, Singapore Academy of Law, 2020), posted on SSRN. Here is the abstract:

This report issued by the Law Reform Committee of the Singapore Academy of Law considers whether the existing legal protections from the disclosure and serious misuse of private information in Singapore are sufficient and effective.

At present, while various protections for victims of such misuse and related breaches of privacy exist, these derive from an assortment of different statutory and common law causes of action (for example, suing for intentional infliction of emotional distress, private nuisance and/or breach of confidence, or bringing claims under the Personal Data Protection Act or Protection from Harassment Act). This patchwork of laws – several of which were designed primarily to address matters other than misuse of private information – not only risks making the law more difficult for victims to navigate; it also risks some instances of serious misuse of private information not being effectively provided for, and those affected finding themselves with no real recourse or remedy.

Given these shortcomings, it is submitted that a statutory tort of misuse of private information should be introduced.

(The draft bill annexed to the report was prepared by Phang Hsiao Chung, Deputy Registrar of the Supreme Court, in his capacity as a member of the Law Reform Committee.)

Sears on Algorithmic Speech and Freedom of Expression

Alan M. Sears (Center for Law and Digital Technologies (eLaw), Leiden Law School, Leiden University) has posted “Algorithmic Speech and Freedom of Expression” (Vanderbilt Journal of Transnational Law, Vol. 53, No. 4, 2020) on SSRN. Here is the abstract:

Algorithms have become increasingly common, and with this development, so have algorithms that approximate human speech. This has introduced new issues with which courts and legislators will have to grapple. Courts in the United States have found that search engine results are a form of speech protected by the Constitution, and cases in Europe concerning liability for autocomplete suggestions have led to varied results. Beyond these instances, insights into how courts handle algorithmic speech are few and far between.

By focusing on three categories of algorithmic speech, defined as curated production, interactive/responsive production, and semi-autonomous production, this Article analyzes these various forms of algorithmic speech within the international framework for freedom of expression. After a brief introduction of that framework and a look at approaches to algorithmic speech in the United States, the Article examines whether the creators or controllers of different forms of algorithms should be considered content providers or mere intermediaries, a determination that ultimately has implications for liability, which is also explored. The Article then looks at possible interferences with algorithmic speech and how such interferences may be examined under the three-part test – particular attention is paid to the balancing of rights and interests at play – in order to answer the question of the extent to which algorithmic speech is worthy of protection under international standards of freedom of expression. Finally, the Article discusses other relevant issues surrounding algorithmic speech that will have an impact going forward, many of which involve questions of policy and societal values that accompany granting algorithmic speech protection.

Souza, Abe, Lima & Souza on the Brazilian Law of Personal Data Protection

Jonatas S. De Souza (Paulista University), Jair M. Abe (Paulista University), Luiz A. De Lima (Paulista University), and Nilson A. De Souza have posted “The Brazilian Law of Personal Data Protection” (International Journal of Network Security & Its Applications (IJNSA), Vol. 12, No. 6, November 2020) on SSRN. Here is the abstract:

Rapid technological change and globalization have created new challenges for the protection and processing of personal data. In 2018, Brazil introduced a new law setting out how personal data should be collected and handled in order to guarantee the security and integrity of the data holder. The General Personal Data Protection Law (LGPD) was sanctioned on September 18th, 2020. The citizen is now the owner of his or her personal data, which means that he or she has rights over this information and can demand transparency from companies regarding its collection, storage, and use. This is a major change, and it is therefore extremely important that everyone understands their role under the LGPD. The purpose of this paper is to set out the principles of the LGPD and to report real cases of personal data leakage, so as to convey the importance of the subject to Internet users and its benefits for Brazilian society as a whole.

Braulin on the Effects of Personal Information on Competition

Francesco Clavorà Braulin (ZEW – Leibniz Centre for European Economic Research) has posted “The Effects of Personal Information on Competition: Consumer Privacy and Partial Price Discrimination” on SSRN. Here is the abstract:

This article studies the effects of consumer information on the intensity of competition. In a two-dimensional duopoly model of horizontal product differentiation, firms use consumer information to price discriminate. I contrast full-privacy and no-privacy benchmarks with intermediate regimes in which the firms target consumers only partially. No privacy is traditionally detrimental to industry profits. Instead, I show that with partial privacy firms are always better off with price discrimination: the relationship between information and profits is hump-shaped. In aggregate, consumers prefer either no privacy or full privacy. However, even though this implies that privacy protection in digital markets should be either very hard or very easy, the effects of information on individual surplus are ambiguous: there are always winners and losers. When an upstream data seller holds partially informative data, an exclusive allocation arises. By contrast, when data is fully informative, each competitor acquires consumer data, but on a different dimension.

Esposito on Making Personalized Prices Pro-Competitive and Pro-Consumers 

Fabrizio Esposito (CEDIS – Nova School of Law) has posted “Making Personalised Prices Pro-Competitive and Pro-Consumers” (Cahiers Du CeDIE Working Papers 2020/02) on SSRN. Here is the abstract:

Price personalisation raises four policy concerns: building trust, fostering competitiveness, increasing access, and avoiding exploitation. The Modernisation Directive introduces an information requirement about personalised prices. The research explains how this information requirement can and shall be used to make price personalisation pro-competitive and pro-consumers. The analysis can be divided into two main parts. First, disclosing the impersonal price is a simple and effective way to reap the benefits of price personalisation while counteracting its negative effects. Second, the legal grounds in EU law for a right to know the impersonal price as well are identified. After explaining that consumers have a right to be offered a personalised price, it is shown that the principles of transparency and effectiveness in EU consumer law, together with the right granted by Article 22(3) GDPR, imply that consumers also have the right to know the impersonal price. This right to know the impersonal price is a critical tile in solving the puzzle of the best governance of digital markets in the European Union.

Recommended. On the benefits of mandatory disclosure of the past personalized prices of others, see “The End of Bargaining in the Digital Age.”

Koshiyama et al. on Algorithm Auditing

Adriano Koshiyama (University College London Department of Computer Science) et al. have posted “Towards Algorithm Auditing: A Survey on Managing Legal, Ethical and Technological Risks of AI, ML and Associated Algorithms” on SSRN. Here is the abstract:

Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage. High-profile cases include VW’s Dieselgate scandal, with fines worth $34.69B; Knight Capital’s bankruptcy (~$450M) caused by a glitch in its algorithmic trading system; and Amazon’s AI recruiting tool being scrapped after showing bias against women. In response, governments are legislating and imposing bans, regulators are fining companies, and the judiciary is discussing potentially making algorithms artificial “persons” in law.

Soon there will be ‘billions’ of algorithms making decisions with minimal human intervention, from autonomous vehicles and finance to medical treatment, employment, and legal decisions. Indeed, scaling to problems beyond the human is a major point of using such algorithms in the first place. As with Financial Audit, governments, business, and society will require Algorithm Audit: formal assurance that algorithms are legal, ethical, and safe. A new industry is envisaged: Auditing and Assurance of Algorithms (cf. Data privacy), with the remit to professionalize and industrialize AI, ML, and associated algorithms.

The stakeholders range from those working on policy and regulation to industry practitioners and developers. We also anticipate that the nature and scope of the auditing levels and framework presented will inform those interested in systems of governance and compliance with regulations and standards. Our goal in this paper is to survey the key areas necessary to perform auditing and assurance, and to instigate debate in this novel area of research and practice.

Dornis on ‘Authorless Works’ and ‘Inventions without Inventor’ and the Muddy Waters of ‘AI Autonomy’ in Intellectual Property Doctrine 

Tim W. Dornis (Leuphana University of Lueneburg, New York University School of Law) has posted “Of ‘Authorless Works’ and ‘Inventions without Inventor’ – The Muddy Waters of ‘AI Autonomy’ in Intellectual Property Doctrine” on SSRN. Here is the abstract:

Artificial intelligence (AI) has entered all areas of our life, including creative production and inventive activity. Modern AI is used, inter alia, for the production of newspaper articles; the generation of weather, company, and stock market reports; the composition of music; the creation of visual arts; and pharmaceutical and medicinal research and development. Despite the exponential growth of such real-world scenarios of artificial creativity and inventiveness, it is still unclear whether the output of creative and inventive AI processes – i.e., AI-generated ‘works’ and ‘inventions’ – should be protected under copyright or patent law. Current doctrine largely denies such protection on the grounds that no human creator exists where AI functions autonomously, in the sense of being independent of and uncontrolled by humans. More recently, both the European Parliament and the EU Commission have put the topic on their agenda. Interestingly, their positions seem to contradict each other – one in favour of, and one against, creating new instruments of protection for AI-generated output. This, together with the rising debate in legal scholarship (with equally contradictory positions), invites more analysis. A closer look at the doctrinal foundations and economic underpinnings of ‘work without author’ and ‘invention without inventor’ scenarios reveals that neither the law as it stands nor scholarly debate is currently up to the challenges posed by AI creativity and inventiveness.

Cihon, Maas & Kemp on Whether Artificial Intelligence Governance Should Be Centralised

Peter Cihon (Center for the Governance of AI, Future of Humanity Institute, University of Oxford), Matthijs M. Maas (CSER, University of Cambridge; CECS, University of Copenhagen) and Luke Kemp (ANU Fenner School of Environment and Society) have posted “Should Artificial Intelligence Governance be Centralised? Design Lessons from History” (Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY: ACM, 2020, 228–34) on SSRN. Here is the abstract:

Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty of securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs, and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a coherent set of issues could be beneficial, but locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now, fragmentation will likely persist. This should be closely monitored to see whether it is self-organising or simply inadequate.

Lee & Phang on Civil Liability for Misuse of Private Information

Jack Tsen-Ta Lee and Phang Hsiao Chung (Attorney-General’s Chambers, Singapore) have posted “Report on Civil Liability for Misuse of Private Information” (Simon Constantine (ed), Singapore: Law Reform Committee, Singapore Academy of Law, 2020) on SSRN. Here is the abstract:

This report issued by the Law Reform Committee of the Singapore Academy of Law considers whether the existing legal protections from the disclosure and serious misuse of private information in Singapore are sufficient and effective.

At present, while various protections for victims of such misuse and related breaches of privacy exist, these derive from an assortment of different statutory and common law causes of action (for example, suing for intentional infliction of emotional distress, private nuisance and/or breach of confidence, or bringing claims under the Personal Data Protection Act or Protection from Harassment Act). This patchwork of laws – several of which were designed primarily to address matters other than misuse of private information – not only risks making the law more difficult for victims to navigate; it also risks some instances of serious misuse of private information not being effectively provided for, and those affected finding themselves with no real recourse or remedy.

Given these shortcomings, it is submitted that a statutory tort of misuse of private information should be introduced.

(The draft bill annexed to the report was prepared by Phang Hsiao Chung, Deputy Registrar of the Supreme Court, in his capacity as a member of the Law Reform Committee.)

Recommended.

Goelzhauser, Kassow, and Rice on Supreme Court Case Complexity

Greg Goelzhauser (Utah State University – Department of Political Science), Benjamin Kassow (University of North Dakota – Department of Political Science and Public Administration), and Douglas Rice (University of Massachusetts Amherst – Department of Political Science) have posted “Measuring Supreme Court Case Complexity” (Journal of Law, Economics, and Organization, Forthcoming) on SSRN. Here is the abstract:

Case complexity is central to the study of judicial politics. The dominant measures of Supreme Court case complexity use information on legal issues and provisions observed post-decision. As a result, scholars using these measures to study merits stage outcomes such as bargaining, voting, separate opinion production, and opinion content introduce post-treatment bias and exacerbate endogeneity concerns. Furthermore, existing issue measures are not valid proxies for complexity. Leveraging information on issues and provisions extracted from merits briefs, we develop a new latent measure of Supreme Court case complexity. This measure maps with the prevailing understanding of the underlying concept while mitigating inferential threats that hamper empirical evaluations. Our brief-based measurement strategy is generalizable to other contexts where it is important to generate exogenous and pre-treatment indicators for use in explaining merits decisions.