Koshiyama et al. on Algorithm Auditing

Adriano Koshiyama (University College London Department of Computer Science) et al. have posted “Towards Algorithm Auditing: A Survey on Managing Legal, Ethical and Technological Risks of AI, ML and Associated Algorithms” on SSRN. Here is the abstract:

Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage. High-profile cases include VW’s Dieselgate scandal, with fines worth $34.69B; Knight Capital’s bankruptcy (~$450M), caused by a glitch in its algorithmic trading system; and Amazon’s AI recruiting tool being scrapped after showing bias against women. In response, governments are legislating and imposing bans, regulators are fining companies, and the judiciary is discussing potentially making algorithms artificial “persons” in law.

Soon there will be ‘billions’ of algorithms making decisions with minimal human intervention, from autonomous vehicles and finance to medical treatment, employment, and legal decisions. Indeed, scaling to problems beyond human capacity is a major point of using such algorithms in the first place. As with Financial Audit, governments, business and society will require Algorithm Audit: formal assurance that algorithms are legal, ethical and safe. A new industry is envisaged: Auditing and Assurance of Algorithms (cf. data privacy), with the remit to professionalize and industrialize AI, ML and associated algorithms.

The stakeholders range from those working on policy and regulation to industry practitioners and developers. We also anticipate that the nature and scope of the auditing levels and framework presented will inform those interested in systems of governance and compliance with regulations/standards. Our goal in this paper is to survey the key areas necessary to perform auditing and assurance, and to instigate debate in this novel area of research and practice.

Dornis on ‘Authorless Works,’ ‘Inventions without Inventor,’ and the Muddy Waters of ‘AI Autonomy’ in Intellectual Property Doctrine

Tim W. Dornis (Leuphana University of Lueneburg, New York University School of Law) has posted “Of ‘Authorless Works’ and ‘Inventions without Inventor’ – The Muddy Waters of ‘AI Autonomy’ in Intellectual Property Doctrine” on SSRN. Here is the abstract:

Artificial intelligence (AI) has entered all areas of our life, including creative production and inventive activity. Modern AI is used, inter alia, for the production of newspaper articles; the generation of weather, company, and stock market reports; the composition of music; the creation of visual arts; and pharmaceutical and medicinal research and development. Despite the exponential growth of such real-world scenarios of artificial creativity and inventiveness, it is still unclear whether the output of creative and inventive AI processes – i.e., AI-generated ‘works’ and ‘inventions’ – should be protected under copyright or patent law. Current doctrine largely denies such protection on the grounds that no human creator exists in cases where AI functions autonomously in the sense of being independent of and uncontrolled by humans. More recently, both the European Parliament and the EU Commission have put the topic on their agenda. Interestingly, their positions seem to contradict each other – one in favour of, one against creating new instruments of protection for AI-generated output. This and the rising debate in legal scholarship (with equally contradictory positions) invite more analysis. A closer look at the doctrinal foundations and economic underpinnings of ‘work without author’ and ‘invention without inventor’ scenarios reveals that neither the law as it stands nor scholarly debate is currently up to the challenges posed by AI creativity and inventiveness.

Cihon, Maas & Kemp on Whether Artificial Intelligence Governance Should Be Centralised

Peter Cihon (Center for the Governance of AI, Future of Humanity Institute, University of Oxford), Matthijs M. Maas (CSER, Cambridge, University of Copenhagen CECS) and Luke Kemp (ANU Fenner School of Environment and Society) have posted “Should Artificial Intelligence Governance Be Centralised? Design Lessons from History” (Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY: ACM, 2020, pp. 228–234) on SSRN. Here is the abstract:

Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs, and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now, fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.

Lee & Phang on Civil Liability for Misuse of Private Information

Jack Tsen-Ta Lee and Phang Hsiao Chung (Attorney-General’s Chambers, Singapore) have posted “Report on Civil Liability for Misuse of Private Information” (Simon Constantine (ed), Singapore: Law Reform Committee, Singapore Academy of Law, 2020) on SSRN. Here is the abstract:

This report issued by the Law Reform Committee of the Singapore Academy of Law considers whether the existing legal protections from the disclosure and serious misuse of private information in Singapore are sufficient and effective.

At present, while various protections for victims of such misuse and related breaches of privacy exist, these derive from an assortment of different statutory and common law causes of action (for example, suing for intentional infliction of emotional distress, private nuisance and/or breach of confidence, or bringing claims under the Personal Data Protection Act or Protection from Harassment Act). This patchwork of laws – several of which were designed primarily to address matters other than misuse of private information – not only risks making the law more difficult for victims to navigate, but also risks some instances of serious misuse of private information not being effectively provided for, and those affected finding themselves with no real recourse or remedy.

Given these shortcomings, it is submitted that a statutory tort of misuse of private information should be introduced.

(The draft bill annexed to the report was prepared by Phang Hsiao Chung, Deputy Registrar of the Supreme Court, in his capacity as a member of the Law Reform Committee.)

Recommended.

Goelzhauser, Kassow, and Rice on Supreme Court Case Complexity

Greg Goelzhauser (Utah State University – Department of Political Science), Benjamin Kassow (University of North Dakota-Department of Political Science and Public Administration), and Douglas Rice (University of Massachusetts Amherst – Department of Political Science) have posted “Measuring Supreme Court Case Complexity” (Journal of Law, Economics, and Organization, Forthcoming) on SSRN. Here is the abstract:

Case complexity is central to the study of judicial politics. The dominant measures of Supreme Court case complexity use information on legal issues and provisions observed post-decision. As a result, scholars using these measures to study merits stage outcomes such as bargaining, voting, separate opinion production, and opinion content introduce post-treatment bias and exacerbate endogeneity concerns. Furthermore, existing issue measures are not valid proxies for complexity. Leveraging information on issues and provisions extracted from merits briefs, we develop a new latent measure of Supreme Court case complexity. This measure maps onto the prevailing understanding of the underlying concept while mitigating inferential threats that hamper empirical evaluations. Our brief-based measurement strategy is generalizable to other contexts where it is important to generate exogenous and pre-treatment indicators for use in explaining merits decisions.