Michelle Lyon Drumbl (Washington and Lee University School of Law) has posted “#Audited: Social Media and Tax Enforcement” (Oregon Law Review, Forthcoming) on SSRN. Here is the abstract:
With limited resources and a diminished budget, it is not surprising that the Internal Revenue Service would seek new tools to maximize its enforcement efficiency. Automation and technology provide new opportunities for the IRS, and in turn, present new concerns for taxpayers. In December 2018, the IRS signaled its interest in a tool to access publicly available social media profiles of individuals in order to “expedite IRS case resolution for existing compliance cases.” This has important implications for taxpayer privacy.
Moreover, the use of social media in tax enforcement may pose a particular harm to an especially vulnerable population: low-income taxpayers. Social science research shows us that the poor are already over-surveilled, and researchers have identified various ways in which algorithmic screening and data mining can result in discrimination. What, then, are the implications of social media mining in the context of tax enforcement, especially given that the IRS already audits the poor at a rate similar to that at which it audits the highest-earning individuals? How can these concerns be reconciled with the need for tax enforcement?
This article questions the appropriateness of the IRS further automating its enforcement tactics in ways that may harm already vulnerable individuals, makes proposals to balance the use of any such tactics with respect for taxpayer rights, and considers how tax lawyers should advise their clients in an era of diminishing privacy.
Monika Zalnieriute (University of New South Wales – Faculty of Law) has posted “‘Transparency-Washing’ in the Digital Age: A Corporate Agenda of Procedural Fetishism” (Critical Analysis of Law, 8(1) 2021, Forthcoming) on SSRN. Here is the abstract:
Contemporary discourse on the regulation and governance of the digital environment has often focused on the procedural value of transparency. This article traces the prominence of the concept of transparency in contemporary regulatory debates to the corporate agenda of technology companies. Looking at the latest transparency initiatives of IBM, Google and Facebook, I introduce the concept of “transparency-washing,” whereby a focus on transparency acts as an obfuscation and redirection from more substantive and fundamental questions about the concentration of power and the substantial policies and actions of technology behemoths. While the “ethics-washing” of the tech giants has become widely acknowledged, “transparency-washing” presents a wider critique of corporate discourse and neoliberal governmentality based on procedural fetishism, which detracts from the questions of substantial accountability and obligations by diverting attention to procedural micro-issues that have little chance of changing the political or legal status quo.
Giorgio Monti (Tilburg Law and Economics Center) has posted “The Digital Markets Act – Institutional Design and Suggestions for Improvement” on SSRN. Here is the abstract:
The Digital Markets Act (DMA) is a major policy initiative to regulate platform gatekeepers in a more systematic manner than under competition law. This paper reflects on the institutional setup in the Commission proposal. While the DMA is well-designed, this paper recommends improvement in the following aspects: (i) matching the DMA’s objectives with obligations imposed on gatekeepers; (ii) facilitating co-regulation; (iii) streamlining the enforcement pyramid; (iv) emphasising the role of private enforcement; (v) clarifying the role of competition law.
Jeffrey Ritter has posted “Digital Justice in 2058: Trusting Our Survival to AI, Quantum and the Rule of Law” (8 J. Int’l & Comparative Law __ (2021)) on SSRN. Here is the abstract:
As legal scholarship on the interactions among artificial intelligence (AI) and the rule of law advances, quantum computing is rapidly moving from scientific theory into reality, offering unprecedented potential for what AI will accomplish. To anticipate what the rule of law will offer when quantum becomes real, Part I introduces a future reality in which a new machine-based legal system, quantum law, governs humankind.
Time travelling forward to 2058, the centennial birthday of the Internet, Part II surveys the condition of the world, in which the rule of law serves an essential purpose—to extend the survival of humankind. Part III offers the text of an imagined keynote address in that year, describing the foundations on which justice has evolved and quantum law is administered.
Part IV concludes by challenging custodians of the law to think differently about how to fit law and technology together, while still preserving and advancing the humane values cherished as principles of the rule of law today—compassion, forgiveness, redemption, equality and fairness.
Straton Papagianneas (Leiden University, Leiden Institute for Area Studies) has posted “Automated Justice and Fairness in the PRC” on SSRN. Here is the abstract:
The digitalisation and automation of the judiciary, also known as judicial informatisation (司法信息化), has been ongoing for two decades in China. The latest development is the emergence of “smart courts” (智慧法院), which are part of the Chinese party-state’s efforts to reform and modernise its governance capacity. These are legal courts where the judicial process is fully conducted digitally, and judicial officers make use of technological applications sustained by algorithms and big-data analytics. The end goal is to create a judicial decision-making process that is fully conducted in an online judicial ecosystem where the majority of tasks are automated and opportunities for human discretion or interference are minimal.
This article asks how automation and digitalisation satisfy procedural fairness in the PRC. First, it discusses the Chinese conception of judicial fairness through a literature review. It finds that the utilitarian conception of fairness is a reflection of the inherently legalist and instrumentalist vision of law. This, in turn, also influences the way innovations, such as judicial automation, are assessed. Then, it contextualises the policy of ‘building smart courts’, launched in 2017, which aimed to automate and digitalise large parts of the judicial process. The policy is part of a larger reform drive that aims to recentralise power and standardise decision-making. Next, it discusses how automation and digitalisation have changed the judicial process, based on a reading of court and media reports of technological applications. The final section analyses the implications of automation and digitalisation for judicial fairness in the PRC.
The article argues that, within the utilitarian conceptualisation of justice and law, automated justice can indeed be considered fair because it improves the quality of procedures to the extent that they facilitate the achievement of the political goals of judicial reform and the judiciary in general.
Juliet M. Moringiello (Widener University – Commonwealth Law School) has posted “Automating Repossession” on SSRN. Here is the abstract:
Imagine if you bought a refrigerator from BestBuy on credit and BestBuy reserved the right to disable that refrigerator remotely if you failed to pay. This is not a future fantasy; subprime car lenders have been doing something similar for two decades. Many goods are connected to networks that allow the seller of the goods to retain some measure of control over them. These “smart goods” pose several challenges to the law, notably to the rules that govern creditors’ remedies when the owner of smart goods collateral defaults on the loan secured by such collateral. A creditor with a security interest in smart goods has the technological capacity to disable such goods remotely upon the borrower’s default.
Automating Repossession addresses a question that has no clear answer in commercial law – does a creditor have the right to remotely disable collateral upon its debtor’s default? As physical goods are increasingly connected to online networks in ways that allow their sellers to control their use, it is possible for secured lenders to deploy a remote and automated repossessor to disable tangible collateral in the event of a borrower’s default. Article 9 of the Uniform Commercial Code (UCC), which allows a secured creditor to repossess collateral upon its debtor’s default without resorting to the courts only if it can do so without a breach of the peace, does not address this practice. A handful of states have responded to the use of remote disablement by enacting amendments to their versions of Article 9 of the UCC or to their statutes aimed more specifically at consumer protection. In the vast majority of U.S. jurisdictions, the law is silent as to whether a remote disablement is equivalent to a self-help repossession and thus imposes no limitations on its use.
This paper recognizes that remote disablement should be a permissible creditor remedy in the UCC and proposes appropriate limitations on its use. To craft appropriate limitations, the article explores the history of the breach of the peace standard in repossessions involving physical contact. Rejecting that standard for automated repossessions, the article draws from contractual, legislative, and judicial sources to suggest limitations on remote disablement that address the unique harms caused by that remedy. Those sources include contracts governing remote disablement in the subprime automobile lending industry, the handful of existing laws governing the practice, and the restrictions on self-help remedies in the laws governing physical repossessions such as evictions, digital disablement of computer software, and remedies that cross the digital-physical divide in satellite financing. The article concludes by considering the interests that might be violated when a creditor crosses the digital-physical divide to remotely disable physical collateral and makes recommendations about how the UCC should address remote disablement as a creditor remedy.
Mark Nitzberg (University of California, Berkeley) and John Zysman (University of California, Berkeley) have posted “Algorithms, Data, and Platforms: The Diverse Challenges of Governing AI” (Journal of European Public Policy) on SSRN. Here is the abstract:
Artificial Intelligence (AI) poses interwoven challenges. Defined as technology that uses advanced computation to perform at human cognitive capacity in some task area, AI must be regulated in the context of its broader toolbox – algorithms, data and platforms – and its regulation must be sector-specific. Establishing national and community priorities on how to reap AI’s benefits, while managing its social and economic risks, is an evolving debate. Digital Platform Firms are a fundamental driver of AI tools: they dominate the playing field and often pursue priorities outside the frames of the public sector and of civil society. While its governance is critical to national success, AI pries open a Pandora’s box of questions that sweep across the economy and society, engaging diverse communities. Rather than a single, global ethical framework, one must consider how to pursue objectives of interoperability amongst nations with quite different political economies.
Frank A. Pasquale (Brooklyn Law School) has posted “The Resilient Fragility of Law” (Foreword to “Is Law Computable?: Critical Perspectives on Law and Artificial Intelligence” (Simon Deakin & Christopher Markou, eds., Hart Publishing, 2020)) on SSRN. Here is the abstract:
Are current legal processes computable? Given known limitations of computing likely to continue into the near and medium-term future, the answer for all but the simplest processes is: no. Should they become more computable? Some processes could benefit from further algorithmatization, statistical analysis, and quantitative valuation, but context is critical. For reductionist projects in computational law and legal automation (particularly those that seek to replace, rather than complement, legal practitioners), traces of the legal process are all too often mistaken for the process itself. The words in a complaint and an opinion, for instance, are taken to be the essence of the proceeding, and variables gleaned from decisionmakers’ past actions and affiliations are further used to predict their future actions. Such behavioristic approaches undervalue the resilient fragility of law—that is, the capacity of persons and institutions to creatively interpret language, reframe disputes, and find new patterns of cooperation. In diverse ways, the chapters in this volume reclaim and revalue law’s resilient fragility, identifying labor and judgment as the irreplaceable center of a humane legal system.
Jasbir Khalsa (Microsoft Corporation) has posted “Freedom of Expression and Human Dignity in the Age of Artificial Intelligence” on SSRN. Here is the abstract:
Cambridge Analytica exposes possible gaps in legal protection as it relates to certain human rights and the use of personal data to offer ‘free’ technology. This article discusses Freedom of Expression and Human Dignity under the Charter of Fundamental Rights of the European Union. This article explores how the Charter can be applied to technology and to private parties like Facebook or Cambridge Analytica, holding such private parties accountable for violations of human rights.
Iris H-Y Chiu (University College London – Faculty of Laws, ECGI) and Ernest Lim (National University of Singapore (NUS) – Faculty of Law) have posted “Managing Corporations’ Risk in Adopting Artificial Intelligence: A Corporate Responsibility Paradigm” (Washington University Global Studies Law Review (forthcoming)) on SSRN. Here is the abstract:
Machine learning (ML) raises issues of risk for corporate and commercial use that are distinct from the legal risks involved in deploying robots that may be more deterministic in nature. Such issues of risk relate to what data is being input for the learning processes for ML, the risks of bias, and hidden, sub-optimal assumptions; how such data is processed by ML to reach its ‘outcome,’ leading sometimes to perverse results such as unexpected errors, harm, difficult choices, and even sub-optimal behavioural phenomena; and who should be accountable for such risks. While extant literature provides rich discussion of these issues, there are only emerging regulatory frameworks and soft law in the form of ethical principles to guide corporations navigating this area of innovation.
This article focuses on corporations that deploy ML, rather than on producers of ML innovations, in order to chart a framework for guiding strategic corporate decisions in adopting ML. We argue that such a framework necessarily integrates corporations’ legal risks and their broader accountability to society. The navigation of ML innovations is not carried out within a ‘compliance landscape’ for corporations, given that the laws and regulations governing corporations’ use of ML are yet emerging. Corporations’ deployment of ML is being scrutinised by the industry, stakeholders, and broader society as governance initiatives are being developed in a number of bottom-up quarters. We argue that corporations should frame their strategic deployment of ML innovations within a ‘thick and broad’ paradigm of corporate responsibility that is inextricably connected to business-society relations.