Hannah Bloch-Wehba (Texas A&M University School of Law; Yale ISP) has posted “Algorithmic Governance from the Bottom Up” (Brigham Young University Law Review, Forthcoming) on SSRN. Here is the abstract:
Artificial intelligence and machine learning are both a blessing and a curse for governance. In theory, algorithmic governance makes government more efficient, more accurate, and more fair. But the emergence of automation in governance also rests on public-private collaborations that expand both public and private power, aggravate transparency and accountability gaps, and create significant obstacles for those seeking algorithmic justice. In response, a nascent body of law proposes technocratic policy changes to foster algorithmic accountability, ethics, and transparency.
This Article examines an alternative vision of algorithmic governance, one advanced primarily by social and labor movements instead of technocrats and firms. The use of algorithmic governance in increasingly high-stakes settings has generated an outpouring of activism, advocacy, and resistance. This mobilization draws on the same concerns that animate budding policy responses. But social and labor movements offer an alternative source of constraints on algorithmic governance: direct resistance from the bottom up. These movements confront head-on the entanglement of economic power, racial hierarchy, and government surveillance.
Using three case studies, this Article explores how tech workers and social movements are resisting and mobilizing against technologies that expand surveillance and funnel wealth to the private sector. Each case study illustrates how the intermingling of state and private power has required movements to engage both within and outside firms to counteract the growing appeal of automation. Yet the dominant approaches to regulating the government’s uses of technology continue to afford a privileged role to private firms and elite institutions, sidelining movement demands. The fundamental challenge posed by these movements will be whether—and how—law and policy can accommodate demands for bottom-up control. This Article sketches a new vision for algorithmic accountability, with a more vibrant role for workers and for the public in determining how firms and government institutions work together.
Nydia Remolina (Singapore Management University – Centre for AI & Data Governance) has posted “The Role of Financial Regulators in the Governance of Algorithmic Credit Scoring” on SSRN. Here is the abstract:
The use of algorithmic credit scoring presents opportunities and challenges for lenders, regulators, and consumers. This paper analyzes the perils of the use of AI in lending: discrimination in lending markets that use algorithmic credit scoring; the limited control financial consumers have over the outcomes of AI models, given the current scope of data protection law and financial consumer protection law; the financial exclusion caused by the lack of data on traditionally excluded groups; regulatory arbitrage in lending markets; and the limited oversight of the use of alternative data for algorithmic credit scoring. I provide a comparative overview of the current approaches to algorithmic credit scoring in jurisdictions such as Kenya, the European Union, the United Kingdom, Hong Kong, Singapore, the United States, Australia, and Brazil, and argue that these approaches do not solve the problems illustrated. To address the problems of algorithmic credit scoring, effectively protect consumers as end users of these models, and thereby promote access to finance, this paper proposes a set of tools and solutions for financial regulators. First, to establish a testing supervisory process for algorithmic credit scoring models, which will effectively promote fair lending. Second, to create a right to know the outcomes of the algorithm, including opinion data and inferences, to promote digital self-determination; this solution empowers consumers affected by algorithmic credit scoring to verify and challenge the decisions made by the AI model. Third, to level the playing field between financial institutions and other lenders that use algorithmic credit scoring. Fourth, to use the regulatory sandbox as a controlled test environment in which lenders can generate data on traditionally excluded groups. And finally, to foster data sharing and data portability initiatives for credit scoring through open finance schemes in an environment controlled by the financial regulatory authority. Better algorithms, unbiased data, AI regulation, fair lending regulation, and AI governance guidelines do not solve the perils of using AI for creditworthiness assessment. By contrast, these proposals aim to solve the problems of algorithmic credit scoring in any jurisdiction.
Wulf A. Kaal (University of St. Thomas, Minnesota – School of Law) and Hayley Howe (Emerging Technology Association) have posted “Custody of Digital Assets” on SSRN. Here is the abstract:
The custody of digital assets plays an essential role in the evolution of the digital asset industry. Fully compliant legal custody solutions for digital assets increase legal certainty and mainstream investor confidence, which, in turn, help build markets in digital assets. As digital asset markets evolve, self-custody solutions help increase the decentralization of the digital asset market. This article examines the evolving custody solutions for digital assets.