Polle et al. on AI Standards: Thought-Leadership in AI Legal, Ethical and Safety Specifications Through Experimentation

Roseline Polle (University College London) and others have posted “Towards AI Standards: Thought-Leadership in AI Legal, Ethical and Safety Specifications Through Experimentation” on SSRN. Here is the abstract:

With the rapid adoption of algorithms in business and society, there is growing concern about safeguarding the public interest. Researchers, policy-makers and industry sharing this view convened to collectively identify future areas of focus in order to advance AI standards, in particular the acute need to ensure proposed standards are practical and empirically informed. This discussion occurred in the context of the creation of a lab at UCL with these concerns in mind (currently dubbed the UCL Algorithms Standards and Technology Lab). Via a series of panels with the main stakeholders, three themes emerged, namely (i) Building public trust, (ii) Accountability and Operationalisation, and (iii) Experimentation. To advance these themes, lab activities will fall under three streams: experimentation, community building and communication. The Lab’s mission is to provide thought-leadership in AI standards through experimentation.

Solow-Niederman on Information Privacy and the Inference Economy

Alicia Solow-Niederman (Harvard Law School) has posted “Information Privacy and the Inference Economy” on SSRN. Here is the abstract:

Information privacy is in trouble. Contemporary information privacy protections emphasize individuals’ control over their own personal information. But machine learning, the leading form of artificial intelligence, facilitates an inference economy that strains this protective approach past its breaking point. Machine learning provides pathways to use data and make probabilistic predictions—inferences—that are inadequately addressed by the current regime. For one, seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection. Moreover, it is possible to aggregate myriad individuals’ data within machine learning models, identify patterns, and then apply the patterns to make inferences about other people who may or may not be part of the original data set. The inferential pathways created by such models shift away from “your” data, and towards a new category of “information that might be about you.” And because our law assumes that privacy is about personal, identifiable information, we miss the privacy interests implicated when aggregated data that is neither personal nor identifiable can be used to make inferences about you, me, and others.

This Article contends that accounting for the power and peril of inferences requires reframing information privacy governance as a network of organizational relationships to manage—not merely a set of data flows to constrain. The status quo magnifies the power of organizations that collect and process data, while disempowering the people who provide data and who are affected by data-driven decisions. It ignores the triangular relationship among collectors, processors, and people and, in particular, disregards the co-dependencies between organizations that collect data and organizations that process data to draw inferences. It is past time to rework the structure of our regulatory protections. This Article provides a framework to move forward. Accounting for organizational relationships reveals new sites for regulatory intervention and offers a more auspicious strategy to contend with the impact of data on human lives in our inference economy.

Simon-Kerr on Credibility in an Age of Algorithms

Julia Ann Simon-Kerr (University of Connecticut School of Law) has posted “Credibility in an Age of Algorithms” (Rutgers Law Review, Forthcoming) on SSRN. Here is the abstract:

Evidence law has a “credibility” problem. Artificial intelligence creators will soon be marketing tools for assessing credibility in the courtroom. Yet, although credibility is a vital concept in the U.S. legal system, there is deep ambiguity within the law about its function. American jurisprudence assumes that impeachment evidence tells us about a witness’s propensity for truthfulness. Yet this same jurisprudence focuses fact-finders on external qualities that are probative of a witness’s worthiness of belief but not of the risk that they will lie. Without a clear understanding of what credibility in the legal system is or should be, the terms of engagement will be set by the creators of algorithms in accordance with their interests.

This article focuses on the two main paradigms within current credibility jurisprudence as a guide to thinking about how algorithms might be brought to bear on legal credibility. It does this by analogy to two existing algorithmic products. One is the U.S. credit scoring system. The other is China’s experiment with a “social credit” scoring system. These examples reflect the actual and purported function of credibility in the law in ways that are revealing both for current practice and as we contemplate the credibility of the future.