Themeli & Philipsen on AI as the Court: Assessing AI Deployment in Civil Cases

Erlis Themeli (Erasmus University Rotterdam) and Stefan Philipsen have posted “AI as the Court: Assessing AI Deployment in Civil Cases” (K. Benyekhlef (ed), AI and Law. A Critical Overview, Éditions Thémis 2021, p. 213-232) on SSRN. Here is the abstract:

Lawyers and some branches of government use artificial intelligence-based programs to make decisions and develop their business strategies. In addition, several research teams have developed AI programs that are able to predict court decisions. Similar systems may be used in courts in the near future to administer files, support judges, or perhaps replace them. In this paper, we assess the possible consequences of deploying AI in European courts. The paper is divided into four main sections. First, we distinguish between AI in the court, where AI is used by the parties or the court administration, and AI as the court, where AI supports or replaces judges. Second, we categorise civil cases in Europe according to their complexity and level of conflict, suggesting that judges may be assisted by AI systems but cannot be replaced for complex, high-conflict cases. Third, we assess to what extent AI can replace judges and still meet the legal requirements following from (1) principles of access to justice such as accessibility, transparency, and accountability; and (2) the fundamental right to an effective remedy and the right to a fair trial as protected by the European Convention on Human Rights. Fourth, we conclude that under the current legal framework it is already feasible to replace judges with AI for non-complex, low-conflict cases. By contrast, using only AI to decide cases of higher complexity and conflict would threaten access to justice as well as the right to a fair trial. We also recognise that, in the future, increasing use of AI in courts will challenge our traditional understanding of concepts like access to justice and the right to a fair trial. That understanding will be shaped by the perception court users have of AI as the court.

Recommended. See also “The Impact of Artificial Intelligence on Rules, Standards, and Judicial Discretion” referenced on this blawg.

Kilovaty on Psychological Data Breach Harms

Ido Kilovaty (University of Tulsa College of Law, Yale University – Law School) has posted “Psychological Data Breach Harms” (North Carolina Journal of Law & Technology, Vol. 23, 2021) on SSRN. Here is the abstract:

Cybersecurity law, in both statute and case law, is primarily based on the premise that data breaches result exclusively in financial harms. Intuitively enough, legal scholarship has likewise focused largely on financial harms, to the exclusion of the emotional and mental harms that also arise from data breaches. There is now a critical mass of research showing that consumers whose information has been compromised suffer serious emotional and mental conditions as a result. This Article evaluates cybersecurity law in light of this reality and proposes a framework to address these psychological data breach harms.

Psychological harms arising from data breaches raise a plethora of significant challenges that the law does not adequately account for. Consumers suffering these harms are unlikely to pursue litigation, nor are they likely to prevail in it, for reasons of both standing and cause of action. In a similar vein, cybersecurity law frameworks such as the Computer Fraud and Abuse Act, data security laws, data breach notification laws, and FTC enforcement generally do not recognize harms that are non-monetary in nature. Moreover, companies suffering data breaches are not legally required to offer any assistance or mitigation to consumers who may suffer psychological harms. Compounding these challenges, breached companies are often not even required to disclose breaches that are unlikely to cause future financial harm.

This Article offers a legal and conceptual framework for psychological data breach harms, which cybersecurity law currently overlooks. First, this Article argues for the recognition of psychological data breach harms within the cybersecurity process, from the very outset. Second, this Article makes concrete recommendations on how psychological data breach harms ought to be addressed by regulators and breached entities alike, and what the appropriate remedies are. Third, this Article calls for a reconsideration of what we mean by “personal information,” and for an expansion of the categories of information that cybersecurity law protects.

Schwarcz on Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data

Daniel Schwarcz (Minnesota Law) has posted “Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data” (Houston Journal of Health Law and Policy, 2021) on SSRN. Here is the abstract:

Insurers and employers often have financial incentives to discriminate against people who are relatively likely to experience future healthcare costs. Numerous federal and state laws nonetheless seek to restrict such health-based discrimination. Examples include the Pregnancy Discrimination Act (PDA), the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), and the Genetic Information Non-Discrimination Act (GINA). But this Essay argues that these laws are incapable of reliably preventing health-based discrimination when employers or insurers rely on machine-learning AIs to inform their decision-making. At bottom, this is because machine-learning AIs are inherently structured to identify and rely upon proxies for traits that directly predict whatever “target variable” they are programmed to maximize. Because the future health status of employees and insureds is in fact directly predictive of innumerable facially neutral goals for employers and insurers respectively, machine-learning AIs will tend to produce results similar to those of intentional discrimination based on health-related factors. Although laws like the Affordable Care Act (ACA) can avoid this outcome by prohibiting all forms of discrimination that are not pre-approved, this approach is not broadly applicable. Complicating the issue even further, virtually all technical strategies for developing “fair algorithms” are unworkable for health-based proxy discrimination, because health information is generally private and hence cannot be used to correct unwanted biases. The Essay nonetheless closes by suggesting a new strategy for combatting health-based proxy discrimination by AI: limiting firms’ capacity to program their AIs using target variables that have a strong possible link to health-related factors.
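The proxy mechanism at the heart of the Essay is easy to see in miniature. The sketch below is my own hedged illustration, not Schwarcz’s model: it uses invented synthetic data and scikit-learn, and every variable name and coefficient is hypothetical. A classifier is trained only on facially neutral features, yet its risk scores track the hidden health trait because those features are correlated with it.

```python
# Illustrative sketch of health-based proxy discrimination (synthetic data).
# The model never sees health status, but its predictions track it anyway
# through correlated, facially neutral features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hidden trait: likelihood of future health problems (never an input).
poor_health = rng.binomial(1, 0.2, n)

# Facially neutral features assumed, for illustration, to correlate
# with health status.
pharmacy_spend = 2.0 * poor_health + rng.normal(0, 1, n)
sick_days = 1.5 * poor_health + rng.normal(0, 1, n)
unrelated = rng.normal(0, 1, n)

# Target variable: high expected cost, driven directly by health status.
high_cost = (3.0 * poor_health + rng.normal(0, 1, n) > 1.5).astype(int)

X = np.column_stack([pharmacy_spend, sick_days, unrelated])
model = LogisticRegression().fit(X, high_cost)

# Predicted risk is much higher for the poor-health group even though
# health status was never an input: the proxies carry the signal.
risk = model.predict_proba(X)[:, 1]
print(f"mean predicted risk, poor health: {risk[poor_health == 1].mean():.2f}")
print(f"mean predicted risk, good health: {risk[poor_health == 0].mean():.2f}")
```

Note, too, why the standard “fair algorithm” fixes fail here, on Schwarcz’s account: correcting the gap between the two groups would require the very health data that privacy law keeps out of reach.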

Chason on Smart Contracts and the Limits of Computerized Commerce

Eric D. Chason (William & Mary Law School) has posted “Smart Contracts and the Limits of Computerized Commerce” (Nebraska Law Review, Vol. 99, p. 330, 2020) on SSRN. Here is the abstract:

Smart contracts and cryptocurrencies have sparked considerable interest among legal scholars in recent years, and a growing body of scholarship focuses on whether smart contracts and cryptocurrencies can sidestep law and regulation altogether. Bitcoin is famously decentralized, without any central actor controlling the system. Its users remain largely anonymous, using alphanumeric addresses instead of legal names. Ethereum shares these traits and also supports smart contracts that can automate the transfer of the cryptocurrency. Ethereum also supports specialized “tokens” that can be tied to the ownership of assets, goods, and services that exist completely outside of the Ethereum blockchain. By some accounts, cryptocurrencies and smart contracts will revolutionize private law. Some argue they have the potential to displace contract and property law. In this Article, I will argue that a complete revolution is not inexorable. In the face of this subject’s technical and complicated nature, we should keep a simple fact in mind: cryptocurrencies and smart contracts are computer data and computer programs. To a large extent, they will have legal force only if given force by judges, regulators, and legislators.
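Chason’s closing point, that smart contracts are at bottom computer data and computer programs, can be made concrete with a toy example. The sketch below is plain Python, not Solidity or anything from the actual Ethereum ecosystem, and the class and addresses are invented for illustration. The “contract” is nothing more than a data structure plus rules enforced in code; whether its state changes carry legal force is exactly the question Chason says is left to judges, regulators, and legislators.

```python
# A toy, illustrative "token contract" in Python (not real Ethereum code):
# the entire "agreement" is just data (a balances dict) plus a program
# that enforces transfer rules.
class ToyToken:
    def __init__(self, issuer: str, supply: int):
        # All ownership records are plain data keyed by address strings.
        self.balances: dict[str, int] = {issuer: supply}

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # The "contract terms" are nothing more than these checks.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

token = ToyToken(issuer="0xAlice", supply=100)
token.transfer("0xAlice", "0xBob", 40)
print(token.balances)  # {'0xAlice': 60, '0xBob': 40}
```

The code will faithfully update its own ledger, but nothing in it can, by itself, convey title to an off-chain asset or bind a court.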

Recommended.