Niblett & Yoon on AI and the Nature of Disagreement

Anthony Niblett (U Toronto Law) and Albert Yoon (same) have posted “A.I. and the Nature of Disagreement” on SSRN. Here is the abstract:

Some legal commentators – including ourselves – have been loudly optimistic about the power of artificial intelligence (AI) to improve litigation. These commentators argue that AI can provide clearer information, cutting through much of the complexity of the law and reducing frictions and disagreements between the parties. Further, the possibility of using AI to determine the outcomes of legal disputes has given rise to the concept of “robot judges” in legal scholarship.

But in this paper, we argue that much of this literature fails to fully appreciate what litigated disputes are really about. Litigants may disagree about the facts of the case, the applicable rules, or how the rules apply to the facts. These disagreements are often complex and intertwined.

We contend that AI tools may be limited in their ability to resolve litigated disputes because these tools often address only one type of disagreement, leaving others unresolved. The optimistic view of AI in litigation assumes that parties disagree mainly about the likelihood of winning or the size of damages awards for a given set of agreed facts. But we question whether litigation is really fueled by such disagreements.

Our main takeaway is that if litigation is driven by disagreements over the facts or which rules should govern, AI’s capacity to reduce disagreement may fall short of what some proponents of AI claim. We call for more empirical and theoretical work to explore what litigants actually disagree about to better assess the likely impact of algorithmic decision-making in legal systems.

Garon on Ethics 3.0 – Attorney Responsibility in the Age of Generative AI

Jon Garon (Nova Law) has posted “Ethics 3.0 – Attorney Responsibility in the Age of Generative AI” (The Business Lawyer, Am. Bar Ass’n, Vol. 79, Winter 2023–2024) on SSRN. Here is the abstract:

A lawyer’s duty to remain competent and diligent in light of technological change begins with the Model Rules, but it must extend to the substantive relevant law. This article focuses on the obligations of client confidentiality, the duty to understand cybersecurity, the need to exploit the new technologies of generative AI and the metaverse with caution, and the need to communicate in a permissible manner. These are all key obligations under the ABA Model Rules of Professional Conduct related to the use of technology. The Model Rules provide a normative guideline that goes beyond the technical requirements for minimum competency and may supply standards for professional malpractice liability and other legal standards, but they are only a start. To fully understand the scope of the lawyer’s duty regarding technology, the practitioner must also look to state and federal regulations, including HIPAA data privacy and security rules, digital exportation under the Export Administration Act and the International Traffic in Arms Regulations, state consumer privacy laws, the FTC Guides Concerning the Use of Endorsements and Testimonials in Advertising and similar truth-in-advertising obligations, and more.

Mills on A Contractual Approach to Social Media Governance

Gilad Mills (Harvard Law School) has posted “A Contractual Approach to Social Media Governance” (Yale Law & Policy Review, Vol. 42, Forthcoming) on SSRN. Here is the abstract:

The heated scholarly debate in recent years around social media governance has been dominated by a clear public law bias and has yielded a substantively incomplete analysis of the issues at hand. Captured by public law analogies that depict platforms as governors who perform legislative, administrative, and adjudicatory functions, scholars and policymakers have repeatedly turned to public law norms as the hook on which they hang proposed governance solutions. As a practical strategy, they have either called for imposing public law norms by way of regulatory intervention or, conversely, called on platforms to adopt them voluntarily. This approach to social media governance, however, has met with limited success, stymied by political deadlocks, constitutional constraints, and platforms’ commercial preferences. At the same time, private law has been broadly overlooked as a potentially superior source of governance norms for social media, while the potential role the judiciary could play in generating these norms has been seriously discounted or even ignored altogether.

This Article tackles this blind spot in the current scholarship and thinking, offering a novel, comprehensive contractual approach to social media governance. Applying relational contract theory to social media contracting, it lays out the normative underpinnings for subjecting platforms to contractual duties of fairness and diligence, from which, it argues, governance norms can and should be derived. It also provides a doctrinal analysis to equip courts and litigators with the practical tools for holding platforms liable when such contractual duties are breached. Finally, to mitigate concerns about judicial over-encroachment on platforms’ decision-making, the Article offers a pragmatic remedial approach that prefers equitable remedies to damages and adopts a deferential standard of review – a “platform judgment rule” – that would insulate platforms from judicial scrutiny so long as they uphold their “best-efforts” commitments to conduct informed, unbiased content moderation in good faith, and to refrain from grossly misusing personal data.

Solow-Niederman on AI Standards and Politics

Alicia Solow-Niederman (George Washington Law) has posted “Can AI Standards Have Politics?” (71 UCLA L. Rev. Disc. 2 (forthcoming)) on SSRN. Here is the abstract:

How to govern a technology like artificial intelligence (AI)? When it comes to designing and deploying fair, ethical, and safe AI systems, standards are a tempting answer. By establishing the best way of doing something, standards might seem to provide plug-and-play guardrails for AI systems that avoid the costs of formal legal intervention. AI standards are all the more tantalizing because they seem to provide a neutral, objective way to proceed in a normatively contested space. But this vision of AI standards blinks a practical reality. Standards do not appear out of thin air. They are constructed. This Essay analyzes three concrete examples from the European Union, China, and the United States to underscore how standards are neither objective nor neutral. It thereby exposes an inconvenient truth for AI governance: standards have politics. Yet recognizing that standards are crafted by actors who make normative choices in particular institutional contexts, subject to political and economic incentives and constraints, may undermine the functional utility of standards as soft law regulatory instruments that can set forth a single, best formula to disseminate across contexts.

Gans on How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption

Joshua S. Gans (U Toronto – Rotman; NBER) has posted “How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption” on SSRN. Here is the abstract:

This paper examines recent proposals and research suggesting that AI adoption should be delayed until its potential harms are properly understood. It shows that conclusions about the social optimality of delayed AI adoption are sensitive to assumptions about the process by which regulators learn about the salience of particular harms. When such learning is by doing — based on the real-world adoption of AI — this generally favours acceleration of AI adoption to surface and react to potential harms more quickly. The case for acceleration is strengthened when AI adoption is potentially reversible. The paper examines how different conclusions regarding the optimality of accelerated or delayed AI adoption influence and are influenced by other policies that may moderate AI harm.
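The abstract’s learning-by-doing logic can be made concrete with a toy simulation. The sketch below is my own construction, not the paper’s model: the benefit, harm, prior, and detection parameters are all illustrative assumptions. It compares immediate and delayed adoption when a harm can be detected only while the technology is actually deployed.

```python
import random

# Toy illustration (not Gans's model). Assumption: a harmful technology
# is detected only while deployed ("learning by doing"), so no
# information arrives during a period of delay.

BENEFIT = 1.0      # per-period social benefit of adoption (assumed)
HARM = 3.0         # per-period social harm if the technology is harmful
P_HARMFUL = 0.3    # prior probability that the technology is harmful
DETECT = 0.5       # per-period chance deployment reveals a harm
HORIZON = 20       # total periods considered
TRIALS = 100_000   # Monte Carlo runs


def run(delay: int, reversible: bool) -> float:
    """Welfare from one simulated run: adopt after `delay` periods.

    Because learning is by doing, delay only shortens the deployment
    horizon without generating any evidence about harms.
    """
    harmful = random.random() < P_HARMFUL
    deployed = True
    total = 0.0
    for _ in range(HORIZON - delay):
        if deployed:
            total += BENEFIT - (HARM if harmful else 0.0)
            # Deployment generates evidence; if adoption is reversible,
            # a detected harm lets the regulator roll it back.
            if harmful and reversible and random.random() < DETECT:
                deployed = False
    return total


def expected_welfare(delay: int, reversible: bool) -> float:
    return sum(run(delay, reversible) for _ in range(TRIALS)) / TRIALS


for label, delay, rev in [("adopt now, reversible", 0, True),
                          ("adopt now, irreversible", 0, False),
                          ("delay 5, reversible", 5, True),
                          ("delay 5, irreversible", 5, False)]:
    print(f"{label:25s} expected welfare: {expected_welfare(delay, rev):6.2f}")
```

Under this stylized setup, delaying adoption shortens the benefit horizon without producing any information, so immediate adoption yields higher expected welfare, and the margin widens when adoption can be reversed once a harm is detected, consistent with the comparative statics the abstract describes.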