Choi on Software Professionals, Malpractice Law, and Codes of Ethics

Bryan H. Choi (Ohio State University (OSU) – Michael E. Moritz College of Law; Information Society Project, Yale Law School) has posted “Software Professionals, Malpractice Law, and Codes of Ethics” (Communications of the ACM 2021) on SSRN. Here is the abstract:

We all know what a professional is—or do we? For years, ACM has proclaimed that its members are part of a computing profession. But is it really a profession? Many people describe themselves as “professionals” in the colloquial sense of being paid to perform some specialized skill. Yet, only a few occupations are regarded as professions in the legal sense. Courts do not consider athletes or chefs to be professionals the way doctors and lawyers are. Likewise, courts have consistently excluded software developers from that select group.

To understand why U.S. law does not recognize computing as a profession—and whether that classification could be changed—calls for a fresh look at the law of professions. Why does the law distinguish professionals from nonprofessionals such as mechanics or pilots? What would happen if courts treated software developers like doctors or lawyers? What are professionals’ legal duties of care and how do they differ from ethical codes of conduct? Can one bootstrap the other?

Much of the computing community has assumed that a more robust commitment to ethics is a prerequisite for legal recognition as a profession. That assumption is exactly backward. Professional malpractice law is needed to catalyze a robust code of ethics. The lesson is this: the best way for ACM’s Code of Ethics to make a meaningful difference in changing software development practices is for courts to recognize software as a profession.

Chen, Stremitzer & Tobia on Having Your Day in Robot Court

Benjamin Minhao Chen (The University of Hong Kong – Faculty of Law), Alexander Stremitzer (ETH Zurich), and Kevin Tobia (Georgetown University Law Center; Georgetown University – Department of Philosophy) have posted “Having Your Day in Robot Court” on SSRN. Here is the abstract:

Should machines be judges? Some balk at this possibility, holding that ordinary citizens would see a robot-led legal proceeding as procedurally unfair: To have your “day in court” is to have a human hear and adjudicate your claims. Two original experiments assess whether laypeople share this intuition. We discover that laypeople do, in fact, see human judges as fairer than artificially intelligent (“AI”) robot judges: All else equal, there is a perceived human-AI “fairness gap.” However, it is also possible to eliminate the fairness gap. The perceived advantage of human judges over AI judges is related to perceptions of accuracy and comprehensiveness of the decision, rather than “softer” and more distinctively human factors. Moreover, the study reveals that laypeople are amenable to “algorithm offsetting.” Adding an AI hearing and increasing the AI interpretability reduces the perceived human-AI fairness gap. Ultimately, the results support a common challenge to robot judges: there is a concerning human-AI fairness gap. Yet, the results also indicate that the strongest version of this challenge — human judges have inimitable procedural fairness advantages — is not reflected in the views of laypeople. In some circumstances, people see a day in robot court as no less fair than a day in human court.

Normann & Sternberg on Hybrid Collusion: Algorithmic Pricing in Human-Computer Laboratory Markets

Hans-Theo Normann (Heinrich Heine University Düsseldorf – Department of Economics; Max Planck Institute for Research on Collective Goods) and Martin Sternberg (MPI for Research on Collective Goods, Bonn) have posted “Hybrid Collusion: Algorithmic Pricing in Human-Computer Laboratory Markets” on SSRN. Here is the abstract:

We investigate collusive pricing in laboratory markets when human players interact with an algorithm. We compare the degree of (tacit) collusion when exclusively humans interact to the case of one firm in the market delegating its decisions to an algorithm. We further vary whether participants know about the presence of the algorithm. We find that three-firm markets involving an algorithmic player are significantly more collusive than human-only markets. Firms employing an algorithm earn significantly less profit than their rivals. For four-firm markets, we find no significant differences. (Un)certainty about the actual presence of an algorithm does not significantly affect collusion.

Burri & Trusilo on Ethical Artificial Intelligence

Thomas Burri (University of St. Gallen) and Daniel Trusilo (University of St. Gallen) have posted “Ethical Artificial Intelligence: An Approach to Evaluating Disembodied Autonomous Systems” (In: Rain Liivoja and Ann Valjataga (eds), Autonomous Cyber Capabilities in International Law, forthcoming 2021) on SSRN. Here is the abstract:

Building off our prior work on the practical evaluation of autonomous robotic systems, this chapter discusses how an existing framework can be extended to apply to autonomous cyber systems. It is hoped that such a framework can inform pragmatic discussions of ethical and regulatory norms in a proactive way. Issues raised by autonomous systems in the physical and cyber realms are distinct; however, discussions about the norms and laws governing these two related manifestations of autonomy can and should inform one another. Therefore, this paper emphasizes the factors that distinguish autonomous systems in cyberspace, labeled disembodied autonomous systems, from systems that physically exist in the form of embodied autonomous systems. By highlighting the distinguishing factors of these two forms of autonomy, this paper informs the extension of our assessment tool to software systems, bringing us into the legal and ethical discussions of autonomy in cyberspace.

Goldman on Content Moderation Remedies

Eric Goldman (Santa Clara University – School of Law) has posted “Content Moderation Remedies” (Michigan Technology Law Review, Forthcoming) on SSRN. Here is the abstract:

This Article addresses a critical but underexplored topic of “platform” law: if a user’s online content or actions violate the rules, what should happen next? The longstanding expectation is that Internet services should remove violative content or accounts from their services, and many laws mandate that result. However, Internet services have a wide range of other options—what I call “remedies”—they can use to redress content or accounts that violate the applicable rules. The Article describes dozens of remedies that Internet services have actually imposed. The Article then provides a normative framework to help Internet services and regulators navigate these remedial options to address the many difficult tradeoffs involved in content moderation. By moving past the binary remove-or-not remedy framework that dominates the current discourse about content moderation, this Article helps to improve the efficacy of content moderation, promote free expression, promote competition among Internet services, and improve Internet services’ community-building functions.

Dickinson on Textual Internet Immunity

Gregory M. Dickinson (Stanford Law School) has posted “Toward Textual Internet Immunity” (Stanford Law & Policy Review, Forthcoming) on SSRN. Here is the abstract:

Internet immunity doctrine is broken. Under Section 230 of the Communications Decency Act of 1996, online entities are absolutely immune from lawsuits related to content authored by third parties. The law has been essential to the internet’s development over the last twenty years, but it has not kept pace with the times and is now deeply flawed. Democrats demand accountability for online misinformation. Republicans decry politically motivated censorship. And Congress, President Biden, the Department of Justice, and the Federal Communications Commission all have their own plans for reform. Absent from the fray, however—until now—has been the Supreme Court, which has never issued a decision interpreting Section 230. That appears poised to change, however, following Justice Thomas’s statement in Malwarebytes v. Enigma in which he urges the Court to prune back decades of lower-court precedent to craft a more limited immunity doctrine. This Essay discusses how courts’ zealous enforcement of the early internet’s free-information ethos gave birth to an expansive immunity doctrine, warns of potential pitfalls to reform, and explores what a narrower, text-focused doctrine might mean for the tech industry.

Greenleaf on China’s Comprehensive Draft Data Privacy Law

Graham Greenleaf (University of New South Wales, Faculty of Law) has posted “China Issues a Comprehensive Draft Data Privacy Law” ((2020) 168 Privacy Laws & Business International Report 1, 6-10) on SSRN. Here is the abstract:

The long-anticipated Law of the People’s Republic of China on the Protection of Personal Information (Draft) (‘PPIL’) was released by the Standing Committee of the National People’s Congress (SC-NPC), the second-highest legislative body in China, on 21 October 2020. Its enactment will be the culmination of a decade-long evolution. The article analyses the draft PPIL and considers where it goes beyond the previous benchmark, the CyberSecurity Law (CSL) of 2016, and compares aspects of the EU’s GDPR.

The article concludes that, while detailed conclusions await enactment, some things are clear enough. China’s draft law is well within the normal global range of data privacy laws, shows many GDPR influences, and goes beyond the GDPR on some points. It goes further in many respects than the 2016 CSL and the 2017 PI Standard. The ‘enforcement toolkit’ is diverse, with ‘dissuasive’ sanctions, as the GDPR puts it. These apparently strong data privacy rights in the private sector must co-exist with a high level of government surveillance (including the ‘Social Credit’ system), but they are likely to be enforceable because China needs public trust in its e-commerce sector and aspects of its e-governance, so credible data privacy laws are necessary.

Other than the absence of a DPA (specialised, or independent), the most important departure from ‘European’ norms is that the data export restrictions are largely at the discretion of the CAC, with no objective criteria, and other forms of data localisation are similar. Multiple risk points for foreign and local companies will result.

For other countries attracted to ideologies of ‘data sovereignty’, the ‘Chinese model’ (explained in the article) may prove an attractive one to emulate. Internationally, this will fit uncomfortably with both the EU’s GDPR and US laissez-faire. Disputes before international trade forums are likely to result.

Coglianese on Administrative Law in the Automated State

Cary Coglianese (University of Pennsylvania Law School) has posted “Administrative Law in the Automated State” (Daedalus (Forthcoming)) on SSRN. Here is the abstract:

In the future, administrative agencies will rely increasingly on digital automation powered by machine learning algorithms. Can U.S. administrative law accommodate such a future? Not only might a highly automated state readily meet longstanding administrative law principles, but the responsible use of machine learning algorithms might perform even better than the status quo in terms of fulfilling administrative law’s core values of expert decision-making and democratic accountability. Algorithmic governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, algorithms might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to government by algorithm will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one.

Erdos on The UK and the EU Personal Data Framework After Brexit: Another Switzerland?

David Erdos (University of Cambridge – Faculty of Law; Trinity Hall) has posted “The UK and the EU Personal Data Framework After Brexit: Another Switzerland?” on SSRN. Here is the abstract:

The UK-EU Trade and Cooperation Agreement sets out a pathway for the UK to have the closest relationship on personal data with the EU outside of the European Economic Area (EEA) and Switzerland. This is principally apparent in the area of justice and security where there is very extensive provision for data exchange including DNA and fingerprints. This exchange rests on specified common standards and will likely be complemented by the first ever EU adequacy agreement under the Law Enforcement Directive. In some contrast, understandings in the general area of data protection (at least outside direct marketing) point only to mutual adequacy. Whilst mandating “essentially equivalent” (GDPR, recital 104) protection, significant flexibility is retained. Given the UK’s distinct approach to data protection, the EU may find that it adopts a more divergent approach in the medium term than, for example, Switzerland. Bona fide implementation of the Council of Europe’s Data Protection Convention 108+ may provide a good lodestar for a more graduated regime which also seeks to clearly reconcile data protection with competing rights. The paper tentatively examines what that might entail for the proactive transparency rules, sensitive data regime, integrity provisions and specific restrictions. Any such reform would require great care and should not detract from the need for much more effective practical enforcement.

Recommended.

Tippett, Alexander, and Branting on Predicting Judicial Decisions from Legal Briefs

Elizabeth Chika Tippett (University of Oregon School of Law), Charlotte Alexander (Georgia State University – Institute for Insight; Georgia State University College of Law), and L. Karl Branting (University of Wyoming) have posted “Does Lawyering Matter? Predicting Judicial Decisions from Legal Briefs, and What That Means for Access to Justice” (Texas Law Review, Forthcoming) on SSRN. Here is the abstract:

This study uses linguistic analysis and machine learning techniques to predict summary judgment outcomes from the text of the parties’ briefs. We test the predictive power of textual characteristics, stylistic features, and citation usage, and find that citations to precedent – their frequency, their patterns, and their popularity in other briefs – are the most predictive of a summary judgment win. This suggests that good lawyering may boil down to good legal research. However, good legal research is expensive, and the primacy of citations in our models raises concerns about access to justice. Here, our citation-based models also suggest promising solutions. We propose a freely available, computationally enabled citation identification and brief bank tool, which would extend to all litigants the benefits of good lawyering and open up access to justice.
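To make the citation-based approach concrete, a minimal Python sketch of the feature-extraction step is below. This is an illustrative simplification, not the authors’ actual pipeline: the reporter-citation regex and the `citation_features` function are hypothetical, and a real study would use far richer linguistic and stylistic features feeding a trained classifier.

```python
import re

# Hypothetical pattern for a few common U.S. reporter citation formats,
# e.g. "477 U.S. 317" or "91 F.3d 337" (a real system would cover many more).
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d+d|S\. Ct\.)\s+\d+")

def citation_features(brief_text: str) -> dict:
    """Extract simple citation-frequency features from the text of a brief."""
    citations = CITATION_RE.findall(brief_text)
    words = brief_text.split()
    return {
        "citation_count": len(citations),
        "unique_citations": len(set(citations)),
        "citations_per_100_words": 100 * len(citations) / max(len(words), 1),
    }

# Toy example brief excerpt citing two Supreme Court cases and one circuit case.
brief = (
    "Plaintiff relies on Celotex Corp. v. Catrett, 477 U.S. 317, and "
    "Anderson v. Liberty Lobby, 477 U.S. 242, as well as 91 F.3d 337."
)
feats = citation_features(brief)
```

Features like these could then be fed into any standard classifier (e.g. logistic regression) trained on labeled summary judgment outcomes.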

Recommended.