Khalsa on Freedom of Expression and Human Dignity in the Age of Artificial Intelligence 

Jasbir Khalsa (Microsoft Corporation) has posted “Freedom of Expression and Human Dignity in the Age of Artificial Intelligence” on SSRN. Here is the abstract:

Cambridge Analytica exposed possible gaps in legal protection for certain human rights arising from the use of personal data to offer ‘free’ technology. This article discusses Freedom of Expression and Human Dignity under the Charter of Fundamental Rights of the European Union, and explores how the Charter can be applied to technology and to private parties like Facebook or Cambridge Analytica, holding such parties accountable for violations of human rights.

Chiu & Lim on Managing Corporations’ Risk in Adopting Artificial Intelligence

Iris H-Y Chiu (University College London – Faculty of Laws, ECGI) and Ernest Lim (National University of Singapore (NUS) – Faculty of Law) have posted “Managing Corporations’ Risk in Adopting Artificial Intelligence: A Corporate Responsibility Paradigm” (Washington University Global Studies Law Review (forthcoming)) on SSRN. Here is the abstract:

Machine learning (ML) raises issues of risk for corporate and commercial use that are distinct from the legal risks involved in deploying robots, which may be more deterministic in nature. Such issues of risk relate to what data is fed into ML learning processes, including the risks of bias and of hidden, sub-optimal assumptions; how such data is processed by ML to reach its ‘outcome,’ sometimes leading to perverse results such as unexpected errors, harm, difficult choices, and even sub-optimal behavioural phenomena; and who should be accountable for such risks. While the extant literature provides rich discussion of these issues, there are only emerging regulatory frameworks and soft law in the form of ethical principles to guide corporations navigating this area of innovation.

This article focuses on corporations that deploy ML, rather than on producers of ML innovations, in order to chart a framework for guiding strategic corporate decisions in adopting ML. We argue that such a framework necessarily integrates corporations’ legal risks and their broader accountability to society. The navigation of ML innovations is not carried out within a ‘compliance landscape’ for corporations, given that the laws and regulations governing corporations’ use of ML are still emerging. Corporations’ deployment of ML is being scrutinised by the industry, stakeholders, and broader society as governance initiatives are developed in a number of bottom-up quarters. We argue that corporations should frame their strategic deployment of ML innovations within a ‘thick and broad’ paradigm of corporate responsibility that is inextricably connected to business-society relations.

Levesque on Applying the UN Guiding Principles on Business and Human Rights to Online Content Moderation

Maroussia Levesque (Harvard Law School, Berkman Klein Center for Internet & Society) has posted “Applying the UN Guiding Principles on Business and Human Rights to Online Content Moderation” on SSRN. Here is the abstract:

What do Rembrandt and social media platforms have in common? Light. Both judiciously use it to emphasize certain aspects and relegate others to obscurity, leveraging darkness to highlight flattering features.

This article assesses the accountability of social media platforms with regard to content moderation. It probes voluntary measures like the Facebook Oversight Board and transparency reports for similarities with the chiaroscuro painting technique. These two self-governance initiatives shed light on fairly uncontroversial aspects of content moderation, obscuring more problematic areas in the process. In that sense, chiaroscuro and self-governance actually travel in opposite directions: chiaroscuro uses darkness to create light, while self-governance uses light to create darkness.

The United Nations Guiding Principles on Business and Human Rights (UNGP) could fill self-governance gaps by activating a clearer link between companies and user well-being. The notions of access to remedy and due diligence support the case for harmonizing accountability measures across moderation practices. External oversight should cover a broader array of moderation decisions, beyond individual content takedowns. It should also approach harms holistically, integrating privacy, equality, and other human rights dimensions into the analysis. Transparency reports should provide more granular information about platforms’ informal collaboration with states.