Stefan Heiss (University of Bremen; University of Graz) has posted “Artificial Intelligence Meets European Union Law: The EU Proposals of April 2021 and October 2020” on SSRN. Here is the abstract:
In April 2021 and October 2020, the European Commission and the European Parliament unveiled groundbreaking draft legislation toward a framework for trustworthy Artificial Intelligence (AI). First, the proposed AI Act requires providers of risky AI systems to comply with several standards before they can place their systems on the market. Second, it has been acknowledged that autonomous systems pose a challenge to conventional liability rules; consequently, a new draft regulation on liability rules was initiated at the European level. Both proposals follow a first-of-its-kind policy that outlines how companies may use AI and what consequences should follow if an AI system causes harm to third parties. The broad scope of the initiatives will affect numerous companies, even beyond the borders of the EU. However, the approaches directly challenge the view that law should leave emerging technology alone. It is important to recognize that regulating AI involves an inherent trade-off between slowing the technology's development and establishing desirable quality parameters. This contribution concludes that the crucial issue will be tying the two proposals together. A joint use of liability and regulation strategies can greatly enhance the overall efficiency of AI use.
Matija Damjan (University of Ljubljana Law) has posted “Algorithms and Fundamental Rights: The Case of Automated Online Filters” (Journal of Liberty and International Affairs 2021) on SSRN. Here is the abstract:
The information that we see on the internet is increasingly tailored by automated ranking and filtering algorithms used by online platforms, which significantly interfere with the exercise of fundamental rights online, particularly the freedom of expression and information. The EU’s regulation of the internet prohibits general monitoring obligations. The paper first analyses the CJEU’s case law, which has long resisted attempts to require internet intermediaries to use automated software filters to remove infringing user uploads. This is followed by an analysis of Article 17 of the Directive on Copyright in the Digital Single Market, which effectively requires online platforms to use automated filtering to ensure the unavailability of unauthorized copyrighted content. The Commission’s guidance and the Advocate General’s opinion in the annulment action are also discussed. The paper concludes that regulation of the filtering algorithms themselves will be necessary to prevent private censorship and protect fundamental rights online.
Eldar Haber (University of Haifa Law) has posted “Algorithmic Inclusion” (72 Fla. L. Rev. F. 94 (2021)) on SSRN. Here is the abstract:
Artificial Intelligence (AI) is expected to dramatically change humanity. From the automation of daily tasks and labor, to curing diseases and handling disasters, many forecast that human beings will soon enjoy the benefits of AI technology in many aspects of their lives. While it is currently difficult to evaluate when and to what extent AI will live up to its promise, it is uncertain whether the continued development of AI technology will widen the already existing digital divide between those with access to technology and those without.
The concern that a new digital divide could stem from AI technology has been articulated by Professor Peter K. Yu as the algorithmic divide. In his Article, Professor Yu describes the potential inequalities that these technological developments will likely create and intensify. Much like the digital divide, Professor Yu argues, there will be a “new inequitable gap” between those with access to new technologies and those without, while the latter will miss out “on the many political, social, economic, cultural, educational, and career opportunities provided by machine learning and artificial intelligence.” This Response adds to the discussion of the anticipated algorithmic divide by further analyzing key issues that arise in pursuing the goal of inclusion. The first Part briefly summarizes the algorithmic divide as projected by Professor Yu, along with his suggestions for reducing the risks and fears that stem from it. The second Part then raises further caveats and key issues that must be taken into account when discussing how to bridge the algorithmic divide.