Nathalie A. Smuha (KU Leuven Law) has posted “The Work of the High-Level Expert Group on AI as the Precursor of the AI Act” on SSRN. Here is the abstract:
On 25 April 2018, the Commission published a Communication titled ‘Artificial Intelligence for Europe’, setting out a comprehensive AI strategy for the EU. As part of that strategy, it appointed an independent High-Level Expert Group on AI (HLEG) tasked with drafting voluntary AI ethics guidelines, titled Ethics Guidelines for Trustworthy AI (HLEG Guidelines), and policy recommendations for EU institutions and Member States, titled Policy and Investment Recommendations for Trustworthy AI (Policy Recommendations). These documents, and in particular the HLEG Guidelines, on which the Policy Recommendations build, have proven to be foundational for the subsequent proposal and adoption of the AI Act. In this contribution, I will therefore retrace how EU institutions shifted their attention from soft law to hard law, and discuss the way in which the former influenced the latter.
To do so, I start by introducing the HLEG and its context, against a background of other (international) initiatives (section 2). Subsequently, I provide an overview of the HLEG Guidelines and discuss the three-pronged role they play in the AI Act: (1) they inspired the requirements imposed on high-risk AI systems as well as other provisions of the regulation, (2) they act as a basis for the voluntary codes of conduct that the AI Act seeks to foster, and (3) they serve as a more general normative framework underlying the AI Act’s rationale (section 3). I then move to the Policy Recommendations and explain to what extent the advice that the members of the HLEG formulated therein found its way into the AI Act (section 4), before concluding (section 5).
