Philipp Hacker (European New School of Digital Studies) has posted “Sustainable AI Regulation” on SSRN. Here is the abstract:
Current proposals for AI regulation, in the EU and beyond, aim to spur AI that is trustworthy and accountable. What is missing, however, is a robust regulatory discourse and roadmap to make AI, and technology more broadly, environmentally sustainable. This paper aims to take first steps to fill this gap.
In computer science, AI and technology more generally are increasingly recognized as important contributors to climate change. And with good reason: Current estimates show that information and communication technology (ICT) contributes up to 3.9% of global greenhouse gas (GHG) emissions, compared with roughly 2.5% for global air travel. The carbon footprint of machine learning, more specifically, has skyrocketed in recent years. Water consumption is another crucial factor. Regarding both energy and water, AI training is particularly resource-intensive, and even more so for large generative AI models, such as ChatGPT or GPT-4.
However, questions of climate change and sustainability remain a significant blind spot in AI regulation. This paper therefore explores two key dimensions: legal instruments to make AI greener, and methods to render AI regulation more sustainable. Concerning the former, transparency mechanisms, such as the disclosure of the GHG footprint under Article 11 EU AI Act, could be a first step. However, given the well-known limitations of disclosure, regulation needs to go beyond transparency. Hence, in this paper, I propose a mix of co-regulation strategies, sustainability by design, restrictions on training data, and consumption caps.
Within sustainability by design strategies, one important mechanism could be what I term “sustainability impact assessments”. Crucially, during the modeling phase, developers should compare different AI model types (e.g., linear regression versus neural networks) not only regarding their performance but also their estimated GHG footprint. Effective tools to measure the GHG impact of such models already exist. Simply put, if two model types exhibit similar performance, developers would be obliged, under such a provision, to choose the more sustainable model for further development and deployment. In this way, the current fixation on performance measures may be complemented by climate change mitigation strategies. Importantly, pre-trained models, such as large AI models, may in the long run be more energy-efficient despite their high upfront training costs. Ironically, however, planned regulation might thwart these efforts: pre-trained models, such as ChatGPT, are significantly disincentivized by the EU AI Act and the EU AI liability directives. Hence, regulatory endeavors should urgently be updated to better reflect the sustainability challenges AI raises.
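To make the proposed assessment concrete, here is a minimal sketch of such a model-type comparison, assuming the open-source codecarbon package (one instance of the kind of measurement tool the abstract alludes to) and scikit-learn. The dataset, the two candidate models, and the one-percentage-point “similar performance” threshold are all illustrative assumptions, not requirements from the paper.

```python
# Sketch of a "sustainability impact assessment": train two candidate model
# types, track the estimated GHG footprint of each training run, and select
# the more sustainable model when performance is comparable.
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def assess(name, model):
    """Train one candidate model while tracking its emissions."""
    tracker = EmissionsTracker(project_name=name, log_level="error")
    tracker.start()
    model.fit(X_train, y_train)
    kg_co2eq = tracker.stop()  # estimated emissions in kg CO2-equivalent
    return model.score(X_test, y_test), kg_co2eq

candidates = {
    "linear": LogisticRegression(max_iter=1000),
    "neural_net": MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=200),
}
results = {name: assess(name, m) for name, m in candidates.items()}

# Hypothetical threshold: models within 1 percentage point of the best
# accuracy count as "similar performance"; among those, pick the lowest
# footprint, as the proposed provision would require.
TOLERANCE = 0.01
best_acc = max(acc for acc, _ in results.values())
eligible = {n: r for n, r in results.items() if best_acc - r[0] <= TOLERANCE}
chosen = min(eligible, key=lambda n: eligible[n][1])

for name, (acc, co2) in results.items():
    print(f"{name}: accuracy={acc:.3f}, emissions={co2:.6f} kg CO2eq")
print(f"selected under the sustainability provision: {chosen}")
```

On typical tabular data, the linear model would often win such a comparison outright, which is precisely the point: the assessment makes the performance-versus-footprint trade-off explicit instead of defaulting to the most complex model.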
This regulatory toolkit may then, in a second step, serve as a blueprint for other information technologies and infrastructures facing significant sustainability challenges due to their high GHG emissions, for example blockchain (e.g., bitcoin), Metaverse applications, and data centers. The toolkit described above, from transparency to sustainability assessments and hard consumption caps, can and must be flexibly adapted to these other areas of technology law.
The final dimension consists of efforts to render AI regulation, and by implication the law itself, more sustainable. Certain rights we have come to take for granted, such as the right to erasure (Article 17 GDPR), may have to be limited for sustainability reasons. Imagine that a large AI model was trained on supposedly anonymized medical data and is used for cancer detection. Given new re-identification techniques, a data subject exercises her right to erasure. Not only may her data point have to be deleted from the training data; the entire AI model may have to be re-trained, entailing significant GHG emissions. In my view, the subjective right to erasure must, in such situations, be balanced against the collective interest in mitigating climate change. Here, I draw on the growing literature on data externalities and the third-party effects of processing. The paper formulates guidelines to strike this balance equitably, discusses specific use cases, and identifies doctrinal legal methods for incorporating such a “sustainability limitation” into existing (e.g., Art. 17(3) GDPR) and future law (e.g., the AI Act). Ultimately, law, computer science, and sustainability studies need to team up to effectively address the dual large-scale transformations of digitization and sustainability.
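To see the scale of the retraining problem, here is a hedged back-of-envelope sketch using the common accounting approach in the ML-emissions literature (hardware energy, scaled by datacenter overhead and grid carbon intensity). Every number below is a hypothetical assumption chosen for illustration, not a figure from the paper.

```python
# Back-of-envelope estimate of the GHG cost of one full re-training run
# triggered by a single erasure request.
def retraining_emissions_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_per_kwh):
    """Estimated kg CO2eq: GPU energy, scaled by datacenter overhead (PUE)
    and the carbon intensity of the local electricity grid."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Hypothetical medium-sized medical model: 64 GPUs drawing 0.4 kW each for
# two weeks, PUE of 1.2, and a grid intensity of 0.25 kg CO2eq/kWh.
kg = retraining_emissions_kg(64, 0.4, 14 * 24, 1.2, 0.25)
print(f"{kg:,.0f} kg CO2eq per re-training run")  # -> roughly 2,580 kg
```

Under these assumptions, each erasure request that cannot be handled by cheaper machine-unlearning techniques costs on the order of tonnes of CO2eq, which is the collective interest the proposed balancing exercise would weigh against the individual right.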