Lahmann et al. on The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence

Henning Lahmann (Leiden University, Centre for Law and Digital Technologies) et al. have posted “The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence” (final version accepted and forthcoming in Ethics & Information Technology) on SSRN. Here is the abstract:

The article analyses proposed AI-supported systems to detect, monitor, and counter ‘cognitive warfare’ and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of ‘cognitive warfare’ as used in contemporary public security discourse, it describes the emergence of AI as a novel tool expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilise AI to devise countermeasures, ranging from AI-based early warning systems to state-run, internet-wide content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, freedom of expression, freedom of information, and self-determination. The proposed AI systems insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causal links between ‘cognitive warfare’ campaigns and undesired outcomes. As a result, using AI to counter ‘cognitive warfare’ risks harming the very rights and values such measures purportedly seek to protect. Policymakers should focus less on seemingly quick technological fixes. Instead, they should invest in long-term strategies against information disorder in digital communication ecosystems that are solidly grounded in the preservation of fundamental rights.