Marco Almada (Université du Luxembourg, Law) has posted “Technical AI Transparency: A Legal View of the Black Box” on SSRN. Here is the abstract:
AI systems are often described as “black boxes” that must be made scrutable for oversight. This view tends to accompany a technical framing of the problem, under which some present or future approach can make AI systems transparent. In this paper, I argue that what the law expects from “transparency” is not something that can be provided by purely technical means. After proposing a taxonomy of technical approaches to transparency, I highlight how these approaches fall short of legal requirements. Many of these shortcomings are inherent to the approaches in question, but they can be mitigated by combining approaches and by using other legal provisions to address manipulation risks. Legal transparency of AI is not a solely technical problem, but adequate design requirements can provide a valuable regulatory contribution.
