Charlotte Tschider (Loyola University Chicago School of Law) has posted “Beyond the Black Box” (98 Denv. L. Rev. 683 (2021)). Here is the abstract:
As algorithms have become more complex, privacy and ethics scholars have urged artificial intelligence (AI) transparency for purposes of ensuring safety and preventing discrimination. International statutes are increasingly mandating that algorithmic decision-making be explained to affected individuals when such decisions impact an individual’s legal rights, and U.S. scholars continue to call for transparency in automated decision-making.
Unfortunately, modern AI technology does not function like traditional, human-designed algorithms. Because alternative intellectual property (IP) protections are unavailable and these algorithms are often dynamically inscrutable, algorithms created by AI are frequently protected as trade secrets, a status that prohibits sharing their details lest the secret be destroyed. Furthermore, dynamic inscrutability, the true “black box,” makes these algorithms secret by definition: even their creators cannot easily explain how they work. When mandated by statute, requiring organizations to explain their AI algorithms may be tremendously difficult, expensive, and undesirable from an IP perspective. Despite this challenge, it may still be possible to satisfy safety and fairness goals by instead focusing on AI system and process disclosure.
This Article first explains how AI differs from historically defined software and computer code. It then explores the dominant scholarship calling for opening the black box and the reciprocal pushback from organizations likely to rely on trade secret protection—a natural fit for AI’s dynamically inscrutable algorithms. Finally, using a simplified information fiduciary framework, I propose an alternative that promotes disclosure while balancing organizational interests through public AI system disclosure and black-box testing.