Adrien Raizonville (Institut Polytechnique de Paris) and Xavier Lambin (ESSEC Business School) have posted “Algorithmic Explainability and Obfuscation under Regulatory Audits” on SSRN. Here is the abstract:
The best-performing and most popular algorithms are often the least explainable. In parallel, there is growing concern and evidence that sophisticated algorithms may engage, autonomously, in profit-maximizing but welfare-reducing strategies. Drawing on the literature on self-regulation, we model a regulator who seeks to encourage firms’ compliance with socially desirable strategies through the threat of (costly and imperfect) audits. Firms may invest in explainability to better understand their algorithms and reduce their cost of compliance. We find that, when audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator’s detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed.