Weissinger on AI, Complexity, and Regulation

Laurin Weissinger (Tufts University – The Fletcher School of Law and Diplomacy) has posted “AI, Complexity, and Regulation” (OUP Handbook on AI Governance, Forthcoming) on SSRN. Here is the abstract:

Regulating and governing AI will remain a challenge due to the inherent intricacy of how AI is deployed and used in practice. The effectiveness and efficiency of regulation are inversely proportional to system complexity and to the ambiguity of objectives: the more complicated an area is and the harder its objectives are to operationalize, the more difficult it is to regulate and govern. Safety regulations, while often concerned with complex systems like airplanes, benefit from measurable, clear objectives and uniform subsystems. AI has emergent properties and is not just “a technology” but is interwoven with organizations, people, and the wider social context. Furthermore, objectives like “fairness” are not only difficult to grasp and classify but also change meaning case by case.

The inherent complexity of AI systems will continue to complicate regulation and governance, but with appropriate investment, monetary and otherwise, that complexity can be tackled successfully. However, given the considerable power imbalance between those who deploy AI and those on whom AI systems are used, successful regulation may be difficult to create and enforce. As such, AI regulation is more a political and socio-economic problem than a technical one.