Tejas N. Narechania (University of California, Berkeley, School of Law) has posted “Machine Learning as Natural Monopoly” (Iowa Law Review, Forthcoming) on SSRN. Here is the abstract:
Machine learning is transforming the economy, reshaping operations in communications, law enforcement, and medicine, among other sectors. But all is not well: It is now well-established that many machine-learning-based applications harvest vast amounts of personal information and yield results that are systematically biased. In response, policymakers have begun to offer a range of inchoate and often insufficient solutions, overlooking the possibility—suggested intuitively by scholars across disciplines—that these systems are natural monopolies, and thus neglecting the long legal tradition of natural monopoly regulation.
Drawing on the computer science, economics, and legal literatures, I find that machine-learning-based applications can be natural monopolies. Several features of machine learning suggest that this is so, including the fixed costs of developing these applications and the computational methods of optimizing these systems. This conclusion yields concrete policy implications: Where natural monopolies exist, public oversight and regulation are typically superior to market discipline through competition. Hence, where machine-learning-based applications are natural monopolies, this regulatory tradition offers one framework for confronting a range of issues—from privacy to accuracy and bias—that attend such systems. Just as prior natural monopolies—the railways, electric grids, and telephone networks—faced rate and service regulation to protect against extractive, anticompetitive, and undemocratic behaviors, so too might machine-learning-based applications face similar public regulation to limit intrusive data collection and protect against algorithmic redlining, among other harms.