Kazim et al. on the UK’s National AI Strategy

Emre Kazim (University College London) et al. have posted “Innovation and Opportunity: Review of the UK’s National AI Strategy” on SSRN. Here is the abstract:

The publication of the UK’s National Artificial Intelligence (AI) Strategy represents a step-change in the national industrial, policy, regulatory, and geo-strategic agenda. Although there is a multiplicity of threads to explore, in terms of actionable steps, this text can be read primarily as a ‘signalling’ document. Indeed, we read the National AI Strategy as a vision for innovation (research, SMEs) and opportunity (industry, economy), underpinned by a trust framework that has innovation and opportunity at the forefront of any standard and regulatory framework. In this white paper, we provide an overview of the structure of the document and offer an emphasised commentary on various standouts. Following this, we offer our initial thoughts and feedback on strategic points of contention in the strategy. Our main takeaways are:

Innovation First: a clear signal is that innovation is at the forefront of the UK’s data priorities.

Alternative Ecosystem of Trust: the opportunity is for the UK’s regulatory-market norms to become a preferred ecosystem for innovation and trust, but this depends heavily on the regulatory system and delivery frameworks required.

Defence, Security and Risk: security and risk are discussed in terms of the utilisation of AI (capabilities, and the modernisation and operations of the MoD) and of governance (understanding of long-term risk and defence against the malign use of AI).

Revision of Data Protection: the signal is that the UK is indeed seeking to position itself as less stringent on data protection and on the documentation of processes and accountabilities to individual citizens.

EU Disalignment – Atlanticism?: innovation and economic advancement are continually touted, raising questions about a step back in terms of data protection rights.

We conclude with further notes on data flow continuity, the feasibility of a sector approach to regulation, legal liability, and the lack of a method of engagement for stakeholders. Whilst the strategy sends important signals for fostering and growing innovation, achieving ethical innovation is a harder challenge and will require a carefully evolved framework built with appropriate expertise.

Werner on Algorithmic and Human Collusion

Tobias Werner (Heinrich Heine University Düsseldorf) has posted “Algorithmic and Human Collusion” on SSRN. Here is the abstract:

As self-learning pricing algorithms become popular, there are growing concerns among academics and regulators that algorithms could learn to collude tacitly on non-competitive prices and thereby harm competition. I study popular reinforcement learning algorithms and show that they develop collusive behavior in a simulated market environment. To derive a counterfactual that resembles traditional tacit collusion, I conduct market experiments with human participants in the same environment. Across different treatments, I vary the market size and the number of firms that use a self-learned pricing algorithm. I provide evidence that oligopoly markets can become more collusive if algorithms make pricing decisions instead of humans. In two-firm markets, market prices are weakly increasing in the number of algorithms in the market. In three-firm markets, algorithms weaken competition if most firms use an algorithm and human sellers are inexperienced.
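The setup the abstract describes, self-learning pricing agents repeatedly setting prices in a small oligopoly, can be sketched in a few lines of code. The sketch below is a minimal illustration, not the paper’s actual design: the use of tabular Q-learning, the discrete price grid, the lowest-price-wins demand rule, and all learning parameters are assumptions, chosen only to show how such agents condition on a rival’s last price and update toward higher-profit strategies.

```python
import random

PRICES = [1, 2, 3, 4, 5]            # illustrative discrete price grid
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def profit(p_own, p_rival):
    """Stylised demand: the lowest price takes the market; ties split it."""
    if p_own < p_rival:
        return float(p_own)
    if p_own == p_rival:
        return p_own * 0.5
    return 0.0

class QAgent:
    """Tabular Q-learner whose state is the rival's last observed price."""
    def __init__(self):
        self.q = {s: {a: 0.0 for a in PRICES} for s in PRICES}

    def act(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick the
        # price with the highest learned value in this state.
        if random.random() < EPS:
            return random.choice(PRICES)
        return max(self.q[state], key=self.q[state].get)

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[next_state].values())
        self.q[state][action] += ALPHA * (
            reward + GAMMA * best_next - self.q[state][action]
        )

def simulate(periods=50_000, seed=0):
    """Run a two-firm repeated pricing game and return the final prices."""
    random.seed(seed)
    a, b = QAgent(), QAgent()
    pa, pb = random.choice(PRICES), random.choice(PRICES)
    for _ in range(periods):
        na, nb = a.act(pb), b.act(pa)
        a.learn(pb, na, profit(na, nb), nb)
        b.learn(pa, nb, profit(nb, na), na)
        pa, pb = na, nb
    return pa, pb
```

In this toy environment the static Nash outcome is the lowest price on the grid, so any tendency of trained agents to settle on higher prices is the kind of supra-competitive behaviour the paper investigates; Werner’s actual experiments use a richer market environment and add human participants as the counterfactual.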