Polle et al. on AI Standards: Thought-Leadership in AI Legal, Ethical and Safety Specifications Through Experimentation

Roseline Polle (University College London) and others have posted “Towards AI Standards: Thought-Leadership in AI Legal, Ethical and Safety Specifications Through Experimentation” on SSRN. Here is the abstract:

With the rapid adoption of algorithms in business and society, there is growing concern to safeguard the public interest. Researchers, policy-makers and industry representatives sharing this view convened to collectively identify future areas of focus to advance AI standards – in particular, the acute need to ensure proposed standards are practical and empirically informed. This discussion occurred in the context of the creation of a lab at UCL with these concerns in mind (currently dubbed the UCL Algorithms Standards and Technology Lab). Through a series of panels with the main stakeholders, three themes emerged: (i) building public trust, (ii) accountability and operationalisation, and (iii) experimentation. To advance these themes, the Lab’s activities will fall under three streams – experimentation, community building and communication. The Lab’s mission is to provide thought-leadership in AI standards through experimentation.