This invitation-only workshop, co-hosted by the Office for AI and the University of Bath’s Institute for Policy Research, is one of several events taking place as part of the consultation on the UK’s AI regulation white paper. The white paper was published in March 2023 and proposes an “innovative and iterative” approach to the regulation of AI.
Existing regulators will be expected to implement the framework, which is underpinned by five cross-sectoral principles:
1. safety, security, and robustness
2. appropriate transparency and explainability
3. fairness
4. accountability and governance
5. contestability and redress
These principles are not currently underpinned by any new legal powers or duties, and regulators will be expected to implement them within their existing remits. This approach can be distinguished from other proposed regulatory frameworks, including the EU's AI Act, which sets out harmonised rules for the development, placing on the market, and use of AI in the European Union (EU).
The UK white paper proposes that legal responsibility for compliance should be allocated to the actors in the AI life cycle best able to identify, assess, and mitigate risks effectively. It warns that incoherent or misplaced allocation of responsibility could hinder innovation.
Written responses to the white paper can be submitted until 21 June 2023.
During the workshop, we will address the following main questions:
- Which actors in the AI life cycle are best placed to mitigate risks?
- To what extent does the current system allocate accountability to the actors best placed to mitigate risks?
- Should ‘upstream’ market actors (e.g., developers of foundation models, providers of cloud services, data brokers) bear more liability than they do under current regulatory arrangements?
- If so, which actors and under what circumstances?
- Which changes to existing UK law and policy would best improve the allocation of accountability throughout the AI life cycle?
- How can tools for trustworthy AI (such as assurance techniques and technical standards) help address accountability gaps across the AI life cycle? What evidence is there to support their efficacy?