Governing bodies have repeatedly stated the need for AI to be transparent, interpretable, and explainable. However, these goals have only been stated in high-level, abstract terms. This leaves eXplainable AI (XAI) researchers and developers without clear objectives or measures of success. Existing XAI metrics tend to be highly technical, focusing on aspects such as feature relevance and the extraction of model mechanisms. Such metrics, however, serve only the needs of model developers and do little for other groups, such as end-users or regulators.
Instead, we need Human-Centric eXplainable AI (HCXAI). HCXAI is designed so that the explanation meets the needs of the individual stakeholder, rather than requiring the stakeholder to adjust to the XAI. This entails metrics to validate whether those individual needs are being met, with the relevant metrics differing by stakeholder.
Our aim is to understand the different norms and values required by different stakeholders in deploying HCXAI. These will be based on the goals stakeholders seek to achieve through XAI, such as demonstrating the privacy or fairness of a system. We will then research how these norms and values can be measured in a verifiable way and develop metrics to do so. Through this we aim to promote these values among XAI researchers and developers by giving them a way to demonstrate the suitability of their methods for achieving them. These metrics also give other stakeholders the means to test whether an XAI system delivers the values they desire, enabling robust AI deployment.
AI Safety & Alignment
BSc in Applied Mathematics from Cardiff University.
MSc in Applied Mathematics from University of Bath.
Four years working in actuarial science between bachelor's and master's degrees.
Eight months working as an AI researcher after master’s, focusing on Bayesian approaches for playing games and different methods for training reinforcement learning agents.
Dr Marina De Vos
Dr Janina Hoffman
Dr Andreas Theodorou