Emma Li

Improved Interpretability Methods Towards More Responsible AI

Project Summary

Our growing reliance on machine learning across ever more areas of life (e.g. healthcare, self-driving cars, financial decisions, and criminal justice) means that we place great trust in these systems; our belief that they are correct and fair assigns them enormous responsibility. To avoid replicating the failings of our current society, and to prevent potentially drastic errors, it is crucial that these algorithms are trustworthy, that their results are verifiable and understandable, and that we reduce the uncertainty surrounding their decisions. Because innumerable choices go into creating a deep learning model, its construction and results can be impossible to replicate, and this lack of reliability is especially concerning given the damaging consequences it could lead to.

Interpretability methods are still in their early stages, leaving ample scope to explore ideas for new, robust approaches. We need to be able to explain the actions of any system in a way that is understandable both to those employing it and to those affected by its decisions. I aim to build improved interpretability methods for deep learning, using machine learning and statistical techniques to create more trustworthy and ethical AI. Although the nature of algorithms raises different ethical considerations from how humans endeavour to live an ethical life, I will explore a range of ethical and philosophical ideas in order to approach the construction of ethical AI for human welfare. The hope is that this work will lead to improved algorithms and to increased trust in and adoption of AI, helping it reach its fullest potential to benefit society by complementing and enhancing human capabilities.


Research Interests

AI and Explainability, Ethics and AI, Human-Centred AI, Deep Learning – particularly Uncertainty and Fairness. I am interested in developing new methods that complement and augment human abilities, leading to more equitable and just technology and solutions.


MMath in Mathematics and Computer Science (with a year in industry), University of York

Three years of industry experience in AI research


Dr Xi Chen