AI Explainability with Tom Begley

‘AI Explainability’ with Tom Begley, R&D Lead at Faculty AI.

ART-AI Seminar

We are very pleased to have Tom Begley, R&D Lead at Faculty AI, join us for this ART-AI Seminar entitled ‘AI Explainability’ on 26th November. Abstract and Bio below:

Abstract

Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model’s predictions to its input features in a mathematically principled and model-agnostic way, making it an appealing candidate for explaining black-box models. However, general implementations of Shapley explainability make an untenable assumption: that the model’s features are uncorrelated. Furthermore, they provide no means by which causal knowledge can be reflected in the explanations.
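For readers new to the framework, the standard definitions below (added for context; not part of the original abstract) make that independence assumption explicit. The Shapley value attributes to feature i a weighted average of its marginal contributions across feature subsets S of the full feature set N:

```latex
% Shapley attribution for feature i over the feature set N:
\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
  \Bigl[ v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr]

% The value function is the expectation of the model f conditioned on
% the observed values x_S of the features in S:
v(S) \;=\; \mathbb{E}\bigl[ f(X) \mid X_S = x_S \bigr]

% General implementations approximate this with the marginal expectation,
% sampling the remaining features X_{\bar{S}} independently of x_S:
v(S) \;\approx\; \mathbb{E}_{X_{\bar{S}}}\bigl[ f(x_S, X_{\bar{S}}) \bigr]
% The two forms coincide only when the features are independent.
```

The gap between the conditional and marginal forms of the value function is exactly where correlated features cause trouble.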

In this talk we will review the Shapley framework for explainability, examine some limitations of the original formulation, and present Faculty’s recent work on addressing them: Shapley values that respect correlations in the data, and Shapley values that incorporate a user’s causal knowledge.
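As a concrete illustration of the baseline being improved upon (a minimal sketch, not Faculty’s implementation: the toy data, model, and use of the open-source `shap` package are added here for context), the snippet below computes standard Shapley attributions with `shap.KernelExplainer`, which fills in “absent” features by sampling them independently from a background dataset:

```python
# Minimal sketch: standard, independence-assuming Shapley attributions
# computed with the open-source `shap` package. Illustrative only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Two strongly correlated features plus one independent feature.
base = rng.normal(size=(500, 1))
X = np.hstack([
    base + 0.1 * rng.normal(size=(500, 1)),
    base + 0.1 * rng.normal(size=(500, 1)),
    rng.normal(size=(500, 1)),
])
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer samples each "absent" feature from the background data
# independently of the observed features, so its coalitions can combine
# incompatible values of the two correlated features (off-manifold inputs).
background = shap.sample(X, 100)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```

Replacing this independent sampling with draws that respect the correlations in the data, or with a user-supplied causal structure, is broadly the direction of the work the abstract describes.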

Bio

Tom Begley is R&D Lead at Faculty, where he helps run its AI Safety research programme. He previously worked as a Data Scientist at Faculty, building and deploying machine learning models for a number of clients in government and industry. He holds a PhD in pure mathematics from the University of Cambridge, where he studied problems in geometric analysis.


Event Info

Date: 26.11.2020
Start Time: 1:15pm
End Time: 2:15pm
