Xuying Zhong

Rendering AI more transparent and explainable: which explanations do humans prefer?

Project summary

AI applications have become ubiquitous across many areas of daily life. People often expect transparent or explainable decision-making, so that they can trust the safety, fairness and performance of machine decisions. My research investigates the role of explanations in building trust and aims to identify human preferences over different types and formats of explanations. It will experimentally test the hypothesis that a closer match between AI explanations and humans' explanation preferences enhances transparency and allows humans to better calibrate their trust in AI decisions. The outcomes of the research will guide practitioners towards appropriate methods for communicating with the public, ultimately letting people benefit from a wider adoption of AI.

Research interests

Trust in AI, Explainable AI, Human-Centred AI

Background

BEng in e-Commerce Engineering with Law, Queen Mary University of London

MSc in Business Analytics, Imperial College London

Five years of work experience at an AI company, analysing human behaviour to optimise the chance of positive outcomes in call-centre interactions.

Supervisors

Prof Özgür Şimşek

Dr Janina Hoffmann

Xuying Zhong