As artificial intelligence (AI) diagnostic systems move toward deployment in healthcare, a critical gap persists between technical performance and patient acceptability. Existing AI explanation techniques are designed predominantly for experts, leaving laypeople without the conceptual tools to calibrate trust in AI-generated medical recommendations. This research has two objectives: i) to evaluate how explanations and communication cues help non-expert users calibrate trust, and ii) to explore immersive virtual reality (VR) as an interface serving as the first point of contact with healthcare for minor medical conditions, while maintaining trust between patients and medical professionals.
Virtual Reality
Human-AI Agent Interaction
Human Centred AI
Anthropomorphism
Trust in AI
BSc Psychology at University of Bath
Dr Karin Petrini
Dr Crescent Jicol
Prof Eamonn O’Neill
