Joe Goodier

Interpretable Diagnostic Imaging Using Generative Priors

Project Summary

Deep neural networks are increasingly used to classify medical data and to aid decision making, but most of them are black boxes: they are understood only in terms of inputs, self-optimising parameters, fixed hyper-parameters, and outputs, with little insight into which features drive a given prediction. What is needed are explainable neural networks, where the causal links behind a given output can be interpreted. Few areas carry higher consequences than medicine.

Identifying disease from medical images is currently the remit of radiologists, and there have been concerns that this technology could replace the profession; yet given the backlog facing healthcare systems, any technology that can shorten diagnosis times is urgently needed. With the inevitable rush of new technology into this space, however, there is a risk of deployment happening too quickly. It is far easier and faster to build a model with high performance than to build a system that is both high-performing and verifiably explainable and robust. In the rush to deploy AI to meet demand, a model may have learned incorrect features, leading to unforeseen and potentially dangerous consequences.

I plan to investigate using a generative adversarial network (GAN) as a Bayesian prior that learns the distribution of healthy medical images. The distance of a sample from this healthy prior can then be used to assess whether it belongs to the healthy distribution. I will research how this approach to assessing diagnostic images can identify which features within an image are responsible for a sample not being healthy, and assess the realism of the synthetic images as a means of probing what the model understands to be healthy.
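To make the idea concrete, the sketch below shows one standard way a healthy generative prior can be used for anomaly scoring, in the spirit of AnoGAN (Schlegl et al., 2017). It is a minimal illustration under stated assumptions, not the project's actual design: the toy Generator architecture, latent_dim, and optimisation settings are all placeholders. A generator trained only on healthy images is searched for the latent code whose output best reconstructs a test image; the residual both scores the anomaly and localises the responsible pixels.

import torch
import torch.nn as nn

latent_dim = 64  # illustrative choice

class Generator(nn.Module):
    """Stand-in generator. In practice this would be a GAN trained only on
    healthy scans, so that G(z) can only produce healthy-looking images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

def anomaly_score(G, x, steps=500, lr=1e-2):
    """Search the healthy prior for the synthetic image closest to x.
    A large residual means x is unlikely under the healthy distribution;
    the pixel-wise residual map localises the features responsible."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(z) - x) ** 2)  # reconstruction error
        loss.backward()
        opt.step()
    with torch.no_grad():
        residual_map = (G(z) - x).abs()  # per-pixel "what is not healthy"
    return residual_map.sum().item(), residual_map

G = Generator()                        # pretend this was trained on healthy images
x = torch.rand(1, 1, 64, 64) * 2 - 1   # a test image scaled to [-1, 1]
score, saliency = anomaly_score(G, x)
print(f"anomaly score: {score:.1f}")

In practice the latent search can be amortised with an encoder and the score combined with a discriminator feature loss, but the principle is the same: the further a scan sits from the healthy prior, the more anomalous it is, and the residual map indicates why.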


Research Interests

Interpretable Deep Learning.

Computer Vision.

Generative Models.

Computer-Aided Detection and Diagnostics.

Background

My academic background is originally in Astrophysics: I undertook an integrated master's in Physics and Astronomy at Durham, during which I gained experience in both the observational and computational sides of the field. My master's project was carried out at the Institute for Computational Cosmology (ICC) at Durham. I also undertook a six-month internship in the AI department of the tech company Innovative Physics, where I worked with their lung cancer detection model and developed a lung nodule segmentation model for use in the drug trial industry.

Supervisors

Prof Neill Campbell

Dr Charles Larkin