In the current paradigm, there is a problem with deep neural networks being used to classify medical images and to aid decision making: most are black boxes, viewed simply in terms of inputs, self-optimising parameters, fixed hyper-parameters, and outputs, with little understanding of which features drive a given prediction. What is needed are explainable neural networks, where the causal links behind a given output can be interpreted. However, it is not simply a matter of identifying the decision-relevant regions of a prediction. That information is useless unless it can be interpreted by the human in the loop, and unless the causal links can be understood in the context of use by a specialist who is not necessarily trained in machine learning. I would like to produce an interpretable computer vision model for diagnostic imaging, and to investigate the balance between interpretability and completeness of explanation within that context. I will investigate this by attempting to answer the following questions. What are the most complete ways to present the information learned by the model in a format that is usable by clinicians? How can that information be simplified into an interpretable format without introducing bias?
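To make "decision-relevant regions of a prediction" concrete, one simple attribution method is occlusion sensitivity: mask out each region of the input in turn and measure the drop in the model's score. The sketch below is illustrative only, not a proposed method; `toy_model` is a hypothetical stand-in for a trained classifier, with its "confidence" defined as the mean intensity of one image region.

```python
import numpy as np

def toy_model(image):
    # Hypothetical stand-in for a trained classifier: its "confidence"
    # is the mean intensity of the top-left 4x4 region.
    return image[:4, :4].mean()

def occlusion_saliency(image, model, patch=2):
    """Occlusion sensitivity: zero out each patch and record the drop
    in the model's score; larger drops mark decision-relevant pixels."""
    base = model(image)
    h, w = image.shape
    saliency = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            saliency[i:i + patch, j:j + patch] = base - model(occluded)
    return saliency

image = np.zeros((8, 8))
image[:4, :4] = 1.0  # a bright "lesion" in the top-left corner
sal = occlusion_saliency(image, toy_model)
# The saliency map is highest over the region the model actually uses.
```

Even this crude map raises the interpretability-versus-completeness question above: a heatmap is easy for a clinician to read, but it compresses away most of what the model has learned.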
Interpretable Deep Learning.
Computer-Aided Detection and Diagnostics.
My academic background is originally in Astrophysics: I undertook an integrated master's in Physics and Astronomy at Durham, during which I gained experience in both the observational and computational components of the field. My master's project was carried out at the Institute for Computational Cosmology (ICC) at Durham. I then undertook a six-month internship in the AI department of the tech company Innovative Physics, where I worked with their lung cancer detection model and developed a lung nodule segmentation model for use in the drug trial industry.
Dr Neill Campbell