Scott Wellington

Decoding imagined, auditory and vocalised speech from invasive and non-invasive brain signals

Project Summary

Brain-computer interfaces: state-of-the-art machine learning models for decoding imagined speech (and auditory speech) have shown impressive results. Much ink has been spilled over ‘thought-reading’ technologies among AI forecasters and ethicists; however, for individuals with neurodegenerative conditions or speech deficits, the practical real-world utility of assistive technologies is already clear. I propose a project to ‘close the gap’ between two domains: covert speech decoding from surgically invasive electrocorticography (ECoG) data currently outperforms models using data from surface electroencephalography (EEG). I aim to apply novel approaches inspired by cutting-edge neural processing to inform the future(s) of these speech neuroprostheses.


Winner of two Doctoral Recognition Awards (2022)

Research Interests

Brain-computer interfaces.

Signal decoding and machine learning, including methods of communicating results for public understanding and to inform decision-making.

AI solutions for medical, assistive and human augmentative technologies.

Biosignal data, processing, and practical ethics.


MSc Speech and Language Processing, University of Edinburgh

One year with the company SpeakUnique, which builds personalised synthetic voices for people with impaired speech resulting from neurodegenerative conditions; initially with the Centre for Clinical Brain Sciences, investigating machine learning techniques to decode speech from EEG and ECoG signal data.


Dr Ben Metcalfe

Prof Damien Coyle

Dr Dingguo Zhang

Oliver Watts (SpeakUnique)