Melissa Torgbi

Explainable Natural Language Processing for the Analysis of Online Discourse

Project Summary

Researching the challenges and risks of using large language models in Natural Language Processing (NLP), with a focus on addressing these issues through explainable NLP.

There has been significant progress in NLP in recent years, particularly with the development of pre-trained and large language models. These models are based on neural networks and are trained on massive amounts of text data, allowing them to learn the patterns and structure of language in a way that mimics human understanding. However, the growing use of such models has highlighted challenges that NLP has yet to address: issues such as bias and a lack of interpretability remain important considerations in the development and deployment of NLP models.

Research Interests

Natural Language Processing, Explainable AI, Interpretability, Reasoning, Bias, Hallucinations

Background

BEng Electronic Engineering, University of Southampton

MSc Digital and Technology Solutions Specialist, Aston University

Level 7 Apprenticeship Digital and Technology Solutions Specialist, Aston University

Four years working in industry as a Data Scientist

Supervisors

Dr Harish Tayyar Madabushi

Dr Iulia Cioroianu

Dr Claire Bonial (Georgetown University and US Army DEVCOM)