Joseph Marvin Imperial

Towards Transparent and User-Centric Models for Readability Assessment

Research Summary

Readability assessment is the process of identifying the difficulty level of a given text, such as children's books, reference materials, or prescriptions. Large neural language models like BERT can be used as black-box, end-to-end solutions for assessing readability. Although these methods outperform conventional formula-based approaches and traditional machine learning, they critically lack interpretability: it is unclear how the linguistic knowledge encoded within the layers of the neural model acts as a predictor during inference. In addition, the common setup for developing these assessment models assumes a "one-size-fits-all" approach, where linguistic features of a chosen language are modelled without considering how different reading populations, such as native speakers and second-language learners, acquire the language. My research will address these challenges by (a) developing transparent and interpretable models in which experts can trace and understand the effects of the linguistic features used to train the models, and (b) customising assessment models with respect to the language background of the user, adjusting model configurations automatically.
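To illustrate the formula-based approaches mentioned above, the sketch below computes the classic Flesch Reading Ease score in Python. The syllable counter is a naive vowel-group heuristic chosen for self-containment (real systems use pronunciation dictionaries), so scores are approximate; this is an illustration of the conventional baseline, not of the proposed models.

```python
import re

def count_syllables(word):
    # Naive heuristic: count contiguous vowel groups; not dictionary-accurate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Flesch Reading Ease: higher scores indicate easier text
    # (scores above ~90 correspond to very easy, child-level text).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 2))
```

Such formulas use only surface features (sentence and word length), which is precisely the limitation that motivates richer linguistic features and, in turn, the interpretability concerns raised above.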

Research Interests

Natural language processing, controllable language generation, application of NLP to education, readability assessment, and social media.

Background

M.S. in Computer Science, De La Salle University – Philippines

Senior NLP Researcher, National University – Human Language Technology Lab (HLT)

Received the Google AI and TensorFlow Faculty Award in 2021 for spearheading machine learning initiatives at National University – Philippines

Associate Member of the DOST National Research Council of the Philippines (NRCP)

Supervisors

Dr Vinay Namboodiri

Dr Harish Tayyar Madabushi

Prof Gail Forey