To interact effectively with humans, robots must be able to accurately predict the actions and activities humans are about to perform. Such predictions require sophisticated cognitive architectures that can process multiple forms of sensory input and generate accurate, safe responses to these stimuli. However, most current methods base their predictions solely on the current state of human activity, ignoring the actions a person has previously performed.
I aim to research and develop a cognitive architecture for human-robot interaction that uses a form of autobiographical memory to record the interactions that have historically taken place with users, and that combines this record with multiple forms of perception to predict the actions users will take.
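To make this concrete, the following is a minimal, hypothetical sketch in Python of how an autobiographical memory might pair stored interaction episodes with a simple first-order action model. The Episode and AutobiographicalMemory names are illustrative rather than part of any existing system, and a real architecture would use learned models over multimodal features rather than raw transition counts.

from collections import defaultdict, Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Episode:
    """One remembered interaction: who acted, what they did, what was perceived."""
    user: str
    action: str      # action the user performed
    percepts: dict   # multimodal features observed at the time (e.g. gaze, audio)

class AutobiographicalMemory:
    """Stores past interaction episodes per user and predicts the next action."""

    def __init__(self):
        self.episodes = defaultdict(list)        # user -> ordered episode history
        self.transitions = defaultdict(Counter)  # (user, action) -> next-action counts

    def record(self, episode: Episode):
        history = self.episodes[episode.user]
        if history:  # update first-order action transitions from the previous episode
            self.transitions[(episode.user, history[-1].action)][episode.action] += 1
        history.append(episode)

    def predict_next(self, user: str, current_action: str) -> Optional[str]:
        """Return the most frequent follow-up action seen for this user, if any."""
        counts = self.transitions.get((user, current_action))
        return counts.most_common(1)[0][0] if counts else None

memory = AutobiographicalMemory()
memory.record(Episode("alice", "pick_up_cup", {"gaze": "cup"}))
memory.record(Episode("alice", "drink", {"gaze": "cup"}))
memory.record(Episode("alice", "pick_up_cup", {"gaze": "cup"}))
print(memory.predict_next("alice", "pick_up_cup"))  # -> "drink"

The point of the sketch is only that prediction conditions on a user's recorded history, not just their current state; the perceptual features stored in each episode would, in a full architecture, also feed the predictor.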
Human-Robot Interaction
Multimodal Perception
Machine Learning
Anthropomorphism of AI
AI Policy
AI Alignment
MComp Computer Science at the University of Bath
Two years as an application support engineer at a London-based trading firm, supporting manual and algorithmic trading.
Research Assistant at the Hertie School, Berlin, studying anthropomorphism in human-robot interaction.
Dr Uriel Martinez Hernandez
Dr Wenbin Li