Our Students

Meet the ART-AI students and read about their research interests and backgrounds.

ART-AI Cohort 1


2020 Cohort

Peter Bayliss

Decision Making at the Human/Machine Interface in Criminal Justice

Edward Clark

Artificial Intelligence Tactical Decision Aid for Management of Naval Sensors and Autonomous Vehicles

Thao Do

AI for Identification and Support for Victims of Sexual Exploitation in Southeast Asia

Tom Donnelly

Artificial Intelligence for the Control of Upper Limb Prosthetics

Andy Evans

AI and its Consequences for the Labour Market

Tory Frame

AI in Support of Sleep

Joe Goodier

Interpretable Deep Learning

Finn Hambly

Round-trip Policy Engineering

Matt Hewitt

Hierarchical Reinforcement Learning for Transparency in AI

Emma Li

Improved Interpretability Methods Towards More Responsible AI

Pablo Medina

Ecological Rationality of Choices Between Gambles

Deborah Morgan

The Application and Uses of AI Within Education, particularly Within the Teaching and Understanding of Literacy

Jessica Nicholson

Robotics Processing Architecture: A Skill Acquisition Approach to Artificial Cognitive Development

Alice Parfett

How AI Affects Privacy and Prejudice for Humanity

Brier Rigby Dames

Modelling Brain-Like Intelligence in an Evolutionary Context for AI Applications

Alex Taylor

Machine Learning in Safety-Critical Engineering

Scott Wellington

Decoding Imagined Speech from the Brain Signal

 

Peter Bayliss

Decision Making at the Human/Machine Interface in Criminal Justice

Project Summary

I intend to research the impact of algorithmic decision support systems on human decision making at the gateway to criminal justice, in order to inform and influence police policy and practice. Whilst human/AI interaction has been studied extensively in other contexts (flight crew, for example), I will be considering how the recommendations of such systems influence the operational decision making of individuals, such as custody officers, within the criminal justice system.

The lessons learnt from this project may well have considerably wider application, given the rapidly expanding capability of machine learning systems and the fact that police forces across the UK are expanding their use of AI-based decision support systems.

Supervisors

Dr Sarah Moore

Dr Emma Carmel

Research Interests

The impact of conscious and unconscious influences on human decision making.

The understanding of risk.

Background

I worked in the public sector for 30 years, initially as a police officer and then, after taking my first degree, as an inspector of taxes. After completing four years of taxes training, I investigated and prosecuted serious tax fraud. I then moved to anti-avoidance work, where I countered marketed avoidance schemes used by the wealthy, such as schemes that artificially created tens of millions of pounds in losses. My final role with HM Revenue & Customs was as a national technical specialist fighting the avoidance of Stamp Duty Land Tax on purchases of property where millions of pounds of tax was at stake.

MSc Financial Management (Heriot Watt University)

BSc Mathematical Sciences (Open University)

BA Economics (Birmingham City University)

Edward Clark

Artificial Intelligence Tactical Decision Aid for Management of Naval Sensors and Autonomous Vehicles

Project Summary

In naval operations, a finite number of sonar sensing vehicles (manned and autonomous) must be deployed and positioned optimally to maximise their combined detection coverage over a geographical area of interest. Performance prediction of the sonar systems is crucial, but it is currently complicated and user-intensive: it involves environmental data gathering, acoustic propagation modelling, sonar system modelling, and manual interpretation of the modelled outputs.

Environmental data can be highly variable in space and time and is obtained from different sources (direct in-situ measurements, numerical models, and historical databases). The underwater acoustic models predict how sound propagates between sources and receivers, which depends upon the environmental data as well as the source and receiver characteristics (e.g., frequency and depth). The sonar performance models predict detection performance from the acoustic characteristics output by the acoustic models and the sonar characteristics (e.g., aperture size and the types of signal processing applied to the acoustic signals). Users interpret the outputs of the performance models and use these to develop plans for the deployment of naval assets.

This PhD aims to incorporate AI into this chain to aid and enhance decision-making for optimal deployment. We envisage that, with exposure to a sufficiently comprehensive example dataset (i.e., training), optimal deployment could be achieved directly from data earlier in the chain, ultimately directly from key environmental features. Such features might include the spatial/temporal distribution of the oceanic mixed-layer depth or the seabed type in shallow water areas, both of which have a significant effect on acoustic propagation and therefore on sonar performance. Incorporating AI within a naval decision chain will require a high degree of transparency, which is crucial for developing trust with operators, for accountability and for responsible use.
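At its simplest, the sonar performance modelling described above can be illustrated by the classic passive sonar equation, which combines source, environment and receiver terms into a single detection criterion. The sketch below is a generic textbook illustration with hypothetical values, not a model or parameters from this project.

```python
# Illustrative sketch: the classic passive sonar equation links the kinds of
# quantities described above. All values are in decibels (dB) and are
# hypothetical examples, not project parameters.

def signal_excess(source_level, transmission_loss, noise_level,
                  directivity_index, detection_threshold):
    """Signal excess (dB): detection is predicted when this is >= 0."""
    return (source_level - transmission_loss - noise_level
            + directivity_index - detection_threshold)

# Example: a 140 dB source, 60 dB propagation loss (from an acoustic model),
# 65 dB ambient noise, a 20 dB array gain, and a 10 dB detection threshold.
se = signal_excess(140, 60, 65, 20, 10)
print(se)  # 25 -> detection predicted
```

The transmission loss term is exactly what the acoustic propagation models predict from the environmental data, which is how variability in, say, mixed-layer depth feeds through to predicted sonar performance.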

Supervisors

Dr Alan Hunter

Marcus Donnelly (SEA Ltd)

Research Interests

Because of the nature of its end users, my research places a key focus on transparency; the explainability of AI, and identifying where accountability should lie when AI algorithms are used, are therefore of great interest to me.

Other areas include:

AI for practical decision making.

Bayesian machine learning.

Responsible AI use within the defence sector.

Background

MSci in Geophysics at Imperial College

I worked at a world-leading marine robotics company for a year, specialising in data analytics.


Thao Do

AI for Identification and Support for Victims of Sexual Exploitation in Southeast Asia

Project Summary

My PhD research will focus on the use of machine learning techniques and qualitative research methods to identify online child sex trafficking patterns in Southeast Asia. This is an under-researched yet important area, closely linked to the contexts of the UK, US, and Europe. As organized crime grows on a transnational scale, cross-sectional and interdisciplinary research is becoming more important. According to the Internet Watch Foundation (2019), 95% of the world’s child sexual abuse material is hosted in Europe and North America, while most of the victims (73%) are in Asia and the Pacific.

Drawing on my experience of working with NGOs, researchers, and law enforcement officials in the field of security and intelligence in Vietnam, Cambodia, and Uganda, the research aims to: (1) explore the application of data science and computer science to the study of criminal activities, and thus develop early detection tools that assist in investigations and the victim rescue process; (2) build capacity and strengthen legal frameworks and collaboration across disciplines (academia, NGOs, government) and countries (UK, Vietnam, and Cambodia) through roundtable discussions and technical support; and (3) provide human rights approaches and an NGO perspective in promoting effective victim protection and assistance.

Because the crime is transnational (e.g., involving demand from Western countries, illicit websites hosted abroad, etc.), the research has the potential to be replicated in other countries where child sex trafficking is prevalent and to assist in uncovering the international criminal networks involved in child sexual exploitation.

Supervisors

Dr Alinka Gearon

Dr Emma Carmel

Dr Olga Isupova

Research Interests

AI for Social Good.

ICT4D.

Governance & civil society.

Child protection.

Disaster management.

Public health.

Background

BA in Economics, Foreign Trade University, Vietnam

Exchange in Environmental Economics, Montana State University, US

MA in Governance and Development, Institute of Development Studies (IDS), University of Sussex, UK

Work experiences in India, Tunisia, Cambodia, Vietnam, and Uganda

Tom Donnelly

Artificial Intelligence for the Control of Upper Limb Prosthetics

Project Summary

The field of rehabilitation and assistive robotics has developed numerous prosthetic devices for people who have lost limbs. These devices help users to recover their quality of life by allowing them to interact with and manipulate objects. Despite progress, current prosthetic devices still face many challenges in becoming intelligent, safe, reliable and acceptable for users. Fortunately, significant advances have been made in recent years in wearable and sensor technology, soft materials, control and machine learning methods. Together, these offer a large repertoire of opportunities to address the challenges of assistive robotics. This project will investigate the development of AI-based sensing and control strategies for prosthetic hands. Specifically, the project will address:

-development of machine learning methods for the reliable understanding of multimodal signals from the human body to perform correct prosthetic actions, which will require a deep understanding of cutting edge machine learning methods.

-design of control loops that make the prosthetic respond reliably to the intention of human movements.

-verification and transparency to ensure the correct functioning and safety of the prosthetic device, which will require research into how medical devices are regulated and how the designers should or should not be held accountable for devices.

-analysis of user perception of the usability, acceptability and effectiveness of the prosthetic, in order to optimise user engagement.

The project will apply novel psychological theory to identify specific barriers and facilitators (for example, using a cognitive psychology framework to determine whether users view prosthetics as effective, necessary and tolerable). The ‘Person-Based Approach’ will then be used to develop targeted behavioural support to overcome these barriers.

Supervisors

Dr Ben Metcalfe

Research Interests

Engineering applications of machine learning, with a particular focus on medical devices.

Background

MEng in Electronic and Electrical Engineering at Bath, with a final-year project focusing on machine learning for the classification of post-anoxic cardiac arrest coma patients’ EEG signals.

Andy Evans

AI and its Consequences for the Labour Market

Project Summary

AI will become a competitive necessity for many business sectors and, like previous automation technologies, will fundamentally transform the nature of work and organisational models. As with any technological deployment, people and process are two further key considerations in delivering organisational benefit. However, the implications of AI for the labour force may be bleak.

My project is focused on the implications of AI for the evolution of the future workforce, in particular investigating the skills and competencies that will be required in a workplace whose business model is enabled by AI. Organisations should have a social responsibility to ensure their workforce is trained appropriately, and government policies should adapt to mitigate the potential social and economic consequences of increased unemployment and inequality.

Supervisors

Prof Hugh Lauder

Research Interests

The social and economic impact of AI technologies on the workforce.

Background

I have worked in the global telecoms environment for the last 30 years in both operator and vendor environments. My roles have been at a senior management level covering strategy, technology, operations, transformation and financial domains. I have led teams and activities in strategic analysis, network design, new technology introduction, operational planning, outsourcing, business case development, financial planning, process development/improvement and cost improvement.

MSc Data Science & Analytics (Royal Holloway University of London)

MBA (Henley Management College)

BSc Electrical & Electronic Engineering (University of Leeds)

Tory Frame

AI in Support of Sleep

Project Summary

I am interested in using AI to help people sleep better – 50% of British adults get less than the recommended 7-9 hours per night, which is bad for their health, performance and wellness. Laboratory thermoregulation studies have greatly improved early-morning wakening and deep sleep. My aim is to use AI and smart devices to help adults thermoregulate, and therefore sleep better, in their own beds.

Supervisors

Dr Julian Padget

Dr Simon Jones

Research Interests

Physical and psychological impact of AI on humans, and how it can be used to support behaviour change.

Human Computer Interaction.

Smart devices.

Sleep.

Behaviour Change.

Background

I have spent most of my career helping drive strategic and commercial change in consumer companies. In my last role, I was responsible for Strategy, Business Development and Ventures at Diageo. Prior to that, I was MD International and Chief Marketing Officer at Barry’s Bootcamp, based in the US. Before that, I was a Partner at Bain, where I worked for 16 years; my last role there was running Results Delivery in Asia, based out of Shanghai. I am an active angel investor, backing founders in tech/consumer companies.

MA (Hons) in Politics, Economics, and Philosophy, University of Oxford

MBA, Harvard University

MSc in Computer Studies, University of Bath

 

Joe Goodier

Interpretable Deep Learning

Project Summary

Deep neural networks are increasingly used to classify and to aid decision making, yet most are black boxes: they are viewed simply in terms of inputs, self-optimising parameters, set hyper-parameters, and outputs, with little understanding of which features drive the output. What is needed are explainable neural networks, where it is possible to interpret the causal links behind a given output. However, it is not simply a matter of determining the decision-relevant parts of a prediction; this information is useless if it cannot be interpreted by the human in the loop, with the causal links understood in the context of use by a specialist who is not necessarily trained in machine learning.

I would like to produce an interpretable computer vision model for diagnostic imaging, and to investigate the balance between interpretability and completeness of explanation in that context. I will do so by attempting to answer the following questions: What are the most complete ways to present the information learned by the model in a format that is usable by clinicians? How can the information be simplified into an interpretable format without introducing bias?
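One simple family of methods for locating the decision-relevant parts of an image is occlusion sensitivity: masking regions of the input and measuring the change in the model's output. The sketch below is an illustrative toy, using a hypothetical stand-in score function rather than a real diagnostic network.

```python
import numpy as np

# Illustrative occlusion-sensitivity sketch. toy_model is a hypothetical
# stand-in for a trained classifier, not a diagnostic model.

def toy_model(image):
    """Hypothetical classifier score: responds to a bright central region."""
    return image[8:16, 8:16].mean()

def occlusion_saliency(image, model, patch=4):
    """Slide an occluding patch over the image; the score drop at each
    position indicates how much that region contributed to the prediction."""
    baseline = model(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            saliency[i // patch, j // patch] = baseline - model(occluded)
    return saliency

image = np.zeros((24, 24))
image[8:16, 8:16] = 1.0  # bright "lesion" in the centre
print(occlusion_saliency(image, toy_model).round(2))
```

Only the patches overlapping the central bright region produce a score drop, so the saliency map highlights exactly the region the toy model relies on; presenting such maps in a clinically usable form is where the interpretability-versus-completeness trade-off arises.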

Supervisors

Dr Neill Campbell

Research Interests

Interpretable Deep Learning.

Computer Vision.

Generative Models.

Computer-Aided Detection and Diagnostics.

Background

My academic background is originally in astrophysics: I undertook an integrated master’s in Physics and Astronomy at Durham, gaining experience in both the observational and computational sides of the field. My master’s project was done at the Institute for Computational Cosmology (ICC) at Durham. I also undertook a 6-month internship in the AI department of the tech company Innovative Physics, where I worked with their lung cancer detection model and developed a lung nodule segmentation model for use in the drug trial industry.

Finn Hambly

Round-trip Policy Engineering

Project Summary

Most policies are written by humans for humans, but some environments are too complex or change too quickly for manual policy design to remain effective. The central aim of my research is to establish methods for algorithmic policy engineering that are constrained to acting in the interests of humans, while retaining the benefits of autonomous policy iteration. The key questions that guide this research ask:

-How can policies be safely constructed with statistical machine learning methods?

-How can these policies be accurately validated and communicated between humans and software agents?

-To what degree should human preferences be inferred, and how can AI systems be developed to safely infer human preferences?

Supervisors

Dr Julian Padget

Research Interests

AI alignment.

Policy engineering.

Cooperative inverse reinforcement learning.

Iterated distillation and amplification.

AI governance.

Background

MSci in Natural Sciences specialising in Physics, University of York

Medical Microwave Imaging Research, University of York & Sylatech

Matt Hewitt

Hierarchical Reinforcement Learning for Transparency in AI

Project Summary

To the average human, picking up an item is a trivial action. However, this simple function consists of numerous complex sub-actions that can be represented as a hierarchy of smaller tasks. The brain abstracts these sub-actions, so that a human considers only the high-level skill while the lower-level actions are performed subconsciously.

Current reinforcement learning (RL) algorithms focus on mapping a state observation to a corresponding set of low-level actions. This creates a lack of transparency in how conventional RL agents operate when applied to large-scale problems. It also means they struggle with temporal abstraction: they lack generality, work only in the environments in which they were trained, and struggle with even the smallest modifications. These are all problems the human brain has evolved to overcome, and they are what advancements in RL should focus on.

Hierarchical reinforcement learning (HRL) offers a solution by composing ever smaller sub-skills into temporally abstract, high-level behaviours. This increases the transparency of the resulting agents, since the actions taken can be represented as a set of high-level skills that humans can easily understand. In addition, having algorithms create these hierarchies autonomously should result in the development of domain-independent skills. My research will therefore focus on new methods for autonomous skill discovery in hierarchical reinforcement learning.
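The skill hierarchies described above are commonly formalised as "options": temporally extended actions bundling an initiation set, an internal policy over low-level actions, and a termination condition. The following is a minimal illustrative sketch; the corridor environment and the go-to-door skill are hypothetical examples, not part of the project.

```python
# Minimal sketch of the "options" formalism behind hierarchical RL.
# The corridor world and the skill below are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    name: str                             # human-readable skill label
    can_start: Callable[[int], bool]      # initiation set I(s)
    policy: Callable[[int], str]          # intra-option policy pi(s)
    should_stop: Callable[[int], bool]    # termination condition beta(s)

# A 1-D corridor with a doorway at state 5: one skill walks to the door.
go_to_door = Option(
    name="go-to-door",
    can_start=lambda s: s != 5,
    policy=lambda s: "right" if s < 5 else "left",
    should_stop=lambda s: s == 5,
)

def run_option(option, state, step):
    """Execute one option to termination; a trace of skill names (rather
    than raw low-level actions) is what makes behaviour readable to humans."""
    assert option.can_start(state)
    while not option.should_stop(state):
        state = step(state, option.policy(state))
    return state

step = lambda s, a: s + 1 if a == "right" else s - 1
print(run_option(go_to_door, 0, step))  # terminates at the door, state 5
```

An agent that plans over named options like this one exposes its intent ("go to the door") at a level a human can inspect, which is the transparency benefit the summary describes.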

Supervisors

Dr Özgür Şimşek

Research Interests

Reinforcement Learning with an emphasis on hierarchical methods.

Deep Learning.

Robotics and Automation.

Background

BSc Computer Science, University of Bath

MSc Data Science, University of Bath

Emma Li

Improved Interpretability Methods Towards More Responsible AI

Project Summary

The increased reliance on machine learning in an expanding number of areas of our lives (e.g. healthcare, self-driving cars, financial decisions, and criminal justice) means that we place great trust in these systems; our belief that they are correct and fair assigns them enormous responsibility. To ensure that we do not replicate the failings of our current society, and to avoid potentially drastic errors, it is crucial that these algorithms are trustworthy, that their results are verifiable and understandable, and that we lessen the uncertainty that may surround their decisions. Since innumerable choices go into creating a deep learning model, its construction and results may often be impossible to replicate, and this lack of reliability is especially concerning given the damaging consequences it could lead to.

Interpretability methods are currently still in their early stages, which provides scope for the exploration of ideas to create new, robust methods. We need to be able to explain the actions of any system in an understandable way to those employing it and those affected by its decisions. I aim to build improved interpretability methods for deep learning by using machine learning and statistical techniques in order to create more trustworthy and ethical AI. While the very nature of algorithms prompts different considerations on how to be ethical compared with how humans endeavour to live an ethical life, the classical theories of consequentialism and deontology will be explored in order to construct ethical AI. The hope is that the results of this work will lead to improved algorithms, and increased trust and adoption of AI to help it reach its fullest potential to benefit society by complementing and enhancing human capabilities.

Supervisors

Dr Xi Chen

Research Interests

AI and Explainability, Ethics and AI, Human-Centred AI, Deep Learning – particularly Uncertainty and Fairness. My interests involve developing new methods to complement and augment human abilities that will lead to more equitable and just technology and solutions.

Background

MMath in Mathematics and Computer Science (with a year in industry), University of York

Three years working in industry doing AI research

Pablo Medina

Ecological Rationality of Choices Between Gambles

Project Summary

What is optimal choice under uncertainty? This is a fundamental question in artificial intelligence, as well as in economics and psychology. The standard answer comes from expected utility theory, but it assumes an environment in which the possible outcomes of actions, and their probabilities, are known. But what if this is not the case, as is commonly true in the real world? When potential outcomes and their likelihoods must be learned from experience (through sampling and exploration), the representation of the outcome space may be systematically incomplete, and beliefs about event likelihoods may be systematically biased, depending on the exploration strategies used. By studying how humans deal with uncertainty – how we manage the available information to construct decision rules – and by understanding the properties of the environments in which decisions are made, I expect to find effective and transparent decision rules that help human-machine interaction in automated decision making. If successful, this would provide new ways of approaching automated decision making, with potential applications to improve decision making in industry and the public sector.
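The contrast drawn above, between decisions from description (known outcomes and probabilities) and decisions from experience (values estimated by sampling), can be illustrated with a minimal sketch. The gambles and sample sizes below are hypothetical examples, not study materials.

```python
import random

# Minimal sketch: expected value with known probabilities versus a value
# estimated from a few experienced draws. The gambles are hypothetical.

def expected_value(gamble):
    """Gamble as a list of (probability, outcome) pairs."""
    return sum(p * x for p, x in gamble)

safe = [(1.0, 3.0)]                # 3 for sure
risky = [(0.1, 32.0), (0.9, 0.0)]  # rare large win, EV = 3.2

print(expected_value(safe), expected_value(risky))  # 3.0 3.2

def sample_mean(gamble, n):
    """Estimate a gamble's value from n experienced draws."""
    outcomes, weights = zip(*[(x, p) for p, x in gamble])
    draws = random.choices(outcomes, weights=weights, k=n)
    return sum(draws) / n

# With small samples the rare 10% outcome is often never encountered, so
# experience-based estimates of the risky gamble are systematically skewed.
random.seed(0)
print(sample_mean(risky, 5))
```

Under full information the risky gamble has the higher expected value, yet a small experienced sample frequently contains no winning draw at all, which is one way exploration strategy can bias beliefs about event likelihoods.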

Supervisors

Dr Özgür Şimşek

Prof Ralph Hertwig (MPI, Berlin)

Research Interests

Experience-based decisions under uncertainty.

Cognitive models of judgment and decision making.

Reinforcement Learning.

Bounded and environmental rationality.

Background

BSc Economics at University of Almeria

MSc Finance at Carlos III University of Madrid

MSc Economics (Behavioral and Experimental Economics) at University of Granada

Deborah Morgan

The Application and Uses of AI Within Education, particularly Within the Teaching and Understanding of Literacy

Project Summary

Can AI contribute to a personalised and creative conception of education, or simply provide more nuanced assessment opportunities? I will explore the use, design and wider societal implications of AI within education. In particular, I will explore the role that continually improving language models could play in supporting literacy. This will incorporate an evaluation of the current use, norms, design, ethical implications and perceived purposes of AI within the subject of English, applying scenario modelling (at different levels of abstraction) to review both the potential opportunities and the risks of varied system designs. The purpose, ethical implications and place of AI within current, shifting perceptions of education is an area in urgent need of deeper consideration.

Supervisors

Prof Hugh Lauder

Research Interests

Learning sciences focused upon the application and use of AI within education.

The ethical and societal implications of AI to support teaching and to understand learning.

Human-Computer Interaction.

Background

I worked in industry as a Corporate Projects Solicitor and communications consultant before moving into Education. My MSt focused upon the teaching of English and my research explored feedback methods within narrative writing.

LLB Law – University of Leicester

Legal Practice Course – University of Sheffield

PGCE, MSt – University of Cambridge


Jessica Nicholson

Robotics Processing Architecture: A Skill Acquisition Approach to Artificial Cognitive Development

Project Summary

My project considers how an artificially intelligent agent might autonomously acquire skills via state-of-the-art graph-based methods. Specifically, it explores cognitive robotics and how individual technologies can perform specific tasks that emulate human intelligence and cognitive functions. This entails studying how we can train and create skill-learning agents that are autonomous, adaptive, and, more importantly, cognizant of their surroundings, whilst maintaining transparent and responsible practices. Through my studies I hope to answer the following questions: is a robot cognizant of its own hierarchical learning? Is it able to contextualize the specific skills it has learned from a task and retain that memory in a repository? Popular portrayals of artificial intelligence have long demonized autonomous robots, and I hope that my research will bring some transparency to the realities of the field. Additionally, I hope to give people more knowledge and understanding of the progression of robots’ cognitive development and behavior.

Supervisors

Dr Özgür Şimşek

Research Interests

Artificial Intelligence and Machine Learning with a focus in Reinforcement Learning, Unsupervised Learning and Cognitive Development of Robots. Above all, the ethical and transparent implementation of these subject matters.

Background

BSc Economics and Women & Gender Studies, Rutgers University

MSc Data Science, University of Bath

I have three years of experience working as a software and management consultant, and one year of experience as a data analyst within the fintech and blockchain domain.

Alice Parfett

How AI Affects Privacy and Prejudice for Humanity

Project Summary

The principal aim of my research is to detail how AI could affect both privacy and prejudice for humanity. However, the broadness of all three elements of this aim requires a focus. Google and Facebook are among the world leaders in AI innovation; the sheer scale of their operations and recent scandals make them important case studies to compare. I will examine how these two private companies approach various AI challenges, from facial recognition to the algorithms that decide personalized adverts for users. By focusing on these two companies, my research will be able to offer unique contributions on how privacy and prejudice can feed back on each other and exacerbate problems. It will question whether humanity is ignorant of its gradual loss of privacy, or whether we are simply willing to sacrifice it for the ‘handy’ new AI-based services these companies offer. Similarly, does this extend to ignoring the prejudices emerging, seemingly in all of these services, against multiple different groups of people? The different origins of this prejudice will be examined, one example being the dataset choices that led Google Photos to classify two black people as ‘gorillas’. My work is intrinsically linked to the UKRI CDT in ART-AI’s theme of policy making with and for AI, and its findings will help highlight how innovation must become more accountable, responsible and transparent for the good of humanity.

Supervisors

Prof Hilde Coffé

Research Interests

‘Bias’ in AI.

AI and Privacy.

Historical Precedents.

Interdisciplinary Approaches.

Background

BSc Mathematical Sciences and History at University of Exeter; my dissertation was entitled ‘Artificial Intelligence and Slavery: the History and Mathematics of the Future’.

Brier Rigby Dames

Modelling Brain-Like Intelligence in an Evolutionary Context for AI Applications

Project Summary

A challenge for AI research is to operate autonomously in natural environments. Many organisms process the world in milliseconds and react accordingly. Achieving such a rapid and complex response with AI is an ongoing challenge. One solution is to pursue research on Brain-Like Intelligence: biologically-inspired solutions that replicate natural processes, which address the computational demands of multiple sensory inputs and potential motor outputs.

Brain-Like AI research has typically focused on human intelligence as the end-goal. What if the focus was broadened to look at other evolved intelligences from the natural world? The pluralistic view that all species have evolved adaptive sensory and motor capabilities for different environments might better equip autonomous robots with the flexibility to use different computational approaches attuned to their environment.

My project aims to examine the evolution of sensory cognition by the phylogenetic modelling of behavioural data to provide a basis for biologically-inspired Brain-Like AI. The results will stimulate innovations in the diversity of artificial intelligences and increase the input and output modalities for AI agents and robotics.

 

Supervisors

Dr Michael Proulx

Prof Eamonn O’Neill

Dr Alexandra de Sousa

Research Interests

Modelling the visual system of humans and other species.

Computer vision, audition, and ethical implications.

Biologically-inspired robotics and autonomous machines.

The use and regulation of AI and Robotics in the areas of health care and education.

Background

BSc Biology and Psychology (Joint Honours) at the University of Stirling

MSc Psychological Research Methods, specialising in neuroscience, at Birkbeck, University of London

I worked as an RA at the University of Cambridge for almost two years: in the Department of Psychology, Faculty of Education, and then at the Cambridge Institute of Public Health. I also worked as an analyst in the areas of AI ethics, cybersecurity, and healthcare. Most recently, I was a project assistant for the software team at CMR Surgical.

Alex Taylor

Machine Learning in Safety-Critical Engineering

Project Summary

In collaboration with Rolls-Royce, I will be researching the applicability of machine learning in safety-critical engineering. I will be working on advancing the state of the art in this domain to meet the challenge of mitigating “worst case” performance in systems that must be trusted not to fail, for example in jet engines or in control systems for autonomous vehicles. I plan to develop new AI that will work with and for world-leading engineering teams, helping them to make better decisions and improve design processes without sacrificing quality or safety.

I will be working with data scientists in Rolls-Royce’s R2 Data Labs and with experts within the defence business, who can provide world-leading insight into how innovations in machine learning, data science and related technologies can be used in this domain.

As aerospace engineering is a highly regulated and safety-critical industry, this project requires that the AI developed is accountable, responsible and transparent. There will be a focus on understanding “why” an AI does what it does, what guarantees can be made about an AI’s worst-case performance, and how this can be conveyed to regulators.

Supervisors

Dr Neill Campbell

Research Interests

I am generally interested in all things data science and machine learning. I have a particular focus on the transparency and reliability of machine learning.

Background

MSc Data Science, University of Bath

MEng Aerospace Engineering, University of Bristol

One year of experience working in patent law.

Scott Wellington

Decoding Imagined Speech from the Brain Signal

Project Summary

Brain-computer interfaces: state-of-the-art machine learning models to decode imagined speech (and auditory speech) have shown impressive results. Much ink has been spilled over ‘thought-reading’ technologies among AI forecasters and ethicists; however, for individuals with neurodegenerative conditions or speech deficits, the practical real-world utility of assistive technologies is already clear. I propose a project to ‘close the gap’ between two domains: covert speech decoding from surgically invasive electrocorticography (ECoG) data currently outperforms models using data from surface electroencephalography (EEG). I aim to apply novel approaches inspired by cutting-edge neural processing to inform the future(s) of these speech neuroprostheses.

Supervisors

Dr Dingguo Zhang

Research Interests

Brain-computer interfaces.

Signal decoding and machine learning, including methods of communication for public understanding and to inform decision-making.

AI solutions for medical, assistive and human augmentative technologies.

Biosignal data, processing, and practical ethics.

Background

MSc Speech and Language Processing, University of Edinburgh

One year with the company SpeakUnique, which builds personalised synthetic voices for people with impaired speech resulting from neurodegenerative conditions; initially with the Centre for Clinical Brain Sciences, investigating machine learning techniques to decode speech from EEG and ECoG signal data.

2019 Cohort

Ronny Bogani

To What Degree AI Can Empower, Protect and Further Realise the Rights of Children and Young People in Accordance with the UN Convention on the Rights of the Child

Oscar Bryan

Machine Learning for the Detection of Unexploded Ordnance Using Synthetic Aperture Sonar

George Fletcher

Intelligent 3D Character Creation & Animation

Catriona Gray

The Adoption and Use of Artificial Intelligence Technologies (AITs) by Humanitarian Organisations

Akshil Patel

Autonomous Skill Acquisition

Elena Safrygina

Translating Spatio-Temporal Imaging Data Into Clinical Data Using Machine Learning

Jack E. Saunders

Reactive Collision Avoidance Using Deep Reinforcement Learning for the Application of UAV Delivery

Jack R. Saunders

Crossing the Uncanny Valley: Using Deep Learning for Realism in Facial Animation

Mafalda Teixeira Ribeiro

Controlled Biological Neuron Growth for the Creation of Animats and Novel AI Techniques

Elsa Zhong

How Interactive AI Can Be Used to Manipulate Humans

Damian Ziubroniewicz

Fair Machine Learning

 

Ronny Bogani

To What Degree AI Can Empower, Protect and Further Realise the Rights of Children and Young People in Accordance with the UN Convention on the Rights of the Child

Project Summary

I will explore AI’s potential as a digital guardian in the online environment. My project is relevant to the present debate concerning both the means and the desirability of classifying individuals as ‘children’ under the current age-based system, and how to remain consistent with the best interests of the child in the digital world. It asks how AI can be utilized to evaluate the individual developmental maturity of children independent of age, and how artificial intelligence may be employed to protect and ensure children’s rights to development, information, education, participation and privacy in a safe and secure digital environment. My research is intended to ascertain whether AI acting as a digital guardian could redefine the position and power of children in relation to the competing protective interests of parents and policy interests of the State, using the United Nations Convention on the Rights of the Child (UNCRC) as a regulatory framework.

Supervisors

Dr Emma Carmel

Dr Julian Padget

Research Interests

AI and the Law.

AI and Human Rights.

Background

Bachelor of Arts, University of Central Florida

Juris Doctor, Florida State University

Master of Laws in Human Rights, University of Edinburgh

Oscar Bryan

Machine Learning for the Detection of Unexploded Ordnance Using Synthetic Aperture Sonar

Project Summary

My research area is the application of machine learning methods for the detection of sea-disposed unexploded ordnance. Advances in underwater sensing technologies (synthetic aperture sonar) and autonomous vehicles have resulted in large amounts of survey data at high (centimetre) resolution. These advances motivate the development of autonomous mapping and assessment for the remediation of historic weapons dumped at sea.

Supervisors

Dr Alan Hunter

Dr Tom Fincham Haines

Research Interests

Sonar, deep learning, Bayesian machine learning and explainable AI.

Background

MEng Mechanical Engineering at University of Bath, ART-AI MRes

George Fletcher

Intelligent 3D Character Creation & Animation

Project Summary

I am investigating the use of AI for rapid character and asset creation, whether for highly stylised avatars or for high-quality characters, with semantically meaningful controls assisting design. The focus is on leveraging the rich quadrupedal motion capture data of CAMERA here at Bath via deep learning techniques. My project will also consider the ethical issues concerned with high-quality avatar generation, and the potential impacts on the creative industries (e.g. games industry ‘crunch’).

Supervisors

Prof Darren Cosker

Dr Polly McGuigan

Research Interests

Machine Learning, particularly concerned with Animation and Procedural Character Generation for the creative industries, with broader interests in Reinforcement Learning, Graphics, and Vision.

Background

MMath (4 years) from the University of Oxford

Catriona Gray

The Adoption and Use of Artificial Intelligence Technologies (AITs) by Humanitarian Organisations

Project Summary

I will examine decisions relating to the adoption and use of data-driven AITs as decision-supporting tools by humanitarian actors within the ‘global refugee regime’. Adopting an institutional ethnographic approach, my project aims to identify the forces shaping these decisions as well as their ethical and political implications.

Supervisors

Dr Emma Carmel

Dr Marina De Vos

Research Interests

Displacement, humanitarianism, Science and Technology Studies (STS), governance, policy anthropology, socio-legal studies, decoloniality.

Background

LLB Law, MRes Sociology, MSc Refugee and Forced Migration Studies

Akshil Patel

Autonomous Skill Acquisition

Project Summary

My project will develop algorithms that give artificial agents the ability to autonomously form useful high-level behaviours, such as grasping, from available behavioural units (for example, primitive sensory and motor actions available to a robot). This allows a developmental process during which an agent can learn behaviours of increasing complexity by continuously building on its existing set of skills. Using high-level skills aggregates the agent’s behaviour, which facilitates transparency through explainability.

Supervisors

Dr Özgür Şimşek

Dr Iulia Cioroianu

Research Interests

All things Machine Learning with a particular interest in Reinforcement Learning and Transparency and Explainability in AI.

Background

BSc Mathematical Sciences and MSc Machine Learning and Autonomous Systems at University of Bath

Organiser of BathML meetup (2019-)

Elena Safrygina

Translating Spatio-Temporal Imaging Data Into Clinical Data Using Machine Learning

Project Summary

Tumour heterogeneity at the protein level has been associated with poor prognosis in several human carcinomas. Current approaches for assessing protein function rely on intensity-based methods, which are limited by their subjectivity and specificity. A novel assay using amplified, time-resolved Förster resonance energy transfer (FRET) is a highly specific and sensitive method and can be adapted to any protein.

The aim of the project is to combine both methods to reveal molecular heterogeneity at the protein level and, using machine learning techniques, translate it into an interpretable format that can be widely used by clinicians.

Supervisors

Prof Banafshé Larijani

Dr Julian Padget

Research Interests

Probabilistic modelling, Machine Learning, and interdisciplinary applications to biomedical sciences.

My research aim is to extract valuable information and automate the inference of clinical data using Machine Learning.

Background

Specialist diploma in Engineering, Ural Federal University, Ekaterinburg

MSc Data Science, Birkbeck College, University of London

Jack E. Saunders

Reactive Collision Avoidance Using Deep Reinforcement Learning for the Application of UAV Delivery

Project Summary

Unmanned aerial vehicles (UAVs) lack the safety technologies required for regulations to allow delivery applications. Current legislation is very restrictive, only permitting their operation within line of sight and during certain daylight hours. This is for good reason: UAVs can weigh up to 25 kg and travel at speeds approaching 45 m/s, which presents an enormous threat to individual safety, with several reports describing injuries or property damage from UAV crashes.

I hope to research collision avoidance systems for the dynamic environments that delivery UAVs will encounter mid-flight, more specifically pop-up obstacles that require highly reactive manoeuvres.

Supervisors

Dr Wenbin Li

Dr Pejman Iravani

Research Interests

Training manoeuvres using deep reinforcement learning and exploring new ways that visual intelligence can impact the trajectory generation of UAVs, then finding ways this research can help increase trust in UAV delivery.

Background

MEng Mechanical Engineering at Cardiff University

Jack R. Saunders

Crossing the Uncanny Valley: Using Deep Learning for Realism in Facial Animation

Project Summary

The project involves capturing micro-level idiosyncrasies in individual faces and reconstructing them using state-of-the-art deep learning technologies, in particular generative adversarial networks. The project also touches on the ethical issues surrounding the use of realistically reconstructed faces, such as those posed by deepfakes.

Supervisors

Prof Darren Cosker

Dr Anthony Little

Research Interests

My research interests include the use of AI methods, particularly GANs in computer modelling for CGI in films and games, with a specific focus on faces.

Background

BSc in Mathematics from the University of Southampton

Mafalda Teixeira Ribeiro

Controlled Biological Neuron Growth for the Creation of Animats and Novel AI Techniques

Project Summary

My project focuses on developing devices for directing the growth of biological neurons, as well as recording from and stimulating these using electrical or optical means. The cultured neuronal network on the device can then interact with a simulated environment through an animat – a simulated robot. This setup could offer novel insights into the mechanisms behind intelligent behaviour at a miniaturised level, which could then be used to improve or create novel AI approaches. Given the interdisciplinarity of this project, concepts of ethics, accountability, responsibility, and transparency are crucial at every stage, from growing biological cultures to any derived AI applications.

Supervisors

Dr Benjamin Metcalfe

Dr Christof Lutteroth

Dr Michael Proulx

Research Interests

Using bioelectronics for better monitoring and understanding of cellular behaviour, and the mechanisms behind intelligence in order to develop novel AI techniques.

Background

I received an MEng in Electrical and Electronic Engineering from the University of Bath, with a final year project focusing on the development of highly sensitive microfluidic lab-on-chip devices. I also had an industrial placement at Intel UK where I conducted digital design and verification of chips for mobile applications.

Elsa Zhong

How Interactive AI Can Be Used to Manipulate Humans

Project Summary

My research investigates how conversational interactive AI (CIAI) (e.g. Amazon Alexa) can influence humans by creating or enhancing human cognitive biases. Cognitive biases are universally inherent in the human brain, and CIAI can create or enhance them to manipulate or nudge humans. The research starts by asking whether people really want AI to be free of biases, or whether AI should use inherent human biases to some extent to influence humans; it then investigates how CIAI can exert such influence. The results are expected to be used directly to set industry standards for CIAI and to address its current lack of clear standards and accountability.

Supervisors

Prof Eamonn O’Neill

Dr Vinay Namboodiri

Research Interests

Conversational interactive AI.

Human and machine biases.

Behavioural economics.

Background

BSc in Applied Psychology and Medical Law, Chongqing Medical University

MSc in Social Psychology, Lancaster University

Two years of working experience in the AI industry; past research projects include affective computing and rehabilitation robots for the treatment of children with ASD.

Damian Ziubroniewicz

Fair Machine Learning

Project Summary

The project focuses on the development of fairness-adjusted machine learning algorithms. The aim is to create models that reduce the presence and impact of bias and unfairness, such as discrimination, in machine learning outputs while preserving optimal predictive capacity.

Supervisors

Dr Özgür Şimşek

Dr Iulia Cioroianu

Prof Nick Pearce

Research Interests

Machine Learning, data science, algorithmic bias, fairness.

My main research interest involves developing new techniques in the computational sciences to answer questions and problems in the social sciences.

Background

MSc Data Science, University of Liverpool. My research project focused on the development of an algorithm to measure and quantify social value in the digital city context. I also have one year of professional experience as a data scientist.