George Sandle

Deep Generative Energy-Based Models

Project Summary

This work seeks to improve the capabilities of generative models through an investigation of energy-based modelling. An energy-based approach to learning seeks to minimise the difference between a model of the world and a measurement of it obtained through observed variables. Producing models in this manner offers advantages in terms of improved calibration, adversarial robustness and out-of-distribution detection. Much inspiration for this approach originates from the work of Karl Friston FRS, whose ‘free-energy principle’ serves as a framework for many of the deep-learning architectures to be investigated during this project. One example of an energy-based approach is the deep equilibrium model, which offers a relatively computationally inexpensive way to implement a neural network with ‘infinite’ hidden-layer depth. Different methods for implicit and explicit minimisation of these models will be explored. Particular attention will be paid to producing models that are invertible: inversion refers to obtaining a latent code vector that explains the observed measurements as completely as possible, which is useful for producing models that generalise well.
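As a rough illustration of the deep equilibrium idea mentioned above, the sketch below (not project code; the layer shape, solver and tolerances are illustrative assumptions) repeatedly applies a single weight-tied transformation to its own output until it reaches a fixed point. The equilibrium state then plays the role of the hidden representation of an ‘infinitely’ deep, weight-tied network. Practical deep equilibrium models typically find this fixed point with a root-finding solver and differentiate through it implicitly; the naive iteration here is only meant to convey the idea.

```python
# Minimal deep equilibrium (DEQ) layer sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn


class DEQLayer(nn.Module):
    def __init__(self, dim, hidden):
        super().__init__()
        # f_theta(z, x): one weight-tied block reused at every "layer".
        self.f = nn.Sequential(
            nn.Linear(dim + dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, max_iter=50, tol=1e-4):
        # Solve the fixed-point equation z* = f_theta(z*, x) by naive iteration.
        z = torch.zeros_like(x)
        for _ in range(max_iter):
            z_next = self.f(torch.cat([z, x], dim=-1))
            if torch.norm(z_next - z) < tol:
                z = z_next
                break
            z = z_next
        # z approximates the equilibrium hidden state of an "infinitely" deep net.
        return z


# Usage: the equilibrium state z_star acts as the hidden representation of x.
layer = DEQLayer(dim=8, hidden=32)
x = torch.randn(4, 8)
z_star = layer(x)
```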

Research Interests

Generative modelling.

Statistical physics.

Optimization.

Mathematical neuroscience.

Philosophy of artificial intelligence.

Background

BSc Chemistry – University of Plymouth.

MPhil Computational Chemistry – University of Cambridge.

I have also worked as a data scientist in the life sciences.

Supervisors

Dr Vinay Namboodiri