Karolis Jucys

Reinforcement Learning Interpretability

Project Summary

While deep reinforcement learning systems have achieved impressive results in many domains, their adoption in real-world applications has progressed more slowly, partly because of their black-box nature. My project aims to shed light on how these agents make their decisions. This can be done through a range of methods, from examining individual neurons of the agent's neural network to creating inherently interpretable and simple decision tree policies that closely mimic the behaviour of the agent.
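To make the decision-tree idea concrete, here is a minimal, illustrative sketch of distilling a black-box policy into a depth-1 decision tree (a stump) by imitating its actions on sampled states. Everything here is a stand-in: `agent_policy` is a toy function in place of a trained deep RL agent, and a real project would typically use a proper tree learner such as scikit-learn's `DecisionTreeClassifier` rather than this hand-rolled search.

```python
import random

def agent_policy(state):
    # Stand-in for a black-box deep RL policy (illustrative only).
    return 1 if state[0] + 0.3 * state[1] > 0.6 else 0

def collect_dataset(n=500, seed=0):
    # Query the agent on sampled states to build (state, action) pairs.
    rng = random.Random(seed)
    states = [(rng.random(), rng.random()) for _ in range(n)]
    return [(s, agent_policy(s)) for s in states]

def fit_stump(dataset):
    # Exhaustively search (feature, threshold) for the axis-aligned
    # split that best agrees with the agent's chosen actions.
    best = None
    for feat in range(2):
        for s, _ in dataset:
            thr = s[feat]
            agree = sum((s2[feat] > thr) == (a == 1) for s2, a in dataset)
            if best is None or agree > best[0]:
                best = (agree, feat, thr)
    agree, feat, thr = best
    fidelity = agree / len(dataset)  # fraction of actions reproduced
    return feat, thr, fidelity

data = collect_dataset()
feat, thr, fidelity = fit_stump(data)
print(f"if state[{feat}] > {thr:.2f}: action 1, else action 0 "
      f"(fidelity {fidelity:.2%})")
```

The resulting rule is a human-readable surrogate for the agent, and its fidelity score quantifies how faithfully the simple policy tracks the original one; deeper trees trade some of this simplicity for higher fidelity.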

Research Interests

Reinforcement Learning

Deep Learning

Interpretability

Background

BSc Computer Science – Vilnius University

MSc Mathematics – Vilnius University

Eight years as a quantitative analyst in finance

Co-organizer – MineRL competitions at NeurIPS

Supervisor

Prof Özgür Şimşek

Dr Rachael Bedford