The UKRI Inter AI CDT Conference took place on 7 and 8 November 2022 at the Bristol Hotel, held together with the Interactive Artificial Intelligence CDT, based at the University of Bristol, and the Foundational Artificial Intelligence CDT, based at UCL. Over 170 students, academics and industry representatives came together for this two-day conference, which aimed to enable collaboration as well as to deliver pertinent workshops.
On day 1 we had Professor Nello Cristianini, from the Department of Engineering Mathematics at the University of Bristol, as the keynote speaker, followed by a choice of two morning and two afternoon workshops (details of these workshops are below). Day 1 ended with a poster session at the M Shed, with over 45 posters submitted by students across the three CDTs. Prizes for the top three posters were presented to the students at the conference dinner. Congratulations go to Scott Wellington, from ART-AI cohort 2, who won the competition, and to Elsa Zhong, from cohort 1, who came third!
On day 2 we had Professor Francesca Toni, from the Department of Computing in the Faculty of Engineering at Imperial College London, as the keynote speaker; again this was followed by a choice of two morning and two afternoon workshops (details of these workshops are below).
Below is some of the feedback we received from those who attended:
“There was excellent back and forth amongst participants and very interesting ideas were presented.”
“I really enjoyed getting to network and the social opportunities.”
“What I enjoyed most was meeting everyone and the chance to discuss research.”
“It was really nice seeing other people from other CDTs that also have similar interests and talk to them about their research.”
Due to its success, we are hoping to make this an annual event! If you would like to see more about what happened at the conference, please watch the video below.
Information about the workshops
Day 1 – Morning Session
AI Ethics in Defence
The main objective of the workshop is to encourage engagement on this topic among a broad range of students and academics, and to allow the MOD and the defence industry to share their thinking while also receiving input from attendees. This is an all-day workshop: in the morning session you will hear from speakers from both academia and industry before breaking into discussion groups to focus on a problem posed by the table host. The afternoon session will feed back from the discussions and end with a panel session.
Deep Tech Entrepreneurship in AI
Hear from a panel of three entrepreneurs in the AI sector:
Dr Riam Kanso is the CEO of Conception X, after heading up student entrepreneurship at University College London Engineering since 2017. Riam has a DPhil in Neuroscience from Oxford University. She has launched two companies, worked in med-tech and computational science, and taught entrepreneurship at both UCL and Cambridge. Conception X is now her full-time mission. It has been set up as a not-for-profit with the aim of helping bright students solve global challenges.
Stacy-Ann Sinclair is the CEO and founder of CodeREG, a regtech startup codifying natural-language regulation into machine-executable regulation. They use machine learning and cutting-edge natural language processing techniques to better understand regulations and their impact in real time. A computer scientist who has spent the last 10 years building trading systems and globally scalable data platforms for UBS, Barclays Investment Bank and Bank of America Merrill Lynch, Stacy-Ann strongly believes most problems can and should be solved in software; to this end she founded CodeREG, which is on a mission to make regulatory change as simple as a software update. Stacy-Ann is an active advocate for increasing the number of women in tech and for introducing coding to children at an early age, especially young girls. She supports and works with initiatives contributing positively to these causes, such as Stemettes and Code First Girls.
Thomas Stone from Kintsugi (Ad)ventures.
Day 1 – Afternoon Session
AI Ethics in Defence
Continued from the morning session, see above
AI & healthcare workshop
Involving leading experts in AI for healthcare, this workshop introduces you to the growing potential of AI to transform health. Scenarios based on medical imaging in particular will be covered to demonstrate how AI is used to tackle medical challenges such as brain cancer detection and segmentation, and dementia prediction. You will also have the opportunity to discover how cutting-edge deep learning techniques can be used to impact clinical practice.
Day 2 – Morning Session
AI & healthcare workshop
Continued from day 1, see above. You may select either or both of the sessions.
How to do experiments when human subjects are involved
Machine learning and AI have many application areas that involve interaction with people. In this session we will present a high-level overview of how to conduct human-participant research studies, with quantitative or qualitative data, in controlled experiments or in the messy real world.
Day 2 – Afternoon Session
Brain Computer Interfaces (BCI) (1 hour)
In this workshop, attendees will get a hands-on session with a popular commercial brain-computer interface (BCI) device. We will record and visualise some of your own brain signals, before exploring some of the ways in which your data may be used to shape the future of BCIs for fields as diverse as soft robotics, visual decoding, and speech.
Where scale fails (1 hour)
Remarkable progress has been made recently in AI, in particular in Natural Language Processing and Computer Vision. For example, large-scale language models such as GPT-3 have transformed our ability to generate coherent text and process text more generally. Indeed, there was much media excitement as to whether these models were even “sentient”. DALL-E 2 and related models are able to generate photo-realistic images from complex text prompts. These advances have largely been driven by industry (since training a competitive large-scale language model costs many millions of dollars). Underlying these advances is the use of massive datasets and massive computing. This raises several important questions:
– Is it justifiable to expend so much resource (energy) on training such models?
– What is the role of universities in light of the costs and human resources needed to train these large models?
– Should these “foundation” models be considered free resources for humanity given that they are arguably “just” scaled up versions of academic (publicly funded) research?
– To what extent should we be concerned about the use of these models, given that our ability to understand them (including their biases) may be limited?
– To what extent are the details of these models important? Evidence suggests that competing architectures perform similarly; perhaps the scale of data and compute is really the main ingredient, with the model details being less relevant.
– Does the complexity of these models put an end to the dream of finding a “simple” explanation for language?
– What are the implications for future research and applications of these models?
How to ensure academic work can be reproduced by others
This session will look at best practice in AI research when it comes to reproducibility. The session has two parts. In the first hour, Dr Emma Tonkin (Research Fellow, University of Bristol) will look at data management for reproducibility. Em will talk about the distinction between reproducibility and replicability and what makes this difficult. She will then cover a range of practical steps towards reproducibility and replicability, including data formats, metadata, data management plans, and platforms for storage and sharing. In the second hour, Dr Christopher Woods (EPSRC Research Software Engineer Fellow, University of Bristol) will address similar issues from a software engineering perspective, covering best practice in software engineering, tools and platforms, and how code written in support of research can be a research output in its own right, ready to be deployed by others.
Keynote Speaker – Day 1
Professor Nello Cristianini, from the Department of Engineering Mathematics at the University of Bristol
Title: Living with intelligent machines
The key to addressing our social anxieties in the face of intelligent machines is understanding how they think, what we can expect of them, and what the consequences may be of the specific type of intelligence they exhibit and of the specific position we have chosen for them in our global data infrastructure. We will examine the steps that took us to the present form of AI, so as to consider what we can expect from it and how we can manage our relationship with it.
Nello Cristianini is Professor of Artificial Intelligence at the University of Bristol. He has published in many areas of Artificial Intelligence, including machine learning, statistical learning theory, natural language processing, computer vision, machine translation, bioinformatics, social implications of AI, philosophical foundations of AI, digital humanities, and computational social science.
Keynote Speaker – Day 2
Professor Francesca Toni, from the Faculty of Engineering, Department of Computing, at Imperial College London
Title: Learning Argumentation Frameworks and XAI
Argumentation frameworks are well studied in terms of their support for various forms of reasoning. Amongst these, abstract argumentation and assumption-based argumentation frameworks can be used to support various forms of defeasible, non-monotonic reasoning. In this talk I will focus on methods for learning these frameworks automatically from data. Specifically, I will overview two recent methods to obtain, respectively, abstract argumentation frameworks from past cases and assumption-based argumentation frameworks from examples of concepts. In both cases, the learnt frameworks can be naturally used to obtain argumentative explanations for predictions drawn from the data (past cases or examples) in the form of disputes, thus supporting the vision of data-centric explainable AI.
Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI at the Department of Computing, Imperial College London, as well as the founder and leader of the CLArg (Computational Logic and Argumentation) research group and of the Faculty of Engineering XAI Research Centre. She has recently been awarded an ERC Advanced grant on Argumentation-based Deep Interactive eXplanations (ADIX). Her research interests lie within the broad area of Explainable AI, and in particular include Knowledge Representation and Reasoning, Argumentation, Argument Mining, Multi-Agent Systems, and Machine Learning.
She is a EurAI fellow, serves on the editorial boards of the Argument and Computation journal and the AI journal, and sits on the Board of Advisors for KR Inc. and for Theory and Practice of Logic Programming. She is currently programme chair of COMMA 2022, sponsorship chair of ICAIF 2022, and co-organiser of XAI-FIN 2022.