Transparency and Intelligibility
Making AI systems accessible, accountable, open to inspection and intelligible to diverse stakeholders.
We are a UKRI-funded Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI).
ART-AI trains professional experts to make the best and safest use of artificial intelligence (AI) and to explore the opportunities, challenges and constraints presented by the diverse range of contexts in which AI is applied. We bring together researchers from across disciplines to educate the next generation of specialists with expertise in AI, its applications and its implications.
We will recruit and train at least 60 PhD students from diverse academic and personal backgrounds to help ensure that developments in AI, and decisions on how and whether to use it, are informed and ethical. ART-AI will produce interdisciplinary graduates who can act as leaders and innovators with the knowledge to make the right decisions on what is possible, what is desirable, and how AI can be ethically, safely and effectively deployed.
ART-AI draws together a wide range of topics: from algorithms to ethics; robotics safety to computational and public policy; probabilistic machine learning to symbolic AI; provenance, transparency and uncertainty quantification to intelligibility and trust in heterogeneous intelligent systems; reinforcement learning to emotion in human-machine interaction; and many others.
How to assess AI's risks and benefits, both quantitatively and qualitatively.
How to approach diverse challenges from robotics safety to computational and public policy.
The effects of AI on public policy design as it becomes pervasive across a broad range of work environments.
What innovations are needed to provide policy-driven, explainable and auditable support for human work.
How to apply AI techniques to solve real-world engineering problems.
Our Vision:
To train diverse specialists with the perspectives needed to make a positive change to AI, and to society through AI.
Our Mission:
To provide an interdisciplinary research environment that enables PhD students to become independent, interdisciplinary researchers in the field of AI; and
To provide a rich environment of collaboration between students, academics, international institutions, organisations and industrial partners, creating the diverse community needed to bring positive change to AI and its use and to deliver research impact.
Our Strategic Goal:
To train five cohorts of independent, interdisciplinary AI researchers and thought leaders by developing their interdisciplinary knowledge and skills within a collaborative research and industrial community where individual contribution, impact and diversity are valued.
Our Values:
Husna Siddiqi
Barry Brooks
Fiona Colquhoun
Rachel Free
Matt Griffith
Gail Kent
Oliver Lanning
Clara Neppel
Chris Rees
Len Rinaldi
Phil Williams
Clive Hollick
Mark Coeckelbergh
Antony Walker