Interpretability, safety, and security in AI

The Interpretability, safety, and security in AI conference is being organised by The Alan Turing Institute on 13–15 December 2021.

Held under the auspices of the Isaac Newton Institute “Mathematics of Deep Learning” Programme, the conference brings together leading researchers and other stakeholders from industry and society to discuss issues surrounding trustworthy artificial intelligence.

This conference will overview the state of the art across the wide area of trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety; survey emerging directions in trustworthy artificial intelligence; and engage with academia, industry, policy makers, and the wider public.

The conference consists of two parts:

1) Academic-oriented event – Monday 13 December (10:00 – 16:45 GMT)

2) Public-oriented event – Tuesday 14 – Wednesday 15 December (12:00 – 18:00 GMT)

To register for the event and to see a detailed agenda, please follow this link: Interpretability, safety, and security in AI | The Alan Turing Institute

Main Photo by Guillermo Velarde on Unsplash


Event Info

Start Date 13.12.2021
End Date 15.12.2021
Start Time 10:00 GMT
End Time 18:00 GMT
