While AI systems and algorithmic decision-making offer many benefits, such as speed, efficiency and the ability to solve complex tasks, they are commonly non-transparent and provide few means of understanding their behaviour or the reasoning behind their decisions. Algorithms with little accountability or transparency can create a form of digital discrimination called ‘technological redlining’, which uses digital identities to reinforce inequality and oppression. Governance of responsible, transparent and accountable AI is therefore necessary. My project explores the credibility and effectiveness of algorithmic accountability in light of the current lack of transparency and clear understanding of AI systems and algorithms among model developers, analysts and auditors. It investigates how AI systems and algorithms can be held accountable and regulated despite their black-box nature.
Ethical and Responsible AI.
AI’s social implications.
Transparency and Accountability in AI algorithms.
BA Business Management at The University of Sheffield.
MSc Data Science at The University of Sheffield.
Dr Iulia Cioroianu
Dr Marina De Vos