The principal aim of my research is to detail how AI could affect both privacy and prejudice for humanity. However, the broadness of all three elements of this aim requires a focus. Google and Facebook are among the world leaders in AI innovation; the sheer scale of their operations and their recent scandals make them important case studies to compare. I will examine how these two private companies approach various AI challenges, from facial recognition to the algorithms that select personalised adverts for users. By focusing on these two companies, my research will be able to offer unique contributions on how privacy and prejudice can feed back into each other and exacerbate problems. My research will question whether humanity is ignorant of its gradual loss of privacy, or whether we are simply willing to sacrifice it for the ‘handy’ new AI-based services these companies are offering. Similarly, does this extend to ignoring the prejudices that seem to emerge in all of these services, against many different groups of people? The different origins of this prejudice will be examined, one example being the dataset choices that led Google Photos to classify two black people as ‘gorillas’. My work will be intrinsically linked to the UKRI CDT in ART-AI’s theme of policy making with and for AI, and its findings will be vital in highlighting how innovation must become more accountable, responsible and transparent for the good of humanity.
‘Bias’ in AI.
AI and Privacy.
BSc Mathematical Sciences and History at the University of Exeter; my dissertation was entitled ‘Artificial Intelligence and Slavery: the History and Mathematics of the Future’.
Prof Hilde Coffé