It’s hardly news that AI systems are being developed and used across sectors, including in government. Whether the aim is to detect fraud across the public sector, to conduct risk assessments in social care, or to predict crime, governments around the globe have turned to data-driven and AI technologies to support decision-making and service delivery. The multitude of ethical challenges this presents has preoccupied researchers, commentators, and citizens alike.
Key among the concerns raised are questions of accountability. Many AI systems operate with relative complexity, opacity, illegibility, and automaticity, features which in turn constrain our ability to understand, review, and contest processes and outcomes. Although these systems are often applied in high-stakes situations, with potentially significant effects on people’s rights, accountability mechanisms (such as audits, public registers and oversight bodies) for the use of AI in public policy are still evolving.
A less examined dimension of AI ethics and accountability is the set of conditions shaping the adoption of these systems. At present, we know relatively little about how different forces interact to shape policy decisions and practices around the selection and procurement of particular AI technologies. As a result, there is a need for empirical research which attends not just to the use of AI in public policy settings but to the processes by which these technologies enter into and move within public institutions. Expanding our conceptions of accountability in this way moves us beyond a narrow focus on technical solutions to ethical problems towards an understanding of AI’s complex embeddedness in political and institutional environments.
Tracing any policy process is rarely straightforward. Decision-making can be dispersed and non-linear, making it difficult to identify all relevant actors and their relationships. Fortunately, social scientists have developed theoretical and methodological tools well suited to this type of enquiry. One approach I adopt in my research is known as policy anthropology. It questions many traditional assumptions about what policy is, and rejects an instrumentalist understanding of policy as something that follows a kind of assembly line culminating in a neatly delineated, clearly authored and documented output. Policy, rather, is made up of many different elements and is always undergoing reconstruction and translation. To study this kind of mobile phenomenon, researchers make use of various methods from anthropology and sociology – including interviews, observations and archival analysis – that enable us to get closer to our object of study.
As we welcome public scrutiny and scholarly attention towards AI ethics, we as researchers also need to ensure we have the appropriate breadth of perspectives and tools to understand potentially radical transformations taking place across social and political life. The widespread adoption of AI technologies presents many possibilities for the way public power is exercised – including changes that could profoundly affect all of our lives. Questions about how AI comes to be adopted within specific contexts are key to seeing this bigger picture of what is really – ethically and politically – at stake.