Machine learning systems are increasingly employed to inform and automate consequential decisions about humans, in areas such as criminal justice, medicine, employment, and welfare programs. However, this comes with significant challenges, risks, and potential harms. For example, in recent years we have witnessed growing concern over the role of black-box automated decision making in exacerbating discrimination, limiting opportunities for recourse, and diminishing or violating people's privacy.
These concerns have led to a surge of interest in a field that we refer to as Human-centric Machine Learning. This young field seeks to minimize the potential harms, risks, and burdens that machine learning systems impose on the public, while at the same time maximizing their societal benefits. We will focus on two themes that have been the subject of heated debate in recent months:
- The differential treatment of historically under-served and disadvantaged communities by algorithms
- The development of machine learning systems that assist humans to perform better, rather than replace them
Our workshop will bring together leading experts from diverse backgrounds working at the forefront of these two themes. One goal of the workshop is to reflect on the legitimacy of the status quo, which often takes the objective of the algorithm's owner as a normative goal and sometimes assumes the algorithm has full agency (i.e., that it is designed for full automation).
To facilitate this, our speakers will suggest and discuss promising ways of moving beyond this status quo. We will debate their suggestions in our panel discussions, bringing in audience participation at appropriate points in the sessions.