The Algorithmic Fairness & Opacity Working Group (AFOG) is made up of UC Berkeley faculty, postdocs, and graduate students. It is housed at Berkeley's School of Information.
We aim to develop new ideas, research directions, and policy recommendations around issues of fairness, transparency, interpretability, and accountability in algorithms and algorithm-based systems. These issues require attention from more than just engineers and technologists, as they are playing out in domains of longstanding interest to social scientists and scholars of media, law, and policy, including social equality, civil rights, labor and automation, and the evolution of the news media.
We take an interdisciplinary approach to our research, with AFOG members based at Berkeley’s School of Information, Boalt Hall School of Law, the Goldman School of Public Policy, the departments of Electrical Engineering and Computer Sciences (EECS), Sociology, and Anthropology, the Berkeley Institute of Data Science (BIDS), the Center for Science, Technology, Medicine & Society (CSTMS), and the Center for Technology, Society & Policy (CTSP). AFOG is supported by UC Berkeley’s School of Information and a gift from Google Trust and Safety and Google Privacy.
We meet several times per month at the School of Information for informal discussions, presentations, and workshops. We also host a speaker series that brings experts from academia and the technology industry to UC Berkeley to give public talks and take part in interdisciplinary conversations.
Research questions that we address include:
What is at stake?
How do trends in data collection and algorithmic classification relate to the restructuring of the life chances, opportunities, and, ultimately, the social mobility of individuals and groups in society? How do algorithmically informed mass media and social media shape the stability of our democracy?
How should we design for users of machine-learning systems?
How can we build transparency into the design of user interfaces for machine-learning systems so that they support user understanding, empowered decision-making, and human autonomy?
What emerging tools, techniques, or approaches could mitigate opacity or unfairness/bias problems?
What tools and techniques are emerging that offer ways to ensure transparency and/or fairness? What methods are best suited to what domains of application?
How can we better communicate and collaborate across disciplines?
Disciplines provide shared tools, priorities, and language, but with these come constraints on how a topic or problem space can be thought about. How can we identify and transcend those constraints to make progress on issues of algorithmic opacity and fairness?