Algorithmic Fairness and Opacity Group

The Algorithmic Fairness and Opacity Group (AFOG) is an interdisciplinary research group housed at the UC Berkeley School of Information, bringing together faculty, postdocs, and graduate students from Information Studies, Sociology, Law, Communication, Media Studies, Computer Science, and the Humanities, among others. Formed in 2017, we conduct research, provide education, develop policy, build systems, bring theory into practice, and bridge disciplinary boundaries. Throughout our efforts, we center human values in the design and use of technical systems to support more equitable and just societies.

Meet the group

Focus Area 1

AI On the Ground

Algorithmic decision tools increasingly impact all facets of life, from high-stakes contexts like criminal justice to everyday interactions on social media. Once deployed, they become part of complex socio-technical systems, and the reality of these systems often deviates from what was planned or expected. We discuss and undertake new research to understand the effects of AI systems on the ground.

Focus Area 2

Expanding the ‘Solution Space’ for Responsible AI

Designing technology to support just and equitable societies requires a broad consideration of the ‘solution space.’ We bring together scholars and practitioners with wide-ranging expertise to explore solutions that include improved algorithms, interface designs, legal reforms, better organizational policies and processes, collective action, and social movements. We also explore ways of configuring the handoffs between technical, organizational, and legal aspects of algorithmic systems to foster forms of oversight and accountability aligned with democratic norms of governance.

Public Interest Technology

Public interest technology (PIT) refers to the study and application of technical expertise to advance the public interest, generating public benefits and promoting the public good, particularly for those members of society who have historically been, and continue to be, least well served by existing systems and policies.

Learn More
Focus Area 3

We teach courses and host public events that draw students from across campus. We strive to teach the next generation of professionals working in non-profit, corporate, and government settings to think deeply and critically about technology, human values, and the social and political implications of technical systems.

Some relevant courses include:

INFO 188 - Behind the Data: Humans and Values

Prof. Deirdre Mulligan

This course provides an introduction to ethical and legal issues surrounding data and society, as well as hands-on experience with frameworks, processes, and tools for addressing them in practice. It blends social and historical perspectives on data with ethics, law, policy, and case examples — from Facebook’s “Emotional Contagion” experiment to controversies around search engine and social media algorithms, to self-driving cars — to help students develop a workable understanding of current ethical and legal issues in data science and machine learning.

INFO 239 - Technology and Delegation

Prof. Deirdre Mulligan

The introduction of technology increasingly delegates responsibility to technical actors, often reducing traditional forms of transparency and challenging traditional methods of accountability. This course explores the interaction between technical design and values including privacy, accessibility, fairness, and freedom of expression. We will draw on literature from design, science and technology studies, computer science, law, and ethics, as well as primary sources in policy, standards, and source code. We will investigate approaches to identifying the value implications of technical designs, and use methods and tools for intentionally building in values at the outset.

INFO 290 - Building Data Products for Public Impact

Prof. Diag Davenport

Public policy and civic organizations are increasingly guided by data products such as empirical graphs, statistical analyses, and machine learning predictions. However, no data product can deliver an absolute, unimpeachable truth. Building these products therefore requires navigating a delicate balance between a) moving forward despite known issues and b) refining and improving the product to address them. This class will help students develop their intuition for striking that balance through hands-on experience.

Latest Announcement

Statement in Support of Timnit Gebru, Margaret Mitchell and the Ethical AI Team at Google

For the last three years, AFOG at UC Berkeley has benefited from the participation of several members of the Ethical AI team at Google in our working group meetings and workshops. Recently, Dr. Timnit Gebru, who co-leads the team, was fired from her position over her request for greater transparency and procedural fairness in the internal review process for the research produced by herself and her team members...

Learn More

What We’re Reading

The Society of Algorithms

Jenna Burrell and Marion Fourcade

The pairing of massive data sets with processes—or algorithms—written in computer code to sort through, organize, extract, or mine them has made inroads in almost every major social institution. This article proposes a reading of the scholarly literature concerned with the social implications of this transformation.



Latest Recording

October 14, 2020

The Refusal Conference

A virtual conference intended to 'meet the moment' by exploring organized technology refusal from historical and contemporary vantage points.

Our partners work with us to examine topics in fairness and opacity. If you are interested in becoming a partner, please contact us at afog@berkeley.edu.