AFOG offers opportunities for fellows who are passionate about algorithmic fairness and just societies.

Stay tuned for new grants in the future.

In June 2018, the Algorithmic Fairness and Opacity Working Group (AFOG) held a summer workshop with the theme “Algorithms are Opaque and Unfair: Now What?” The event was organized by Berkeley I School Professors (and AFOG co-directors) Jenna Burrell and Deirdre Mulligan, postdoc Daniel Kluttz, and Allison Woodruff and Jen Gennai from Google. Our working group is generously sponsored by Google Trust and Safety and hosted at the UC Berkeley School of Information.

Inspired by questions that came up at our biweekly working group meetings during the 2017-2018 academic year, we organized four panels for the workshop. The panel topics raised issues that we felt required deeper consideration and debate. To make progress, we brought together a diverse, interdisciplinary group of experts from academia, industry, and civil society in a workshop-style environment. In panel discussions, we considered potential ways of acting on algorithmic (un)fairness and opacity. We sought to consider the fullest possible range of ‘solutions,’ including technical implementations (algorithms, user-interface designs), law and policy, standard-setting, incentive programs, new organizational processes, labor organizing, and direct action.

Panelists and audience members discussed specific examples of problems of fairness (and justice), including cash bail in the criminal justice system, “bad faith” search phrases (e.g., the question, “Did the Holocaust happen?”), and representational harm in image labeling. Panelists noted a key challenge: technology, on its own, is not good at explaining when it should not be used or when it has reached its limits. They pointed out that understanding broader historical and sociological debates in the domain of application, and investigating contemporary reform efforts (for example, in criminal justice), can help clarify the place of algorithmic prediction and classification tools in a given domain. Partnering with civil-society groups can ensure a sound basis for making tough decisions about when and how to intervene when a platform or software is found to be amplifying societal biases, is being gamed by “bad” actors, or otherwise facilitates harm to users.

Public Interest Technology University Network Postdoc

Applications must be submitted through the university portal: https://aprecruit.berkeley.edu/JPF02894

Recruitment Period:

Open date: March 16th, 2021

Next review date: Tuesday, Mar 30, 2021 at 11:59pm (Pacific Time)
Apply by this date to ensure full consideration by the committee.

Final date: Friday, Apr 16, 2021 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.

Position Description:

Professor Deirdre Mulligan and Associate Professor Jenna Burrell are seeking a postdoctoral researcher to work on a year-long project funded through the Public Interest Technology University Network’s (PIT-UN) 2020 Network Challenge Grants.

The Algorithmic Fairness and Opacity Working Group (AFOG) was founded in 2017 with a research gift from Google and is made up of faculty, postdocs, and graduate students at UC Berkeley. Our aim is to develop new ideas, research directions, and policy recommendations around issues of fairness, transparency, interpretability, and accountability in algorithm-based systems. AFOG hosts an ongoing speaker series and a biweekly informal lunch discussion group where researchers share early-stage and ongoing research. We have a strong track record of placing alums in PIT positions in the private and public sectors. Scholars who have joined us have gone on to work at the AI Now Institute at NYU, the Partnership on AI, the People and AI Research team at Google, and the Aether (AI, Ethics and Effects in Engineering and Research) team at Microsoft.

Appointment Length:

This is a full-time, one-year position with the possibility of extension, subject to performance and budgetary approval. The anticipated start date is June 2021.

Additional information:

This position is housed within the Algorithmic Fairness and Opacity Working Group (AFOG, https://afog.berkeley.edu/). The postdoc will work directly with Professor Deirdre Mulligan and Associate Professor Jenna Burrell, and will benefit from the advising and mentorship of Diana Lizarraga, Director of Cal NERDS, and Claudia Natalia von Vacano, PhD, Executive Director of the D-Lab.

The PIT-UN postdoc will help to develop a cohort of undergraduates and junior scholars as “public interest technologists” through skill building, reflection, critical service learning, and connections to the field of algorithmic justice.

In this role, you will partner with the D-Lab and Cal NERDS to develop programming. You will help undergraduates deepen their technical skills, explore the social and political implications of algorithmic systems, and find on-ramps into the PIT field. Hands-on workshops will complement the formal curriculum in information and data science and extend AFOG’s existing research, workshops, and public speaker series. Public lectures and lunch talks will grow students’ understanding of, and connections to, the field. Through AFOG Reflection and Critical Technical Practice Workshops, students will reflect on what it means to practice ethically and in the public interest, exploring issues of power, identity, and expertise, and their roles as professionals, employees, experts, and members of the public in relation to the public good, social justice, and social change. You will also organize four PIT-UN-related workshops, help Cal NERDS develop their STEMinist post-bootcamp programming, and support other activities as needed. You will work with the PIs to develop and apply metrics to assess how well the PIT-UN grant activities cultivate an undergraduate PIT cohort. In addition, you will be incorporated into the D-Lab Data Science Fellows community and will be expected to present once in an informal internal talk series.

You will dedicate 50% of your time to this effort and will have the freedom to pursue your own AFOG-related research, in collaboration with the PIs or other AFOG members. We are interested in many areas of algorithmic justice, especially research that explores the effects of AI systems on the ground; research that expands the ‘solution space’ for responsible AI, including improved algorithms, interface designs, legal reforms, improved organizational policies and processes, collective action, and social movements; and research on educational approaches that help technologists think deeply and critically about technology, human values, and the social and political implications of technical systems.

Support for this position is provided by the Public Interest Technology University Network Challenge Fund, a fiscally sponsored project of New Venture Fund. The Public Interest Technology University Network’s challenge grants are funded through the support of the Ford Foundation, Hewlett Foundation, Mastercard Impact Fund with support from Mastercard Center for Inclusive Growth, Patrick J. McGovern Foundation, The Raikes Foundation, Schmidt Futures and The Siegel Family Endowment.

Basic Qualifications:

PhD or equivalent international degree or enrolled in a PhD (or equivalent international degree) program at the time of application.

Additional Qualifications (by start date):

PhD or equivalent international degree
No more than four years of post-degree research experience

Preferred Qualifications:

  • PhD in an interdisciplinary program (e.g., an Information School, STS program, or digital humanities program) or training in more than one discipline
  • Research interest in educational approaches to cultivating social/political/ethical awareness in technical fields
  • Knowledge of and sensitivity to the challenges faced by students and scholars from non-dominant groups, the structural barriers that stand in their way, and how to tackle those challenges both through mentoring and by addressing institutional power structures directly
  • Ability to grasp technical aspects of machine learning and artificial intelligence, combined with training in and comfort with interpretivist/non-positivist research methods or philosophies
  • Expertise in a particular domain where machine learning or artificial intelligence tools and techniques are becoming influential (e.g., education, labor and employment, journalism and media)
  • A strong research record (ideally in this area) and a passion for communicating ideas across disciplinary boundaries and to audiences beyond the academy


Salary and title will be commensurate with qualifications and experience and based on UC Berkeley salary scales. The salary range for this position is $54,540-$65,000.

This position is covered by the UC-UAW collective bargaining agreement for Academic Researchers. The current contract can be found here:
https://ucnet.universityofcalifornia.edu/labor/bargaining-units/px/index.html. Additional information can be found on the website: http://uaw5810.org/welcome-new-postdocs/

Job will remain open until filled.

All letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (e.g., dossier service or career center), to the UC Berkeley statement of confidentiality (http://apo.berkeley.edu/evalltr.html) prior to submitting their letters.

The University of California is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy see: http://policy.ucop.edu/doc/4000376/NondiscrimAffirmAct.

Application Requirements:

Document requirements

  • Curriculum Vitae - Your most recently updated C.V.
  • Cover Letter - Describe how you are qualified and prepared for this position, highlighting any experience supporting undergraduate students, especially students from non-dominant backgrounds
  • Writing Sample - A publication, dissertation chapter, or other finished but unpublished piece that is relevant to this position

Reference requirements

  • 2 required (contact information only)

Job Location

Berkeley, CA
