Opportunities

AFOG offers opportunities for fellows who are passionate about algorithmic fairness and just societies.

Stay tuned for new grants in the future.

Panelists and audience members discussed specific examples of problems of fairness (and justice), including cash bail in the criminal justice system, “bad faith” search phrases (e.g., the question, “Did the Holocaust happen?”), and representational harm in image labeling. Panelists noted a key challenge: technology, on its own, is not good at explaining when it should not be used or when it has reached its limits. Panelists pointed out that understanding broader historical and sociological debates in the domain of application, and investigating contemporary reform efforts (for example, in criminal justice), can help to clarify the place of algorithmic prediction and classification tools in a given domain. Partnering with civil-society groups can ensure a sound basis for making tough decisions about when and how to intervene when a platform or software is found to be amplifying societal biases, is being gamed by “bad” actors, or otherwise facilitates harm to users.

Stay tuned for new fellowships in the future.

In June of 2018, the Algorithmic Fairness and Opacity Working Group (AFOG) held a summer workshop with the theme “Algorithms are Opaque and Unfair: Now What?” The event was organized by Berkeley I School professors (and AFOG co-directors) Jenna Burrell and Deirdre Mulligan, postdoc Daniel Kluttz, and Allison Woodruff and Jen Gennai from Google. Our working group is generously sponsored by Google Trust and Safety and hosted at the UC Berkeley School of Information.

Inspired by questions that came up at our biweekly working group meetings during the 2017-2018 academic year, we organized four panels for the workshop. The panel topics raised issues that we felt required deeper consideration and debate. To make progress, we brought together a diverse, interdisciplinary group of experts from academia, industry, and civil society in a workshop-style environment. In panel discussions, we considered potential ways of acting on algorithmic (un)fairness and opacity. We sought to consider the fullest possible range of ‘solutions,’ including technical implementations (algorithms, user-interface designs), law and policy, standard-setting, incentive programs, new organizational processes, labor organizing, and direct action.

Public Interest Technology University Network Postdoc

We are seeking a postdoc to work on a year-long project funded through the Public Interest Technology University Network’s (PIT-UN) 2020 Network Challenge Grants. Our PIT-UN postdoc will help to develop a cohort of undergraduates and junior scholars as “public interest technologists” through skill building, reflection, critical service learning, and connections to the field of algorithmic justice. In this role you will partner with the D-Lab and Cal NERDS to develop programming. The position is housed within the Algorithmic Fairness and Opacity Working Group and reports to co-directors Prof. Deirdre Mulligan and Prof. Jenna Burrell, and will also meet with and benefit from the advising and mentorship of Diana Lizarraga, Director of Cal NERDS, and Claudia Natalia von Vacano, PhD, Executive Director of the D-Lab.

This postdoc will help undergraduates deepen technical skills, explore the social and political implications of algorithmic systems, and find onramps into the PIT field. Hands-on workshops will surround the formal curriculum in information and data science and extend AFOG’s existing research, workshops, and public speaker series. Public lectures and lunch talks will grow students’ understanding of, and connections to, the field. Through AFOG Reflection and Critical Technical Practice Workshops, students will reflect on what it means to practice ethically and in the public interest, exploring issues of power, identity, and expertise, and their roles as professionals, employees, experts, and members of the public in relation to the public good, social justice, and social change. You will also organize four PIT-UN related workshops, help support Cal NERDS in developing their STEMinist post-bootcamp programming, and support other activities as needed. In addition, you will be incorporated into the D-Lab Data Science Fellows community and will be expected to present once in an informal internal talk series.


You will dedicate 50% of your time to this effort and will have the freedom to pursue your own research the rest of the time.

Qualifications:

  • Ideal candidates will have completed a PhD in an interdisciplinary program (perhaps from an Information School, STS, or digital humanities program, etc.) or have training in more than one discipline (e.g., an undergraduate degree in CS, DS, or another engineering field and a graduate degree in a social science).
  • Candidates with a research interest in educational approaches to cultivating social/political/ethical awareness in technical fields are encouraged to apply.
  • Candidates should have knowledge of and sensitivity to the challenges faced by students and scholars from non-dominant groups, the structural barriers that stand in their way, and how to tackle those challenges both through mentoring and by addressing institutional power structures directly.
  • Candidates are able to grasp technical aspects of machine learning and artificial intelligence and are also trained in and comfortable with interpretivist / non-positivist research methods or philosophies.
  • Candidates may have expertise in a particular domain where machine learning or artificial intelligence tools and techniques are becoming influential (e.g., education, labor and employment, journalism and media).
  • Candidates will have a strong research record (ideally in this area) and will also be passionate about finding ways to communicate ideas across disciplinary boundaries and to audiences beyond the Academy.


Please submit:

  • a brief cover letter describing how you are qualified and prepared for this position
  • a CV
  • writing sample #1: a published research article, dissertation chapter, or other finished but unpublished piece that is of greatest relevance to this topic
  • writing sample #2 (optional): an example of something you’ve published that communicates research or ideas to a broader audience (e.g., a blog post, op-ed, etc.)
  • a 1-2 minute video highlighting (1) your efforts to support undergraduates, especially students from non-dominant backgrounds, and (2) your commitment to DEI.


Support for this position is provided by the Public Interest Technology University Network Challenge Fund, a fiscally sponsored project of New Venture Fund. The Public Interest Technology University Network’s challenge grants are funded through the support of the Ford Foundation, Hewlett Foundation, Mastercard Impact Fund (with support from the Mastercard Center for Inclusive Growth), Patrick J. McGovern Foundation, The Raikes Foundation, Schmidt Futures, and The Siegel Family Endowment.


Send application packet to Professor Deirdre Mulligan (dmulligan@berkeley.edu).


Applications due March 1, 2021, with a start date ideally no later than June 2021.




AFOG Postdoc

Responsibilities

Seeking a postdoc to manage and shape programming for the Algorithmic Fairness and Opacity Working Group (AFOG), working with Professor Jenna Burrell and Professor Deirdre Mulligan at the School of Information, UC Berkeley.

Artificial intelligence is raising new concerns around topics of longstanding interest to sociologists, lawyers, and media scholars, including social equality, civil rights, labor and automation, and the evolution of the news media. Complex, non-linear algorithms, and particularly machine learning algorithms, are increasingly being used in domains of socially consequential classification. The development of approaches or solutions to address these challenges is still nascent.

We are interested in many areas, but especially: histories of data sets, philosophy, organizational sociology, organizational processes for AI ethics, AI governance, and the use of AI in government and more highly regulated industries.

At UC Berkeley we are bringing together faculty and students from sociology, law, computer science, and other relevant disciplines to explore and develop ideas and new research directions on this topic. There will be opportunities for dialogue about this topic and its many dimensions with researchers employed in the Bay Area’s tech industry. The project is funded by a grant and by Twitter.

The postdoc hired for this one-year position will provide intellectual leadership and will handle the logistics of managing an on-campus working group of faculty and students, as well as organizing a speaker series for the 2021-2022 academic year.

You will dedicate 50% of your time to this effort and will have the freedom to pursue your own research the rest of the time.

Qualifications

  • Ideal candidates will have completed a PhD in an interdisciplinary program (perhaps from an Information School or an STS program) or have training in more than one discipline (e.g., an undergraduate degree in CS or another engineering field and a graduate degree in a social science/law, or vice versa).
  • Candidates are able to grasp technical aspects of machine learning and artificial intelligence and are also trained in and comfortable with interpretivist / non-positivist research methods or philosophies.
  • Candidates may have expertise in a particular domain where machine learning or artificial intelligence tools and techniques are becoming influential (e.g., education, labor and employment, journalism and media).
  • Candidates will have a strong research record (ideally in this area) and will also be passionate about finding ways to communicate ideas across disciplinary boundaries and to audiences beyond the Academy.

Please submit

  • a brief cover letter describing how you are qualified and prepared for this position
  • a CV
  • writing sample #1: a published research article, dissertation chapter, or other finished but unpublished piece that is of greatest relevance to this topic
  • writing sample #2 (optional): an example of something you’ve published that communicates research or ideas to a broader audience (e.g., a blog post, op-ed, etc.)

Please send application packet to Professor Deirdre Mulligan (dmulligan@berkeley.edu).
