Publications

AFOG members publish work on ensuring technical systems support equitable and just societies.

October 2025

Lauren M. Chambers

Beyond Big Tech: Advocacy Technologists within Mission-Driven Civil Society Organizations

A new class of technology professionals is shaping policy, informing legal arguments, and bolstering advocacy efforts from inside nonprofit and civil society organizations. This career path might be claimed by a number of different new sociotechnical domains: public interest technology (PIT), civic technology, data for good, technology for social justice, and others. Yet it is still unclear exactly what professional roles are emerging, what sorts of people are filling them, and what such individuals' work looks like and achieves. This work presents an interview study that seeks to characterize a specific sub-population of technological practitioners who are contributing materially to mission-driven projects from within the civil society or nonprofit sector: advocacy technologists. I present four patterns of praxis (i.e., professional practices and paradigms) common to advocacy technologists: their disposition as critics who interrogate technological paradigms and who introspect on their own ethical footprint, and their professional position translating between technical and non-technical worlds and trailblazing into new career paths. These four patterns demonstrate that advocacy technologists are choosing to occupy a precarious new niche within advocacy work ecosystems that has great potential to impact policy and design outcomes. Indeed, these practitioners enlist computational strategies to advance advocacy goals, situate deep sociotechnical expertise within policymaking contexts, and further civil society as an active site of tech design in its own right. This study contributes to the growing body of literature in human-computer interaction (HCI) and computer-supported cooperative work (CSCW) that explores computing technologies' role in processes and places of sociopolitical change. Ultimately, this work proposes that mission-driven civil society organizations and their technologists are not only underexplored sites for HCI and CSCW research, but also potentially rich collaborators for sociotechnical researchers who seek to deepen their impact on policy and social change.

Lauren Marietta Chambers. 2025. Beyond Big Tech: Advocacy Technologists within Mission-Driven Civil Society Organizations. Proc. ACM Hum.-Comput. Interact. 9, 7, Article CSCW378 (November 2025), 31 pages. https://doi.org/10.1145/3757559

November 2022

Liza Gak, Seyi Olojo, Niloufar Salehi

The Distressing Ads That Persist: Uncovering The Harms of Targeted Weight-Loss Ads Among Users with Histories of Disordered Eating

Targeted advertising can harm vulnerable groups when it targets individuals' personal and psychological vulnerabilities. We focus on how targeted weight-loss advertisements harm people with histories of disordered eating. We identify three features of targeted advertising that cause harm: the persistence of personal data that can expose vulnerabilities, over-simplifying algorithmic relevancy models, and design patterns encouraging engagement that can facilitate unhealthy behavior. Through a series of semi-structured interviews with individuals with histories of unhealthy body stigma, dieting, and disordered eating, we found that targeted weight-loss ads reinforced low self-esteem and deepened pre-existing anxieties around food and exercise. At the same time, we observed that targeted individuals demonstrated agency and resistance against distressing ads. Drawing on scholarship in postcolonial environmental studies, we use the concept of slow violence to articulate how online targeted advertising inflicts harms that may not be immediately identifiable. CAUTION: This paper includes media that could be triggering, particularly to people with an eating disorder. Please use caution when reading, printing, or disseminating this paper.

Liza Gak, Seyi Olojo, and Niloufar Salehi. 2022. The Distressing Ads That Persist: Uncovering The Harms of Targeted Weight-Loss Ads Among Users with Histories of Disordered Eating. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 377 (November 2022), 23 pages. https://doi.org/10.1145/3555102

September 2022

David G. Robinson

Voices in the Code: A Story about People, Their Values, and the Algorithm They Made

Algorithms—rules written into software—shape key moments in our lives: from who gets hired or admitted to a top public school, to who should go to jail or receive scarce public benefits. Such decisions are both technical and moral. Today, the logic of high stakes software is rarely open to scrutiny, and central moral questions are often left for the technical experts to answer. Policymakers and scholars are seeking better ways to share the moral decision-making within high stakes software—exploring ideas like public participation, transparency, forecasting, and algorithmic audits. But there are few real examples of those techniques in use. In Voices in the Code, scholar David G. Robinson tells the story of how one community built a life-and-death algorithm in an inclusive, accountable way. Between 2004 and 2014, a diverse group of patients, surgeons, clinicians, data scientists, public officials and advocates collaborated and compromised to build a new kidney transplant matching algorithm—a system to offer donated kidneys to particular patients from the U.S. national waiting list. Drawing on interviews with key stakeholders, unpublished archives, and a wide scholarly literature, Robinson shows how this new Kidney Allocation System emerged and evolved over time, as participants gradually built a shared understanding both of what was possible, and of what would be fair. Robinson finds much to criticize, but also much to admire, in this story. It ultimately illustrates both the promise and the limits of participation, transparency, forecasting and auditing of high stakes software. The book’s final chapter draws out lessons for the broader struggle to build technology in a democratic and accountable way.

Robinson, D.G. (2022). Voices in the Code: A Story about People, Their Values, and the Algorithm They Made. New York: Russell Sage Foundation.

October 2021

Richmond Y. Wong

Tactics of Soft Resistance in User Experience Professionals' Values Work

User experience (UX) professionals' attempts to address social values as a part of their work practice can overlap with tactics to contest, resist, or change the companies they work for. This paper studies tactics that take place in this overlap, where UX professionals try to re-shape the values embodied and promoted by their companies, in addition to the values embodied and promoted in the technical systems and products that their companies produce. Through interviews with UX professionals working at large U.S.-based technology companies and observations at UX meetup events, this paper identifies tactics used towards three goals: (1) creating space for UX expertise to address values; (2) making values visible and relevant to other organizational stakeholders; and (3) changing organizational processes and orientations towards values. This paper analyzes these as tactics of resistance: UX professionals seek to subvert or change existing practices and organizational structures towards more values-conscious ends. Yet, these tactics of resistance often rely on the dominant discourses and logics of the technology industry. The paper characterizes these as partial or "soft" tactics, but also argues that they nevertheless hold possibilities for enacting values-oriented changes.

Richmond Y. Wong. 2021. Tactics of Soft Resistance in User Experience Professionals' Values Work. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 355 (October 2021), 28 pages. https://doi.org/10.1145/3479499

July 2020

Deirdre K. Mulligan and Helen Nissenbaum

The Concept of Handoff as a Model for Ethical Analysis and Design

This chapter introduces the concept of handoff, which offers a lens through which to evaluate sociotechnical systems in ethical and political terms. It is particularly tuned to transformations in which system components of one type replace components of another. Of great contemporary interest are handoff instances in which AI takes over tasks previously performed by humans, for example, labelling images, processing and producing natural language, controlling other machines, predicting human action (and other events), and making decisions. Grounded in past work in social studies of technology and values in design, the handoff analytical model disrupts the idea that if components of a system are modular in functional terms, replacing one with another will leave ethical and political dimensions intact. Instead, the handoff lens highlights different ways that different types of system components operate and interoperate and shows these differences to be relevant to the configuration of values in respective systems. The handoff lens offers a means of making salient ethically relevant changes that might otherwise be overlooked.

Mulligan, Deirdre K. and Helen Nissenbaum. 2020. "The Concept of Handoff as a Model for Ethical Analysis and Design." In Markus D. Dubber, Frank Pasquale, and Sunit Das (Eds.), The Oxford Handbook of Ethics of AI.

February 2020

Marion Fourcade and Daniel N Kluttz

A Maussian bargain: Accumulation by gift in the digital economy

The harvesting of data about people, organizations, and things and their transformation into a form of capital is often described as a process of “accumulation by dispossession,” a pervasive loss of rights buttressed by predatory practices and legal violence. Yet this argument does not square well with the fact that enrollment into digital systems is often experienced (and presented by companies) as a much more benign process: signing up for a “free” service, responding to a “friend’s” invitation, or being encouraged to “share” content. In this paper, we focus on the centrality of gifting and reciprocity to the business model and cultural imagination of digital capitalism. Relying on historical narratives and in-depth interviews with the designers and critics of digital systems, we explain the cultural genesis of these “give-to-get” relationships and analyze the socio-technical channels that structure them in practice. We suggest that the economic relation that develops as a result of a digital gift offering not only masks the structural asymmetry between giver and gifted but also permits the creation of the new commodity of personal data, obfuscates its true value, and naturalizes its private appropriation. We call this unique regime “accumulation by gift.”

Fourcade, M., & Kluttz, D. N. (2020). A Maussian bargain: Accumulation by gift in the digital economy. Big Data & Society, 7(1).

January 2020

R. Stuart Geiger, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang

Garbage in, garbage out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?

In this paper, we investigate to what extent machine learning application papers in social computing (specifically, a sample of papers from ArXiv and traditional publications performing an ML classification task on Twitter data) give specific details about how their human-labeled training data was produced and whether best practices in human annotation were followed.
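
One labeling best practice of the kind the study checks for is reporting inter-annotator agreement. As a purely illustrative sketch (not from the paper, with invented labels), Cohen's kappa gives a standard chance-corrected measure of agreement between two annotators:

```python
# Illustrative only: computing inter-annotator agreement with Cohen's kappa.
# The labels below are invented; kappa = 1.0 means perfect agreement,
# kappa = 0.0 means agreement no better than chance.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["spam", "ham", "spam", "spam", "ham", "ham", "spam", "ham"]
annotator_2 = ["spam", "ham", "ham",  "spam", "ham", "spam", "spam", "ham"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```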

Geiger, R. Stuart, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang. 2020. Garbage in, Garbage out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20).

November 2019

Jenna Burrell, Zoe Kahn, Anne Jonas, and Daniel Griffin

When Users Control the Algorithms: Values Expressed in Practices on Twitter

By examining tweets about the "Twitter algorithm," we consider the wide range of concerns and desires Twitter users express. We find that a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives.

Burrell, Jenna, Zoe Kahn, Anne Jonas, and Daniel Griffin. 2019. When Users Control the Algorithms: Values Expressed in Practices on Twitter. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Austin, TX.

November 2019

Deirdre K. Mulligan, Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong

This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology

We examine the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of discipline-specific conceptualizations, and the role such tools play in furthering research and practice. The paper outlines different conceptions of "fairness" deployed in discussions about computer systems and provides an analytic tool for interdisciplinary discussions.

Mulligan, Deirdre K., Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong. 2019. This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Austin, TX.

August 2019

Daniel Kluttz and Deirdre K. Mulligan

Automated Decision Support Technologies and the Legal Profession

Through in-depth, semi-structured interviews with experts in this space, we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping, and being reshaped by, predictive coding systems.

Kluttz, Daniel and Deirdre K. Mulligan. 2019. Automated Decision Support Technologies and the Legal Profession. Berkeley Technology Law Journal.

January 2019

Anne Jonas and Jenna Burrell

Friction, Snake Oil, and Weird Countries: Cybersecurity Systems Could Deepen Global Inequality through Regional Blocking

Through participant-observation at relevant events and intensive interviews with experts, we document the quest by professionals tasked with preserving online security to use new machine-learning-based techniques to develop a “fairer” system to determine patterns of “good” and “bad” usage.

Jonas, Anne and Jenna Burrell. 2019. Friction, Snake Oil, and Weird Countries: Cybersecurity Systems Could Deepen Global Inequality through Regional Blocking. Big Data & Society, 6(1).

January 2019

Deirdre Mulligan, Daniel Kluttz, and Nitin Kohli

Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions

Algorithmic systems, particularly those based on machine learning, are increasingly being used to help us reason and make decisions. Effective systems that also align with societal values require not only designs that foster in-the-moment human engagement with such systems but also governance models that support ongoing critical engagement with system processes and outputs. Using the case of expert decision-support systems, we introduce the concept of contestability. We argue that contestability has distinct advantages over transparency and explainability, two policy objectives often offered as antidotes to the challenges posed by black-box algorithmic systems. We then discuss contestable design and governance principles as applied to clinical decision support (CDS) systems used by health-care professionals to aid medical decisions. We explain current governance frameworks around the use of these systems — particularly laws and professional standards — and point out their limitations. We argue that approaches focused on contestability better promote professionals’ continued, active engagement with algorithmic systems than current frameworks.

Mulligan, Deirdre K., Daniel Kluttz, and Nitin Kohli. 2019. Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. Draft available at SSRN: http://dx.doi.org/10.2139/ssrn.3311894

January 2019

McKane Andrus and Thomas Krendl Gilbert

Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning

While formal definitions of fairness in machine learning (ML) have been proposed, its place within a broader institutional model of fair decision-making remains ambiguous. In this paper we interpret ML as a tool for revealing when and how measures fail to capture purported constructs of interest, augmenting a given institution's understanding of its own interventions and priorities. Rather than codifying "fair" principles into ML models directly, the use of ML can thus be understood as a form of quality assurance for existing institutions, exposing the epistemic fault lines of their own measurement practices. Drawing from Friedler et al.'s [2016] recent discussion of representational mappings and previous discussions on the ontology of measurement, we propose a social measurement assurance program (sMAP) in which ML encourages expert deliberation on a given decision-making procedure by examining unanticipated or previously unexamined covariates. As an example, we apply Rawlsian principles of fairness to sMAP and produce a provisional just theory of measurement that would guide the use of ML for achieving fairness in the case of child abuse in Allegheny County.
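
One way to make the sMAP idea concrete is a small sketch, not taken from the paper and using entirely hypothetical names and data: fit a model to an institution's recorded decisions, then test whether a covariate outside the stated construct of interest carries predictive signal. A non-trivial signal is treated as a prompt for expert deliberation about the measure, not as something to codify into the model.

```python
# Hypothetical sketch of ML as measurement quality assurance (sMAP-style).
# If decisions are well measured by the stated construct, covariates
# outside that construct should carry little predictive signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
construct = rng.normal(size=n)    # what the measure is supposed to track
unexamined = rng.normal(size=n)   # a covariate outside the stated construct
# Invented decisions that quietly depend on the unexamined covariate:
y = (construct + 0.8 * unexamined + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([construct, unexamined])
model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, mean in zip(["construct", "unexamined"], result.importances_mean):
    print(f"{name:>10}: permutation importance = {mean:.3f}")
# A non-trivial importance for `unexamined` exposes a fault line in the
# measurement practice and is surfaced for deliberation, not automated.
```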

Andrus, McKane and Thomas Krendl Gilbert. 2019. Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning. In Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. Honolulu, HI.

January 2019

Eva Yiwei Wu, Emily Pedersen, Niloufar Salehi

Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas

Online content creators have to manage their relations with opaque, proprietary algorithms that platforms employ to rank, filter, and recommend content. How do content creators make sense of these algorithms and what does that teach us about the roles that algorithms play in the social world? We take the case of YouTube because of its widespread use and the spaces for collective sense-making and mutual aid that content creators (YouTubers) have built within the last decade. We engaged with YouTubers in one-on-one interviews, performed content analysis on YouTube videos that discuss the algorithm, and conducted a wiki survey on YouTuber online groups. This triangulation of methodologies afforded us a rich understanding of content creators' understandings, priorities, and wishes as they relate to the algorithm. We found that YouTubers assign human characteristics to the algorithm to explain its behavior; what we have termed algorithmic personas. We identify three main algorithmic personas on YouTube: Agent, Gatekeeper, and Drug Dealer. We propose algorithmic personas as a conceptual framework that describes the new roles that algorithmic systems take on in the social world. As we face new challenges around the ethics and politics of algorithmic platforms such as YouTube, algorithmic personas describe roles that are familiar and can help develop our understanding of algorithmic power relations and accountability mechanisms.

Wu, Eva Yiwei, Emily Pedersen, and Niloufar Salehi. 2019. Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Austin, TX.

August 2018

Jenna Burrell, Deirdre Mulligan, Daniel Kluttz, Andrew Smart, Joshua A. Kroll, and Amit Elazari

Report from the First AFOG Summer Workshop

In response to the overwhelming attention to issues of algorithmic fairness framed narrowly in terms of technical solutions, this workshop sought to lay the groundwork for broadening the ‘solution space’ of responsible AI to include not only technical implementations like algorithms or user-interface design but also law and policy, standards-setting, incentive programs, organizational structures, labor organizing, and direct action.

Burrell, Jenna, Deirdre K. Mulligan, and Daniel N. Kluttz. "Report from the first AFOG Summer Workshop." UC Berkeley Algorithmic Fairness and Opacity Working Group. (2018).

January 2018

Roel Dobbe, Sarah Dean, Thomas Gilbert, and Nitin Kohli

A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics

Machine learning (ML) is increasingly deployed in real world contexts, supplying actionable insights and forming the basis of automated decision-making systems. While issues resulting from biases pre-existing in training data have been at the center of the fairness debate, these systems are also affected by technical and emergent biases, which often arise as context-specific artifacts of implementation. This position paper interprets technical bias as an epistemological problem and emergent bias as a dynamical feedback phenomenon. In order to stimulate debate on how to change machine learning practice to effectively address these issues, we explore this broader view on bias, stress the need to reflect on epistemology, and point to value-sensitive design methodologies to revisit the design and implementation process of automated decision-making systems.
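
The "dynamical feedback" framing of emergent bias can be illustrated with a toy simulation; the sketch below is not from the paper and all quantities are invented. Two groups have identical underlying incident rates, but inspections are allocated in proportion to incidents discovered so far and only inspected cases generate new data, so an arbitrary initial imbalance in attention persists rather than washing out:

```python
# Toy simulation of emergent bias as a feedback loop (illustrative only).
# Both groups have the SAME true incident rate; the disparity comes purely
# from how the system allocates data collection based on its own outputs.
import numpy as np

rng = np.random.default_rng(42)
TRUE_RATE = 0.3                      # identical underlying rate for A and B
discovered = {"A": 4.0, "B": 2.0}    # group A happens to start over-sampled

for rnd in range(1, 21):
    total = discovered["A"] + discovered["B"]
    for g in discovered:
        # inspections allocated by each group's share of past discoveries
        n_inspect = int(100 * discovered[g] / total)
        discovered[g] += rng.binomial(n_inspect, TRUE_RATE)
    share_a = discovered["A"] / (discovered["A"] + discovered["B"])
    print(f"round {rnd:2d}: share of attention on group A = {share_a:.2f}")
```

In typical runs the share stays near its initial imbalance rather than drifting toward 0.5, even though the two groups are statistically identical; the bias is an artifact of the decision-data feedback, not of the underlying population.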

Dobbe, Roel, Sarah Dean, Thomas Gilbert, and Nitin Kohli (2018) A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. Presented at the 2018 Workshop on Fairness, Accountability and Transparency in Machine Learning during ICML 2018. Stockholm, Sweden.