Publications

AFOG members publish work on ensuring that technical systems support equitable and just societies.

July 2020

Deirdre K. Mulligan and Helen Nissenbaum

The Concept of Handoff as a Model for Ethical Analysis and Design


Mulligan, Deirdre K. and Helen Nissenbaum. 2020. The Concept of Handoff as a Model for Ethical Analysis and Design. In Markus D. Dubber, Frank Pasquale, and Sunit Das (Eds.), The Oxford Handbook of Ethics of AI.

January 2020

R. Stuart Geiger, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang

Garbage in, garbage out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?

In this paper, we investigate to what extent machine learning application papers in social computing, specifically a sample of papers from ArXiv and traditional publications performing an ML classification task on Twitter data, give specific details about whether best practices around human-labeled training data were followed.


Geiger, R. Stuart, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang. 2020. Garbage in, garbage out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20). [PDF]

November 2019

Deirdre K. Mulligan, Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong

This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology

We examine the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations, discuss the role such tools play in furthering research and practice, outline different conceptions of "fairness" deployed in discussions about computer systems, and provide an analytic tool for interdisciplinary discussions.


Mulligan, Deirdre K., Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong. 2019. This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Austin, TX. [PDF]

November 2019

Jenna Burrell, Zoe Kahn, Anne Jonas, and Daniel Griffin

When Users Control the Algorithms: Values Expressed in Practices on Twitter

By examining tweets about the "Twitter algorithm," we consider the wide range of concerns and desires Twitter users express. We find that a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives.


Burrell, Jenna, Zoe Kahn, Anne Jonas, and Daniel Griffin. 2019. When Users Control the Algorithms: Values Expressed in Practices on Twitter. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Austin, TX. [PDF]

August 2019

Daniel Kluttz and Deirdre K. Mulligan

Automated Decision Support Technologies and the Legal Profession

Through in-depth, semi-structured interviews with experts in this space, we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems.


Kluttz, Daniel and Deirdre K. Mulligan. 2019. Automated Decision Support Technologies and the Legal Profession. Berkeley Technology Law Journal.

January 2019

Anne Jonas and Jenna Burrell

Friction, snake oil, and weird countries: Cybersecurity systems could deepen global inequality through regional blocking

Through participant-observation at relevant events and intensive interviews with experts, we document the quest by professionals tasked with preserving online security to use new machine-learning-based techniques to develop a "fairer" system for determining patterns of "good" and "bad" usage.


Jonas, Anne and Jenna Burrell. 2019. Friction, Snake Oil, and Weird Countries: Cybersecurity Systems Could Deepen Global Inequality through Regional Blocking. Big Data & Society, 6, 1. Jan 2019. [PDF]

January 2019

Eva Yiwei Wu, Emily Pedersen, and Niloufar Salehi

Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas

Online content creators have to manage their relations with opaque, proprietary algorithms that platforms employ to rank, filter, and recommend content. How do content creators make sense of these algorithms, and what does that teach us about the roles that algorithms play in the social world? We take the case of YouTube because of its widespread use and the spaces for collective sense-making and mutual aid that content creators (YouTubers) have built within the last decade. We engaged with YouTubers in one-on-one interviews, performed content analysis on YouTube videos that discuss the algorithm, and conducted a wiki survey on YouTuber online groups. This triangulation of methodologies afforded us a rich understanding of content creators' understandings, priorities, and wishes as they relate to the algorithm. We found that YouTubers assign human characteristics to the algorithm to explain its behavior, which we term algorithmic personas. We identify three main algorithmic personas on YouTube: Agent, Gatekeeper, and Drug Dealer. We propose algorithmic personas as a conceptual framework that describes the new roles that algorithmic systems take on in the social world. As we face new challenges around the ethics and politics of algorithmic platforms such as YouTube, algorithmic personas describe roles that are familiar and can help develop our understanding of algorithmic power relations and accountability mechanisms.


Wu, Eva Yiwei, Emily Pedersen, and Niloufar Salehi. 2019. Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Austin, TX. [PDF]

January 2019

McKane Andrus and Thomas Krendl Gilbert

Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning

While formal definitions of fairness in machine learning (ML) have been proposed, its place within a broader institutional model of fair decision-making remains ambiguous. In this paper we interpret ML as a tool for revealing when and how measures fail to capture purported constructs of interest, augmenting a given institution's understanding of its own interventions and priorities. Rather than codifying "fair" principles into ML models directly, the use of ML can thus be understood as a form of quality assurance for existing institutions, exposing the epistemic fault lines of their own measurement practices. Drawing from Friedler et al.'s [2016] recent discussion of representational mappings and previous discussions on the ontology of measurement, we propose a social measurement assurance program (sMAP) in which ML encourages expert deliberation on a given decision-making procedure by examining unanticipated or previously unexamined covariates. As an example, we apply Rawlsian principles of fairness to sMAP and produce a provisional just theory of measurement that would guide the use of ML for achieving fairness in the case of child abuse in Allegheny County.


Andrus, McKane and Thomas Krendl Gilbert. 2019. Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning. In Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. Honolulu, HI. [PDF]

January 2019

Deirdre Mulligan, Daniel Kluttz, and Nitin Kohli

Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions

Algorithmic systems, particularly those based on machine learning, are increasingly being used to help us reason and make decisions. Effective systems that also align with societal values require not only designs that foster in-the-moment human engagement with such systems but also governance models that support ongoing critical engagement with system processes and outputs. Using the case of expert decision-support systems, we introduce the concept of contestability. We argue that contestability has distinct advantages over transparency and explainability, two policy objectives often offered as antidotes to the challenges posed by black-box algorithmic systems. We then discuss contestable design and governance principles as applied to clinical decision support (CDS) systems used by health-care professionals to aid medical decisions. We explain current governance frameworks around the use of these systems — particularly laws and professional standards — and point out their limitations. We argue that approaches focused on contestability better promote professionals’ continued, active engagement with algorithmic systems than current frameworks.


Mulligan, Deirdre K., Daniel Kluttz, and Nitin Kohli. 2019. Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. Draft available at SSRN: http://dx.doi.org/10.2139/ssrn.3311894

August 2018

Jenna Burrell, Deirdre Mulligan, Daniel Kluttz, Andrew Smart, Joshua A. Kroll, and Amit Elazari

Report from the First AFOG Summer Workshop

In response to the overwhelming attention to issues of algorithmic fairness framed narrowly in terms of technical solutions, this workshop sought to lay the groundwork for broadening the ‘solution space’ of responsible AI to include not only technical implementations such as algorithms or user-interface design, but also law and policy, standards-setting, incentive programs, organizational structures, labor organizing, and direct action.


Burrell, Jenna, Deirdre K. Mulligan, and Daniel N. Kluttz. 2018. Report from the First AFOG Summer Workshop. UC Berkeley Algorithmic Fairness and Opacity Working Group.

January 2018

Roel Dobbe, Sarah Dean, Thomas Gilbert, and Nitin Kohli

A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics

Machine learning (ML) is increasingly deployed in real-world contexts, supplying actionable insights and forming the basis of automated decision-making systems. While issues resulting from biases pre-existing in training data have been at the center of the fairness debate, these systems are also affected by technical and emergent biases, which often arise as context-specific artifacts of implementation. This position paper interprets technical bias as an epistemological problem and emergent bias as a dynamical feedback phenomenon. In order to stimulate debate on how to change machine learning practice to effectively address these issues, we explore this broader view on bias, stress the need to reflect on epistemology, and point to value-sensitive design methodologies to revisit the design and implementation process of automated decision-making systems.


Dobbe, Roel, Sarah Dean, Thomas Gilbert, and Nitin Kohli. 2018. A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. Presented at the 2018 Workshop on Fairness, Accountability and Transparency in Machine Learning during ICML 2018. Stockholm, Sweden.