Symposium on Surveillance and Education

The Symposium on Surveillance and Education, co-sponsored by AFOG, Othering & Belonging Institute, Cal NERDS, and D-Lab - April 15, 2022

The 2022 Symposium on Surveillance and Education, held via Zoom on April 15, explored the roles of surveillance and policing in the classroom, on campus, and in the community. The symposium considered how layers of surveillance and policing affect relationships among students, faculty, staff, administrators, and members of the broader community. Featured speakers included community activists, faculty members, university administrators, and students. Each session included remarks from speakers as well as small-group discussion among symposium participants.

Below are descriptions of the sessions and recordings of the spotlight panels.


Ed Tech and Surveillance in Classrooms and on Campus

This session asked how the relationships among students, faculty, and staff have been reshaped by surveillance-based educational technologies such as remote proctoring, learning management system metrics, and virtual classroom monitoring.


  • Elana Zeide, University of Nebraska College of Law
  • Scott Seaborn, Campus Privacy Officer, University of California, Berkeley
  • Anna Gueorguieva, UC Berkeley Student

Elana Zeide is an Assistant Professor at the University of Nebraska College of Law. She teaches, writes, and consults about student privacy, artificial intelligence, and the modern-day "permanent record." Her work focuses on how school and workplace technologies affect education, equality, and access to opportunity. Recent publications include Robot Teachers, Pedagogy, and Policy; Student Privacy in the Age of Big Data; and The Structural Consequences of Big Data-Driven Education.


Scott Seaborn is the Campus Privacy Officer at the University of California, Berkeley. In this role, he has continued the UC Berkeley Privacy Office's tradition of promoting the principles of institutional transparency and autonomy privacy for individual data subjects. Scott has worked in the privacy and compliance fields for over 20 years, including positions at the U.S. Department of Health and Human Services, Office for Civil Rights (OCR); the County of Napa; and Genentech, Inc. While at OCR, Scott was part of the initial investigative team tasked with enforcing the HIPAA Privacy Rule when enforcement began in the early 2000s, and he has been passionate about protecting the privacy rights of individuals ever since.


Anna Gueorguieva is a third-year undergraduate at the University of California, Berkeley, studying Data Science and Legal Studies. She is interested in how data, society, and law shape one another, with the goal of promoting justice and equity. Her work has explored predictive analytics in academic settings and what it means to integrate ethics into data science research.

Policing, Disciplinary Structures, and Abolition

This session explored the disciplinary structures that arise through surveillance-based ed tech, contemplating methods for reform and abolition that can promote healthy relationships among students and educators.


  • Shea Swauger, University of Colorado Denver Auraria Library
  • Jennifer Jones, ACLU of Northern California

Shea Swauger is a queer, neurodivergent librarian and PhD student in Education and Critical Studies at the University of Colorado Denver. His research is on surveillance, privacy, abolition, and pedagogy. Otherwise, he has three chickens and a cat.


Jennifer Jones is a Staff Attorney for the Technology and Civil Liberties program at the ACLU of Northern California, where she defends and promotes civil rights and civil liberties in the digital age, with a focus on work at the intersection of government surveillance, immigrants' rights, and racial justice.


Surveillance Futures

1:00 - 2:00 pm PDT
This session envisioned new futures in education and surveillance, articulating means for fostering better technologies and communities.

  • Richmond Wong, University of California, Berkeley
