AFOG's 2022 Panel Series will explore the connections between Justice and Content Governance.
We invite you to join us on September 9, September 23, October 7, and October 21 for our Justice and Content Governance Panel Series!
The series will address content moderation, algorithmic assemblages, online harm, restorative and transformative justice, infrastructure, law and policy, and other pressing issues.
Addressing harms related to online content has proven to be a major challenge. These harms range from interpersonal harassment and abuse to dispersed attacks on the public's understanding of important information about historic human rights atrocities, the safety and efficacy of public health interventions, and the outcomes of free and fair elections. Online platforms have developed a series of technical and legal frameworks for moderating content, including the review and removal of content and the banning of repeat offenders. While consensus largely exists around the importance of mitigating and providing meaningful remedies for harms such as harassment, abuse, stalking, and disinformation, major open questions remain about what effectively addressing those harms might look like and how it can be achieved. Moreover, the remedies platforms offer to victims of harm are often considered unsatisfactory. Procedures for remedy tend to operate through flawed content removal systems instead of offering paths toward justice, healing, and restoration. Finally, addressing the harms related to online content is complicated by platform norms of non-intervention, First Amendment limits, and the trans-jurisdictional scope of platform operations.
This panel series aims to open up the solution space for addressing harms arising from online content by shifting the perspective in two key ways:
First, we aim to open new ways of thinking about the sites, interactions, and logics that produce harmful content. Deploying concepts such as infrastructures, repair, and algorithmic assemblages, we hope to surface interventions that may alter the production and circulation of harm and harmful content. Instead of after-the-fact harm remedies, we ask: what interventions might reshape the practices and logics of assembling content and reduce the production or negative impact of harms arising from online content?
Second, current content moderation practices often follow a punitive justice approach, which punishes offenders in proportion to the offense. Platforms' punitive remedial frameworks have often proven non-responsive to individuals experiencing harm and at times have caused more harm. In recent years, researchers have begun to apply alternative justice frameworks, such as restorative justice and transformative justice, to the online context; these frameworks center the needs of victims and address the structural issues that enable or amplify harm. What new models for remediation can we identify through alternative justice frameworks such as restorative justice?
Throughout the series, we hope to explore these and related questions.
Panel 1: Constructions of Justice and "Ethical Tech"
September 9, 2022, 9:30-11:00am Pacific
Panelists: Anna Lauren Hoffmann and Hadar Dancig-Rosenberg
Panel 2: Refiguring Systems and Justice
September 23, 2022, 9:30-11:00am Pacific
Panelists: Amy Hasinoff and TBA
Panel 3: Infrastructures, Assemblages, and Ecosystems
October 7, 2022, 9:30-11:00am Pacific
Panelists: Nick Seaver and Julie Cohen
Panels 1-3 will take place as Zoom webinars. No registration is necessary to join, and links for each webinar are forthcoming.
Workshop: Restorative Justice in Action
October 21, 2022, 9:30-11:00am Pacific
Hosts: Julie Shackford-Bradley, Niloufar Salehi, and Sijia Xiao
To participate in the Restorative Justice in Action Workshop, you will need to register in advance to receive a Zoom link. The registration link is forthcoming.
Dr. Anna Lauren Hoffmann is an Assistant Professor at The Information School at the University of Washington, where she is co-founder and co-director of the UW iSchool's AfterLab. She is also a senior fellow with the Center for Applied Transgender Studies and affiliate faculty with the UW iSchool's DataLab. Prior to joining the UW iSchool, she was a postdoctoral scholar at the UC Berkeley School of Information and received her PhD from the School of Information Studies at the University of Wisconsin-Milwaukee. Dr. Hoffmann's work has appeared in academic venues such as New Media & Society, Review of Communication, JASIST, and Information, Communication & Society, and her research has been supported by the National Science Foundation. In addition, her public writing has appeared in The Guardian, Slate, The Seattle Times, and The Los Angeles Review of Books. She lives in Seattle, WA with her wife and two kids.
Professor Hadar Dancig-Rosenberg is a Professor of Law at the Bar-Ilan University Faculty of Law. She is a co-founder and co-chair of the Israeli Criminal Law Association. She specializes in criminal law and procedure, and her areas of expertise include the philosophy of criminal law, non-adversarial criminal justice, therapeutic jurisprudence, and the interface between criminal and constitutional law.
Dr. Amy Hasinoff is an Associate Professor in the Department of Communication at the University of Colorado Denver. She studies gender, sexuality, and new media. Her book, Sexting Panic, examines the construction of sexting as a social problem and the responses to it in mass media, law, and education; it won the National Communication Association's Diamond Anniversary Book Award in 2016. Her research also appears in journals such as New Media & Society, International Journal of Communication, Communication and Critical/Cultural Studies, Critical Studies in Media Communication, and Feminist Media Studies.
Dr. Nick Seaver is an anthropologist who studies how people use technology to make sense of cultural things. He is an Assistant Professor in the Department of Anthropology at Tufts University, where he also directs the program in Science, Technology, and Society. His first book is about the people who make music recommender systems and how they think about their work. The book is titled Computing Taste: Algorithms and the Makers of Music Recommendation. He is currently studying the rise of attention as a value and virtue in machine learning worlds, from the new tech humanism to the infrastructure of neural networks.
Professor Julie Cohen teaches and writes about surveillance, privacy and data protection, intellectual property, information platforms, and the ways that networked information and communication technologies are reshaping legal institutions. She is the author of Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford University Press, 2019), Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press, 2012), and numerous articles and book chapters, and she is a co-author of Copyright in a Global Information Economy (Wolters Kluwer, 5th ed. 2020). She is a faculty co-director of the Institute for Technology Law and Policy, a faculty advisor of the Center on Privacy and Technology, and a member of the Advisory Board of the Electronic Privacy Information Center.
Dr. Julie Shackford-Bradley is the co-founder and Coordinator of the Restorative Justice Center at UC Berkeley. She has 15 years of experience teaching in Global Studies and Peace and Conflict Studies, with a research focus on traditional and community-based justice in international and local contexts. She is a trained mediator and RJ practitioner. With the RJ Center, she conducts trainings and circles on the UC Berkeley campus and in the local community, supervises research projects on campus and community issues pertaining to conflict, justice, and reconciliation, and facilitates internship programs and other collaborations with San Francisco Bay Area restorative justice organizations. She has facilitated trainings for staff, faculty, and students at Stanford, St. Mary's of California, Mills College, University of Puget Sound, and beyond. Her specific RJ interests include applications of restorative processes for SVSH (sexual violence and sexual harassment), equity and inclusion, and racial healing. Outside of work, she loves to hike around the Bay Area and create textile art with fabrics she has collected from around the world.
AFOG is generously sponsored by Google Trust and Safety and hosted at the UC Berkeley School of Information.