Justice and Content Governance Panel Series

AFOG's 2022 Panel Series will explore the connections between Justice and Content Governance.

We invite you to join us on September 9, September 23, and October 7 for our Justice and Content Governance Panel Series!

Our Justice and Content Governance Panel Series will address content moderation, algorithmic assemblages, online harm, restorative & transformative justice, infrastructure, law and policy, and other pressing issues.

Addressing harms related to online content has proven to be a major challenge. These harms range from interpersonal harassment and abuse to dispersed attacks on the public's understanding of important information about historic human rights atrocities, the safety and efficacy of public health interventions, and the outcomes of free and fair elections. Online platforms have developed a range of technical and legal frameworks for moderating content, including the review and removal of content and the banning of repeat offenders. While consensus largely exists around the importance of mitigating and providing meaningful remedies for harms such as harassment, abuse, stalking, and disinformation, major open questions remain about what effectively addressing those harms might look like and how it can be achieved. Moreover, the remedies platforms offer to victims of harm are often considered unsatisfactory: procedures for remedy tend to operate through flawed content removal systems instead of offering paths toward justice, healing, and restoration. Finally, addressing the harms related to online content is complicated by platform norms of non-intervention, First Amendment limits, and the trans-jurisdictional scope of platform operations.

This panel series aims to open up the solution space for addressing harms arising from online content by shifting the perspective in two key ways:

First, we aim to open new ways of thinking about the sites, interactions, and logics that produce harmful content. Deploying concepts such as infrastructures, repair, and algorithmic assemblages, we hope to surface interventions that may alter the production and circulation of harm and harmful content. Instead of after-the-fact harm remedies, we ask: what interventions might reshape the practices and logics of assembling content and reduce the production or negative impact of harms arising from online content?

Second, current content moderation practices often follow a punitive justice approach, which punishes offenders in proportion to the offense. Platforms' punitive remedial frameworks have often proven unresponsive to individuals experiencing harm and at times have caused more harm. In recent years, researchers have begun to apply alternative justice frameworks (e.g., restorative justice and transformative justice) to the online context; these frameworks center the needs of victims and address structural issues that enable or amplify harm. What new models for remediation can we identify through alternative justice frameworks such as restorative justice?

Throughout the series, we hope to explore the following questions:

  • How can a focus on the production of content (its assembling) suggest different forms of collaboration or regulation? Or different sites of intervention?
  • How does centering restorative justice shape the goals and mechanisms for harm mitigation? Or open up new ideas about who might provide mechanisms to address harms?
  • How does thinking beyond “content” help us understand how relationships and social practices are shaped by infrastructures and algorithmic assemblages?
  • What are the relationships among content governance strategies and our conceptions of justice?
  • What are the responsibilities of various actors in the online ecosystem (sometimes referred to as “the stack”) in foreseeing and addressing particular harms and risks?
  • How can the frames of systemic risk and collective harm reshape our understanding of meaningful interventions?
  • What kinds of laws and regulations could promote or maintain more just systems?
  • What forms of collaborative infrastructure (such as technical standards or human fact-checkers) might assist in addressing harmful content arising from online assemblages?
  • How can users and citizens exercise agency in mitigating harm and reshaping the platforms they use?

Panel 1: Views from the Ground
September 9, 2022, 9:30-11:00am Pacific
Panelists TBA

Panel 2: Refiguring Justice
September 23, 2022, 9:30-11:00am Pacific
Panelists TBA

Panel 3: Infrastructures, Assemblages, and Ecosystems
October 7, 2022, 9:30-11:00am Pacific
Panelists TBA

Each of our sessions will be conducted via Zoom. To receive the link, you must register through Eventbrite (link forthcoming); the Zoom link will be emailed to you once you register.

Our panelists will include experts from across academia and industry. We will announce the full lineup in August.

In June of 2018, the Algorithmic Fairness and Opacity Working Group (AFOG) held a summer workshop with the theme “Algorithms are Opaque and Unfair: Now What?” The event was organized by Berkeley I School Professors (and AFOG co-directors) Jenna Burrell and Deirdre Mulligan, postdoc Daniel Kluttz, and Allison Woodruff and Jen Gennai of Google. Our working group is generously sponsored by Google Trust and Safety and hosted at the UC Berkeley School of Information.

Inspired by questions that came up at our biweekly working group meetings during the 2017-2018 academic year, we organized four panels for the workshop. The panel topics raised issues that we felt required deeper consideration and debate. To make progress, we brought together a diverse, interdisciplinary group of experts from academia, industry, and civil society in a workshop-style environment. In panel discussions, we considered potential ways of acting on algorithmic (un)fairness and opacity. We sought to consider the fullest possible range of ‘solutions,’ including technical implementations (algorithms, user-interface designs), law and policy, standard-setting, incentive programs, new organizational processes, labor organizing, and direct action.