AFOG co-sponsors projects with UC Berkeley’s Center for Society, Technology, and Policy (CTSP)
Liza Gak, Seyi Olojo
Our project aims to uncover the emotionally predatory effects of online advertising on disenfranchised populations. While data collection for advertising has raised a variety of privacy and security concerns, online targeted advertising can also cause emotional harm due to the hyper-individualized, seemingly invasive nature of ad delivery. We are investigating how online targeted ads inflict emotional harm, and specifically how targeted diet and food ads harm people recovering from disordered eating. In this project, we use virtual ethnography and semi-structured interviews to understand how diet ads contribute to emotional distress and what strategies users adopt to mitigate the reach of these ads. We hope to continue our inquiry into emotional and psychological responses to harmful diet ads by analyzing the rise of collective action within online communities.
Jon Gillick, Jeremy Gordon, Pierre Tchetgen
The purpose of this project is to create a platform for embodied learning that builds the capacity of parents and teachers to interact with young learners in ways that help them understand and acquire the skills necessary for multimodal meaning-making and active social-emotional engagement via rhythm. The objectives of the project are to design, implement, and disseminate a system using machine learning and sensing technologies to gamify literacy skills development through rhythm and movement. This global community of practice will connect learners of all ages through rhythm as a common language and aims to create a bridge for children to learn, compete, and have fun together. Parents and teachers could use the system at home, in the classroom, or out in the community (playgrounds, parks) to encourage children's language and social-emotional development and track their improvement over time.
Darya Kaviani, Tonya Nguyen
Mutual aid groups increasingly rely on online infrastructure to carry out their operations. However, these groups suffer from burnout, dominance behaviors, and failures to address intersectional power structures. To address these problems, some groups have customized their own networked infrastructures as a form of political participation, producing an array of innovative structures including Zoom calls, ICE-raid hotlines, and automated systems for volunteer reimbursements. However, the best strategies and design implications for mutual aid and other systems of care remain unclear and understudied. We plan to explore how mutual aid technologies and infrastructures are designed, built, and maintained.
Andrew Chong and Emma Lurie
Online marketplaces, where firms like Uber and Amazon control the terms of economic interaction, exert an increasing influence on economic life. Algorithms on these platforms are drawing greater scrutiny: how price and quality characteristics are determined for different users, what end outcomes the algorithms optimize for, and ultimately, how the surplus created by these networks is allocated among buyers, sellers, and the platform. This project undertakes a systematic survey of perceptions of fairness among riders and drivers in ride-sharing marketplaces. We seek to carefully catalogue different notions of fairness among different groups, examining where they might cohere and where they might be in tension. We explore the obligations platform firms might have as custodians of market information and arbiters of market choice and structure, in order to contribute to developing public debate on what a “just” algorithmic regime for online marketplaces might look like.
Noura Howell and Noopur Raval
This project joins the “second wave” of AI scholars in examining structural questions around what constitutes the field of social concerns within current AI and Social Impact research. Under this project, we will map the ethical and social landscape of current AI research and its limits by conducting a critical and comparative content analysis of how social and ethical concerns have been represented over time at leading AI/ML conferences. Based on our findings, we will also develop a draft syllabus on ‘Global and Critical AI’ and will convene a one-day workshop to build vocabulary for such AI thinking and writing. With this project we aim to join the growing community at UC Berkeley and beyond in 1) identifying the dominant techno-imaginaries of AI and Social Impact research, and 2) critically and tactically expanding that field to bring diverse experiential, social, cultural, and political realities beyond Silicon Valley to bear upon AI thinking. Morgan Ames is also collaborating on this project.
Millie Chapman and Caleb Scoville
While human impacts on the rest of nature accelerate, our techniques for observing those impacts are rapidly outstripping our ability to react to them. Artificial Intelligence (AI) techniques are quickly being adopted in the environmental sphere not only to inform decisions by providing more useful datasets, but also to facilitate more robust decisions about complex natural resource and conservation problems. The rise of decision-making algorithms requires us to urgently ask: Whose values are shaping AI decision-making systems in natural resource management? In the shadow of this problem, our project seeks to understand the expansion of privately developed but publicly available environmental data and algorithms through a critical study of algorithmic governance. It aims to facilitate an analysis of how governments and nongovernmental entities deploy techniques of algorithmic conservation to aid in collective judgments about our complex and troubled relationship with our natural environments. Carl Boettiger is also a collaborator on the project.
This qualitative dissertation project investigates how the Chinese government and domestic technology companies are collaboratively constructing the country’s social credit system. Through interviews with government officials, tech industry representatives, lawyers, and academics, I argue that China’s government and tech firms rely on and influence one another in their efforts to engineer social trust through incentives of punishment and reward.
Sofia Gutierrez-Dewar, Mehtab Khan, and Joyce Lee
Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human emotion. Powered by artificial intelligence, emerging applications of affect recognition in the workplace raise pressing ethical and regulatory questions: what happens when an automated understanding of human affect enters the real world, in the form of systems that have life-altering consequences? This is particularly pertinent in the realm of workplace surveillance, with no clear answers about how to address privacy, bias, and discrimination problems. As the underlying technologies are generally proprietary and therefore opaque, their impact can only be assessed with a deeper look into how they are designed and implemented. In collaboration with Coworker.org, a nonprofit that helps people organize for improvements in their jobs and workplace, we thus aim to evaluate applications of affect recognition and the potential risks and implications of these technologies.
Eric Harris Bernstein, Julia Hubbell, Nandita Sampath, and Matthew Spring
Advanced analytical software is changing the dynamics between workers and their employers, exacerbating the existing power asymmetry. Combined with AI, technologies like facial recognition, email monitoring, and audio recordings can all be analyzed to infer workers’ emotions and behavior to determine facets of worker productivity or whether an employee is, for example, “threatening.” This technology often reinforces racial and gender bias, and little is known about how the results of these analyses affect managerial decisions like promotions and terminations. Not only is this surveillance a huge loss of privacy for employees, but it may also have a negative impact on their stress levels or ability to perform in the workplace. Our project will investigate the different workplace surveillance technologies on the market and their effects on workers, and then provide potential policy responses to these issues.
Fellows: Zoe Kahn, Amy Turner, Michell Chen, Mahmoud Hamsho, and Yuval Barash
Governments are increasingly using technology to allocate scarce social service resources, such as housing services. In collaboration with a Continuum of Care in Northern California, this project will use qualitative research methods (e.g., interviews, participatory design, and usability testing) to conduct a needs assessment and system recommendation around “matching” unhoused people to appropriate services. Our goal is to identify matching systems (or design requirements) that suit the needs of diverse housing service providers across the county without compromising the needs and personal information of vulnerable populations. In addition to efficiency, we will consider how systems handle values such as privacy, security, autonomy, dignity, safety, and resiliency.
Angela Okune, Leah Horgan, and Anne Jonas
From the Bill and Melinda Gates Foundation to the Chan Zuckerberg Initiative (CZI), tech billionaires have undertaken development projects that address poverty, disease, education, global climate change, gender inequality, and other urgent social issues. This project seeks to understand how development is framed as a global “skills problem” through the lens of Silicon Valley logics and characterized as a problem of moral and humanitarian concern in need of technological intervention. This interdisciplinary, collaborative team proposes to understand how implicit, explicit, and sometimes contested desires for “scale,” “standardization,” and “sustainability” inform programming, funding, and evaluation in and of technologically oriented foundations and firms. The project will leverage ethnographic insights derived from participant observation at relevant events in the Bay Area and Los Angeles, in-depth interviews with key stakeholders working on technology and education/training, and textual analysis of artifacts and materials including training manuals, academic rubrics, blog posts, and reports.
MLUX (“Machine Learning and User Experience”) is a professional meetup group focused on building a community around the emerging field of human-centered machine learning, meeting in San Francisco for monthly tech talks. We are professional UX designers and researchers, data scientists, PMs, developers, and everyone in between, and we aim to organize a community that fosters cooperation, creativity, and learning across the UX and data science disciplines. One of the key areas of interest in this field is understanding how to design and use data science effectively and ethically. By partnering with CTSP and AFOG, we are excited to host an event centered around “Designing and Using Data Science Ethically,” aimed at helping tech professionals share best practices and lessons learned from the field. MLUX Website: https://www.mluxsf.com/
Morgan Ames and Roel Dobbie
Machine learning has undergone a renaissance of methods in the last six years and is being quietly introduced into nearly every aspect of our daily lives. In many instances, though, it is deployed by private companies as proprietary software with little oversight. This creates a widespread impression that machine learning is a ‘black box’ with little hope for supervision or regulation. With this project, we aim to join a growing community of researchers focused on unpacking this black box. First, we seek to map the disconnect between public conceptions and the actual processes of machine learning, illuminating how contemporary machine learning is done. Second, we seek to intervene in the process of defining and tuning machine learning models themselves, using the framework of value-sensitive design as a point of departure, to understand the values-related challenges in the design of machine learning systems.