AFOG consists of UC Berkeley faculty, postdocs, and graduate students and is housed at Berkeley’s School of Information.
Professor, School of Information
Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, and an affiliated faculty member of the new Hewlett-funded Berkeley Center for Long-Term Cybersecurity. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries conducted with UC Berkeley Law Professor Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. Mulligan recently chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a research agenda. She is a member of the National Academy of Sciences Forum on Cyber Resilience. She is Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; a founding member of the standing committee for the AI 100 project, a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector to engage in censorship and surveillance in violation of international human rights standards. She is a Commissioner on the Oakland Privacy Advisory Commission. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.
Mulligan was the policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at UC Berkeley, Carnegie Mellon University, Cornell University, Stanford University, and Vanderbilt University, and a PI on the multi-institution NSF-funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications; the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security; and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Values in design; governance of technology and governance through technology to support human rights/civil liberties; administrative law.
Domain of Application
Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), Discrimination, Privacy, Cybersecurity, Regulation generally.
[Temporarily on Leave] Assistant Professor, School of Information
Niloufar Salehi is an Assistant Professor in the School of Information at UC Berkeley. Her research interests are in social computing, technologically mediated collective action, digital labor, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier HCI venues, including CHI and CSCW. Through building computational social systems in collaboration with existing communities, controlled experiments, and ethnographic fieldwork, her research contributes to the design of alternative social configurations online.
Her current project looks at affect recognition used in automated hiring from a fairness and social justice perspective.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
employment/hiring, content filtering algorithms (e.g. YouTube, Facebook, Twitter)
Assistant Adjunct Professor in the School of Information and Interim Associate Director of Research for the Center for Science, Technology, Medicine, and Society
Morgan G. Ames is an Assistant Adjunct Professor in the School of Information and Interim Associate Director of Research for the Center for Science, Technology, Medicine, and Society at the University of California, Berkeley. Morgan’s research explores the role of utopianism in the technology world, and the imaginary of the “technical child” as fertile ground for this utopianism. Based on eight years of archival and ethnographic research, she is finishing a book manuscript on One Laptop per Child, which explores the motivations behind the project and the cultural politics of a model site in Paraguay.
Morgan was previously a postdoctoral researcher at the Intel Science and Technology Center for Social Computing at the University of California, Irvine, working with Paul Dourish. Morgan’s PhD is in communication (with a minor in anthropology) from Stanford, where her dissertation won the Nathan Maccoby Outstanding Dissertation Award in 2013. She also has a B.A. in computer science and M.S. in information science, both from the University of California, Berkeley. See http://bio.morganya.org for more.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Machine learning techniques, particularly deep neural networks, have become subjects of intense utopianism and dystopianism in the popular press. Alongside this rhetoric, scholars have been finding that these new machine learning techniques are not, and likely never will be, bias-free. I am interested in exploring both of these topics and how they interconnect.
Domain of Application
General Machine Learning (not domain specific).
Associate Professor, School of Information
Joshua Blumenstock is an Associate Professor at the UC Berkeley School of Information and the Director of the Data-Intensive Development Lab. His research lies at the intersection of machine learning and development economics and focuses on using novel data and methods to better understand the causes and consequences of global poverty. At Berkeley, Joshua teaches courses in machine learning and data-intensive development. Previously, Joshua was on the faculty at the University of Washington, where he founded and co-directed the Data Science and Analytics Lab and led the school’s Data for Social Good initiative. He has a Ph.D. in Information Science and an M.A. in Economics from UC Berkeley, and Bachelor’s degrees in Computer Science and Physics from Wesleyan University. He is a recipient of the Intel Faculty Early Career Honor, a Gates Millennium Grand Challenge award, and a Google Faculty Research Award, and is a former fellow of the Thomas J. Watson Foundation and the Harvard Institutes of Medicine.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Credit, Fraud/Spam, Network Security, General Machine Learning (not domain specific), Economics
Assistant Professor of Technology Policy, Governance, & Society, School of Information & Goldman School of Public Policy
Diag Davenport is a behavioral economist studying technological and social problems that drive inequality. He is currently an Assistant Professor of Technology Policy, Governance, and Society at UC Berkeley with appointments in the Goldman School of Public Policy and the School of Information. He conducts research in three areas: empowering good ideas, responsible AI, and cultural evolution. His work develops theory by blending natural, field, and lab experiments. He typically focuses on applications relevant to criminal justice reform, tech policy, and the future of work. Before joining UC Berkeley, Diag was a Presidential Postdoctoral Research Fellow at Princeton. Diag earned his PhD in Behavioral Science from the University of Chicago Booth School of Business. He also has a master’s in Mathematics and Statistics from Georgetown and bachelor’s degrees in Economics and Management from Penn State.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Professor, Sociology
I am a Professor of Sociology at UC Berkeley and an associate fellow of the Max Planck Sciences Po Center on Coping with Instability in Market Societies (MaxPo). A comparative sociologist by training and taste, I am interested in national variations in knowledge and practice. My first book, Economists and Societies (Princeton University Press, 2009), explored the distinctive character of the discipline and profession of economics in three countries. A second book, The Ordinal Society (with Kieran Healy), is under contract; it investigates new forms of social stratification and morality in the digital economy. Other recent research focuses on the valuation of nature in comparative perspective; the moral regulation of states; the comparative study of political organization (with Evan Schofer and Brian Lande); the microsociology of courtroom exchanges (with Roi Livne); the sociology of economics (with Etienne Ollion and Yann Algan, and with Rakesh Khurana); and the politics of wine classifications in France and the United States (with Rebecca Elliott and Olivier Jacquet).
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Credit, General Machine Learning (not domain specific), Health, Employment/Hiring.
Professor, UCSD Department of Communication; Fall 2024 Visiting Professor, Berkeley School of Information
Lilly Irani is a Professor of Communication & Science Studies at the University of California, San Diego, where she co-directs the Just Transitions Initiative. She also serves as faculty in the Design Lab, the Institute for Practical Ethics, and the program in Critical Gender Studies. She is the author of Chasing Innovation: Making Entrepreneurial Citizens in Modern India (Princeton University Press, 2019) and Redacted (with Jesse Marx) (Taller California, 2021). Chasing Innovation was awarded the 2020 International Communication Association Outstanding Book Award and the 2019 Diana Forsythe Prize for feminist anthropological research on work, science, or technology, including biomedicine. Her research examines the cultural politics of high-tech work and the counter-practices it generates, drawing on her experience as an ethnographer, a designer, and a former technology worker. She is a co-founder of the digital worker advocacy organization Turkopticon. Her work has appeared in ACM SIGCHI; New Media & Society; Science, Technology & Human Values; South Atlantic Quarterly; and other venues. She sits on the Editorial Committee of Public Culture and on the Editorial Advisory Boards of New Technology, Work, and Employment and Design and Culture. She has a Ph.D. in Informatics from the University of California, Irvine.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Distinguished Haas Professor, School of Law
Professor Sonia Katyal’s award-winning scholarly work focuses on the intersection of technology, intellectual property, and civil rights (including antidiscrimination, privacy, and freedom of speech).
Prof. Katyal’s current projects focus on the intersection between internet access and civil/human rights, with a special emphasis on the right to information; artificial intelligence and discrimination; trademarks and advertising; source code and the impact of trade secrecy; and a variety of projects on the intersection between gender and the commons. As a member of the university-wide Haas LGBT Cluster, Professor Katyal also works on matters regarding law, gender and sexuality.
Professor Katyal’s recent publications include The Numerus Clausus of Sex, in the University of Chicago Law Review; Technoheritage, in the California Law Review; Rethinking Private Accountability in the Age of Artificial Intelligence, in the UCLA Law Review; The Paradox of Source Code Secrecy, in the Cornell Law Review (forthcoming); Transparenthood in the Michigan Law Review (with Ilona Turner) (forthcoming); and Platform Law and the Brand Enterprise in the Berkeley Journal of Law and Technology (with Leah Chan Grinvald).
Katyal’s past projects have studied the relationship between informational privacy and copyright enforcement; the impact of advertising, branding and trademarks on freedom of expression; and issues relating to art and cultural property, focusing on new technologies and the role of museums in the United States and abroad.
Professor Katyal is the co-author of Property Outlaws (Yale University Press, 2010) (with Eduardo M. Peñalver), which studies the intersection between civil disobedience and innovation in property and intellectual property frameworks. Professor Katyal has won several awards for her work, including an honorable mention in the American Association of Law Schools Scholarly Papers Competition, a Yale Cybercrime Award, and twice received a Dukeminier Award from the Williams Project at UCLA for her writing on gender and sexuality.
She has previously published with a variety of law reviews, including the Yale Law Journal, the University of Pennsylvania Law Review, Washington Law Review, Texas Law Review, and the UCLA Law Review, in addition to a variety of other publications, including the New York Times, the Brooklyn Rail, Washington Post, CNN, Boston Globe’s Ideas section, Los Angeles Times, Slate, Findlaw, and the National Law Journal. Katyal is also the first law professor to receive a grant through the Creative Capital/Warhol Foundation for her forthcoming book, Contrabrand, which studies the relationship between art, advertising, and trademark and copyright law.
In March of 2016, Katyal was selected by U.S. Commerce Secretary Penny Pritzker to be part of the inaugural U.S. Commerce Department’s Digital Economy Board of Advisors. Katyal also serves as an Affiliate Scholar at Stanford Law’s Center for Internet and Society, and is a founding advisor to the Women in Technology Law organization. She also serves on the Executive Committee for the Berkeley Center for New Media (BCNM), on the Advisory Board for Media Studies at UC Berkeley, and on the Advisory Board of the CITRIS Policy Lab.
Before entering academia, Professor Katyal was an associate specializing in intellectual property litigation in the San Francisco office of Covington & Burling. Professor Katyal also clerked for the Honorable Carlos Moreno (later a California Supreme Court Justice) in the Central District of California and the Honorable Dorothy Nelson in the U.S. Court of Appeals for the Ninth Circuit.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Lecturer, Interdisciplinary Studies
I study computing infrastructures and their relationship to work, labor, and expertise.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
My new project tries to understand the tensions in data science between domain expertise and machine learning; this is an issue that is salient to the question of opacity and interpretability.
Domain of Application
General Machine Learning (not domain specific), Health, Employment/Hiring, Education.
Professor, School of Information
AnnaLee (Anno) Saxenian is a professor in the School of Information at the University of California, Berkeley. Her scholarship focuses on regional economies and the conditions under which people, ideas, and geographies combine and connect into hubs of economic activity. She was Dean of the School of Information from 2004-2019, and upon stepping down she received the Berkeley Citation "for distinguished achievement and notable service to the University." She has served as a member of the Apple Academic Advisory Board, and Chair of the Advisory Committee for the National Science Foundation Division of Social, Behavioral, and Economic Sciences (NSF-SBE).
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Senior Researcher, International Computer Science Institute
Michael Carl Tschantz received a Ph.D. from Carnegie Mellon University in 2012 and a Sc.B. from Brown University in 2005, both in Computer Science. Before becoming a researcher at the International Computer Science Institute in 2014, he did two years of postdoctoral research at the University of California, Berkeley. He uses the models of artificial intelligence and statistics to solve the problems of privacy and security. His interests also include experimental design, formal methods, and logics. His current research includes automating information flow experiments, circumventing censorship, and securing machine learning. His dissertation formalized and operationalized what it means to use information for a purpose.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
My prior work has looked at detecting discrimination in online advertising. My ongoing work is looking at how people understand mathematical models of discrimination.
Domain of Application
General machine learning (not domain specific), Advertising
Researcher, Google
Andrew Smart is a researcher at Google in the Trust & Safety organization, working on algorithmic fairness, opacity, and accountability. His research at Google is on internal ethical audits of machine learning systems, causality in machine learning, and understanding structural vulnerability in society. His background is in philosophy, anthropology, and cognitive neuroscience. He worked on the neuroscience of language at NYU. He was then a research scientist at Honeywell Aerospace, working on machine learning for neurotechnology as well as aviation human factors. He was a Principal Research Scientist at Novartis in Basel, Switzerland, working on connected medical devices and machine learning in clinical applications. Prior to joining Google, he was a researcher at Twitter working on misinformation.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Currently I am very interested in the scientific and epistemic foundations of machine learning and why we believe what algorithms say at all. Is there a philosophy of science of machine learning? What kind of knowledge does machine learning produce? Is it reliable? Is it scientific? I am also very worried about structural inequality and what impacts the introduction of massive-scale algorithms has on our stratified society. So far the evidence indicates that, in general, algorithms are entrenching unjust social systems and hierarchies. Instead, can we use machine learning to help dismantle oppressive social systems?
Domain of Application
General machine learning (not domain specific)
Assistant Professor, University of California-San Diego, Department of Communication, Halicioglu Data Science Institute
Stuart Geiger was a Staff Ethnographer & Principal Investigator at the Berkeley Institute for Data Science at UC Berkeley, where he studied various topics concerning the infrastructures and institutions that support the production of knowledge. His Ph.D. research at the UC Berkeley School of Information investigated the role of automation in the governance and operation of Wikipedia and Twitter. He has studied topics including moderation and quality control processes, human-in-the-loop decision making, newcomer socialization, cooperation and conflict around automation, the roles of support staff and technicians, and bias, diversity, and inclusion. He uses ethnographic, historical, qualitative, quantitative, and computational methods in his research, which is grounded in the fields of Computer-Supported Cooperative Work, Science and Technology Studies, and communication and new media studies.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
I study how people design, develop, deploy, understand, negotiate, contest, maintain, and repair algorithmic systems within communities of knowledge production. Most of the communities I study — including Wikipedia and the scientific reproducibility / open science movement — have strong normative commitments to openness and transparency. I study how these communities are using (and not using) various technologies and practices around automation, including various forms of machine learning, collaboratively-curated training data sets, data-driven decision-making processes, human-in-the-loop mechanisms, documentation tools and practices, code and data repositories, auditing frameworks, containerization, and interactive notebooks.
Domain of Application
Information Search & Filtering, General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci), Education.
Senior Research Scientist, Google
Madeleine is a Senior Research Scientist at Google. Previously she led the AI on the Ground Initiative at Data & Society, where she and her team investigated the promises and risks of integrating AI technologies into society. Through human-centered and ethnographic research, AI on the Ground sheds light on the consequences of deploying AI systems beyond the research lab, examining who benefits, who is harmed, and who is accountable. The initiative’s work has focused on how organizations grapple with the challenges and opportunities of AI, from changing work practices and responsibilities to new ethics practices and forms of AI governance. As a researcher and anthropologist, Madeleine has worked to reframe debates about the ethical design, use, and governance of AI systems. She has conducted fieldwork across varied industries and communities, ranging from the Air Force, the driverless car industry, and commercial aviation to precision agriculture and emergency healthcare. Her research has been published and cited in scholarly journals as well as publications including The New York Times, Slate, The Guardian, Vice, and USA Today. She holds a PhD in Anthropology from Columbia University and an S.M. in Comparative Media Studies from MIT.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Employment/Hiring, Information Search & Filtering, General Machine Learning (not domain specific), Health/Medicine, Law/Policy, Scholarship (digital humanities, computational social sci), Agriculture
Director of Research, Distributed AI Research Institute
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
General Machine Learning; Scholarship; Social Media
Visiting Scientist, Cornell - AI Policy and Practice Project
David Robinson is a Visiting Scientist in the AI Policy and Practice Initiative, in Cornell's College of Computing and Information Science. Earlier in his career, he cofounded Upturn, an NGO that advances equity and justice in the design, governance, and use of digital technology.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
David studies the ways that algorithms, ethical commitments, and human identities influence and shape each other. He's currently working on a book about the ethics of the algorithm that matches American transplant patients with donated kidneys.
Domain of Application
Criminal Justice, Education, Employment/Hiring, Health/Medicine, Law/Policy, News/Journalism
PhD Candidate, School of Information; former AFOG co-organizer
Zoe Kahn is a PhD student at the UC Berkeley School of Information, where she collaborates with data scientists, computer scientists, and designers to understand how technologies impact people and society, with a particular interest in AI and ethics, algorithmic decision making, and responsible innovation. As a qualitative researcher, Zoe asks questions of people and data that surface rich and actionable insights. Zoe brings an interdisciplinary background to her work that blends sociology, technology, law, and policy. She received her B.A. summa cum laude in Sociology from New York University in 2014. She is a joint fellow of the Center for Technology, Society and Policy, the Center for Long-Term Cybersecurity, and the Algorithmic Fairness and Opacity Working Group at UC Berkeley.
Zoe’s current project with four MIMS students, Coordinated Entry System Research and Development for a Continuum of Care in Northern California, is jointly funded by the Center for Technology, Society and Policy, the Center for Long-Term Cybersecurity, and the Algorithmic Fairness and Opacity Working Group at UC Berkeley.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Housing
PhD Candidate, School of Information
Andrew Chong is a PhD student at the UC Berkeley School of Information, where his research focuses on how the use of algorithms influences market competition and outcomes. Previously, Andrew worked at the National Bureau of Economic Research examining the impact of behavioral interventions in healthcare and consumer finance. He also has experience developing and implementing pricing models for prescription health insurance (PicWell), and developing dashboards for city governments (with Accela and Civic Insight).
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
I am interested in the increasing role algorithms and firm-controlled marketplaces play in economic life, and their wider implications for fairness, efficiency and competition.
Domain of Application
General Machine Learning (not domain specific), Law/Policy, Scholarship (digital humanities, computational social sci), Online Markets, Algorithmic Pricing
PhD Candidate, School of Information
I'm a PhD student at the UC Berkeley School of Information, where I am advised by Dr. Jenna Burrell. My current research focuses on how violence is perpetuated through information infrastructures and algorithmic logics. I use critical race theory and Black feminist epistemology to understand the implications of technology for society.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
I'm interested in how algorithmic determinism influences the social lives of minority populations. This means deconstructing the notion of algorithms as "black boxes" and instead treating them as tools that carry particular biases, iterating on pre-existing structural and systemic processes within society.
Domain of Application
Critical Data Studies
PhD Candidate, School of Information
Liza is a PhD student at the UC Berkeley School of Information, advised by Dr. Niloufar Salehi. She is interested in human-computer interaction, social computing, virtual communities, and online harms. She is a joint CTSP/AFOG fellow for the 2021 calendar year. Previously, she graduated with a BA in Computer Science and Mathematics from Washington University in St. Louis.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
Criminal Justice; Social Media
PhD Student, School of Information
Tonya Nguyen is a Ph.D. student at the UC Berkeley School of Information. Her work centers on human-computer interaction (HCI), social computing, and human-centered AI. She builds computational social systems by collaborating with communities, using ethnography, and conducting controlled experiments. In the past, she worked on digital fabrication projects, built CSCW tools for online collaboration, and built online experiments to study team viability and fracture. She received her B.A. in Interdisciplinary Studies (Computer Science, Design Innovation, and Critical Theory) from UC Berkeley.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
My work centers on studying how technical systems and algorithmic processes, such as school assignment algorithms, recommender systems, and AI tools, can become fairer and more equitable. For instance, I am studying why school assignment algorithms, while theoretically promising, have failed to promote equity and racial integration in OUSD and SFUSD. My research in mutual aid also investigates the tension between scalability in technical systems and meeting underserved communities' needs via participatory design and ethnography.
Domain of Application
Education; Information Search & Filtering; Scholarship; Social Media
PhD Student, School of Information; AFOG co-organizer
Lauren Chambers is a Ph.D. student at the School of Information, where she studies the intersection of data, technology, and sociopolitical advocacy. Previously Lauren was the staff technologist at the ACLU of Massachusetts, where she explored government data in order to inform citizens and lawmakers about the effects of legislation and government on our civil liberties. Her current research explores the roles of 'public interest technologists' within civil society as they are shaping policy, reconfiguring service delivery, and transforming political campaigns.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Domain of Application
PhD Candidate in Philosophy, University of Zurich
Anna Boos is a PhD candidate in political philosophy at the University of Zurich, and is visiting Berkeley in Fall 2024. Her research focuses on the ethics and politics of automated decision-making and is part of the research project “Automated Decision-Making and the Foundations of Political Authority”, funded by the Swiss National Science Foundation. She also serves as a board member at Dezentrum, Think & Do Tank for Digitalization & Society.
PhD Student, UC Berkeley EECS
I am a PhD student at UC Berkeley studying Theoretical Computer Science (TCS), and I also plan to complete the PhD Designated Emphasis in Science, Technology, and Society studies (STS). My current research investigates how to support accountability in data-sharing systems, such as statistical data analysis and machine learning. I believe that tools from cryptography, and more specifically interactive proof systems, offer a promising avenue given their ability to reason about parties with disparate resources, power, and goals, and to expand the solution space in surprising ways. In this work, I also draw on the intellectual toolkit of feminist and anti-colonial STS to think rigorously about the societal work done by mathematical abstractions, what makes a mathematical abstraction "good," and which methods are useful for designing good abstractions.
AFOG co-founder; (former) Professor, School of Information
Jenna Burrell was a Professor in the School of Information at UC Berkeley and co-founder of AFOG. Her research focuses on how marginalized communities adapt digital technologies to meet their needs and to pursue their goals and ideals. Burrell is the author of Invisible Users: Youth in the Internet Cafes of Urban Ghana (The MIT Press) and is currently working on a second book about rural communities that host critical Internet infrastructure such as fiber optic cables and data centers. She earned a PhD in Sociology from the London School of Economics and a BA in Computer Science from Cornell University.
How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
I am concerned with how trends toward social classification via algorithms can lead to divergent consequences for majority and minority groups. How do these new mechanisms of classification impact the life circumstances of individuals or reshape their potential for social mobility? To what extent can problems of algorithmic opacity and fairness be addressed by technical solutions? What is the broader solution space for making algorithmic systems more just? How can people resist algorithmic domination?
Domain of Application
Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning, Social Media
Director, Artificial Intelligence Security Initiative; Co-Director, AI Policy Hub; Research Fellow, Center for Long-Term Cybersecurity
Jessica Newman is Director of CLTC’s AI Security Initiative, a hub for interdisciplinary research on the global security implications of artificial intelligence. She is also Co-Director of the AI Policy Hub, a UC Berkeley initiative advancing interdisciplinary research to anticipate and address AI policy opportunities.
Director, CITRIS Policy Lab; Associate Research Professor at the Goldman School of Public Policy; Co-Director, AI Policy Hub; Faculty Director, Berkeley Center for Law and Technology
Brandie Nonnecke, PhD, is Founding Director of the CITRIS Policy Lab, headquartered at UC Berkeley. She is an Associate Research Professor at the Goldman School of Public Policy (GSPP), where she directs the Tech Policy Initiative, a collaboration between CITRIS and GSPP to strengthen tech policy education, research, and impact. Brandie is the Director of Our Better Web, a program that supports empirical research, policy analysis, training, and engagement to address the sharp rise in online harms. She is a co-director of the Berkeley Center for Law and Technology at Berkeley Law, where she leads the Project on Artificial Intelligence, Platforms, and Society. She also co-directs the UC Berkeley AI Policy Hub, an interdisciplinary initiative training researchers to develop effective AI governance and policy frameworks.
Abigail Jacobs
Assistant Professor of Information and Complex Systems, School of Information and College of Literature, Science, and the Arts (dual appointment), University of Michigan
Amit Elazari Bar On
Director, Global Cybersecurity Policy, Intel Corporation
Amy Turner
Master’s Student, School of Information
Angela Okune
PhD Student, Anthropology, UC Irvine
Anne Jonas
Assistant Professor, College of Innovation & Technology at University of Michigan-Flint; PhD '21, School of Information
Benjamin Shestakofsky
Assistant Professor, Department of Sociology, University of Pennsylvania
Dan Sholler
Project Scientist, Technology Management Program, UC-Santa Barbara College of Engineering
Daniel Griffin
PhD '22, School of Information
Daniel Kluttz
Senior Program Manager, Microsoft
David Platzer
Research Fellow (2019-2020), Berkeley Center for New Media
Elif Sert
Research Affiliate, Berkman Klein Center for Internet & Society, Harvard University; Researcher, UC Berkeley Department of Sociology
Emanuel Moss
PhD Candidate, Department of Anthropology / CUNY Graduate Center
Jen Gong
Postdoctoral Scholar, Center for Clinical Informatics and Improvement Research (CLIIR), UCSF
Jeremy David Johnson
PIT-UN Postdoc (2021-22), School of Information
Jeremy Gordon
PhD '23, School of Information
Joshua Kroll
Assistant Professor of Computer Science at the Naval Postgraduate School
Kweku Opoku-Agyemang
Postdoctoral Research Fellow (2022), Center for Effective Global Action, Department of Agricultural and Resource Economics
Marc Faddoul
Director, AI Forensics; MIMS '19, School of Information
McKane Andrus
PhD Student, UW Human Centered Design & Engineering
Michelle R. Carney
UX Researcher, Machine Learning + AI, Google
Moritz Hardt
Director, Max Planck Institute for Intelligent Systems
Nitin Kohli
Staff Scientist at Berkeley Center for Effective Global Action; PhD '21, School of Information
Randi Heinrichs
Postdoc, Center for Digital Cultures at Leuphana University Lüneburg
Rebecca C. Fan
Visiting Scholar (2019), UC Berkeley Center for Science, Technology, Medicine, & Society (CSTMS)
Renata Barreto
JD / PhD '22, Jurisprudence and Social Policy - Berkeley Law
Roel I.J. Dobbe
Assistant Professor, Delft University of Technology
Sam Meyer
Product Manager, StreamSets
Sarah M. Brown
Data Science Postdoctoral Research Associate, Brown University
Shazeda Ahmed
Postdoctoral Research Fellow, UCLA; PhD '22, School of Information
Sofia Dewar
Research Lead, Asana; MIMS '19, School of Information
Thomas Krendl Gilbert
PhD Candidate, Machine Ethics and Epistemology
Our partners work with us to examine topics in fairness and opacity. If you are interested in becoming a partner, please contact us at afog@berkeley.edu.