People

AFOG consists of UC Berkeley faculty, postdocs, and graduate students and is housed at Berkeley’s School of Information.

Deirdre K. Mulligan

[Temporarily on Leave] Professor, School of Information

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, and an affiliated faculty on the new Hewlett-funded Berkeley Center for Long-Term Cybersecurity. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries, conducted with UC Berkeley Law Prof. Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. Mulligan recently chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a research agenda. She is a member of the National Academy of Sciences Forum on Cyber Resilience. She is Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; a founding member of the standing committee for the AI 100 project, a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector for censorship and surveillance in violation of international human rights standards. She is a Commissioner on the Oakland Privacy Advisory Commission. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.

Mulligan was the policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at UC Berkeley, Carnegie Mellon University, Cornell University, Stanford University, and Vanderbilt University, and a PI on the multi-institution NSF-funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications; the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security; and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

Values in design; governance of technology and governance through technology to support human rights/civil liberties; administrative law.

Domain of Application

Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), discrimination, privacy, cybersecurity, regulation generally.

Niloufar Salehi

[Temporarily on Leave] Assistant Professor, School of Information

Niloufar Salehi is an Assistant Professor in the School of Information at UC Berkeley. Her research interests are in social computing, technologically mediated collective action, digital labor, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier HCI venues, including CHI and CSCW. Through building computational social systems in collaboration with existing communities, controlled experiments, and ethnographic fieldwork, her research contributes to the design of alternative social configurations online.

Her current project looks at affect recognition used in automated hiring from a fairness and social justice perspective.

Domain of Application

employment/hiring, content filtering algorithms (e.g. YouTube, Facebook, Twitter)

Jessica Newman

Director, Artificial Intelligence Security Initiative; Co-Director, AI Policy Hub; Research Fellow, Center for Long-Term Cybersecurity

Jessica Newman is Director of CLTC’s AI Security Initiative, a hub for interdisciplinary research on the global security implications of artificial intelligence. She is also Co-Director of the AI Policy Hub, a UC Berkeley initiative advancing interdisciplinary research to anticipate and address AI policy opportunities.

Brandie Nonnecke

Director, CITRIS Policy Lab; Associate Research Professor at the Goldman School of Public Policy; Co-Director, AI Policy Hub; Faculty Director, Berkeley Center for Law and Technology

Brandie Nonnecke, PhD, is Founding Director of the CITRIS Policy Lab, headquartered at UC Berkeley. She is an Associate Research Professor at the Goldman School of Public Policy (GSPP), where she directs the Tech Policy Initiative, a collaboration between CITRIS and GSPP to strengthen tech policy education, research, and impact. Brandie is the Director of Our Better Web, a program that supports empirical research, policy analysis, training, and engagement to address the sharp rise of online harms. She is a co-director of the Berkeley Center for Law and Technology at Berkeley Law, where she leads the Project on Artificial Intelligence, Platforms, and Society. She also co-directs the UC Berkeley AI Policy Hub, an interdisciplinary initiative training researchers to develop effective AI governance and policy frameworks.

Jenna Burrell

Adjunct Professor, School of Information

Jenna Burrell is a Professor in the School of Information at UC Berkeley and co-director of AFOG. Her research focuses on how marginalized communities adapt digital technologies to meet their needs and to pursue their goals and ideals. Burrell is the author of Invisible Users: Youth in the Internet Cafes of Urban Ghana (The MIT Press) and is currently working on a second book about rural communities that host critical Internet infrastructure such as fiber optic cables and data centers. She earned a PhD in Sociology from the London School of Economics and a BA in Computer Science from Cornell University.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I am concerned with how trends toward social classification via algorithms can lead to divergent consequences for majority and minority groups. How do these new mechanisms of classification impact the life circumstances of individuals or reshape their potential for social mobility? To what extent can problems of algorithmic opacity and fairness be addressed by technical solutions? What is the broader solution space for making algorithmic systems more just? How can people resist algorithmic domination?

Domain of Application

Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning, Social Media

Joshua Blumenstock

Associate Professor, School of Information

Joshua Blumenstock is an Associate Professor at the UC Berkeley School of Information and the Director of the Data-Intensive Development Lab. His research lies at the intersection of machine learning and development economics, and focuses on using novel data and methods to better understand the causes and consequences of global poverty. At Berkeley, Joshua teaches courses in machine learning and data-intensive development. Previously, Joshua was on the faculty at the University of Washington, where he founded and co-directed the Data Science and Analytics Lab and led the school’s Data for Social Good initiative. He has a Ph.D. in Information Science and an M.A. in Economics from UC Berkeley, and Bachelor’s degrees in Computer Science and Physics from Wesleyan University. He is a recipient of the Intel Faculty Early Career Honor, a Gates Millennium Grand Challenge award, and a Google Faculty Research Award, and is a former fellow of the Thomas J. Watson Foundation and the Harvard Institutes of Medicine.

Domain of Application

Credit, Fraud/Spam, Network Security, General Machine Learning (not domain specific), Economics

Marion Fourcade

Professor, Sociology

I am a Professor of Sociology at UC Berkeley and an associate fellow of the Max Planck Sciences Po Center on Coping with Instability in Market Societies (MaxPo). A comparative sociologist by training and taste, I am interested in national variations in knowledge and practice. My first book, Economists and Societies (Princeton University Press, 2009), explored the distinctive character of the discipline and profession of economics in three countries. A second book, The Ordinal Society (with Kieran Healy), is under contract. This book investigates new forms of social stratification and morality in the digital economy. Other recent research focuses on the valuation of nature in comparative perspective; the moral regulation of states; the comparative study of political organization (with Evan Schofer and Brian Lande); the microsociology of courtroom exchanges (with Roi Livne); the sociology of economics (with Etienne Ollion and Yann Algan, and with Rakesh Khurana); and the politics of wine classifications in France and the United States (with Rebecca Elliott and Olivier Jacquet).

Domain of Application

Credit, General Machine Learning (not domain specific), Health, Employment/Hiring.

Moritz Hardt

Assistant Professor, Electrical Engineering and Computer Science

My mission is to build theory and tools that make the practice of machine learning across science and industry more robust, reliable, and aligned with societal values.

Domain of Application

General Machine Learning

Sonia Katyal

Distinguished Haas Professor, School of Law

Professor Sonia Katyal’s award-winning scholarly work focuses on the intersection of technology, intellectual property, and civil rights (including antidiscrimination, privacy, and freedom of speech).

Prof. Katyal’s current projects focus on the intersection between internet access and civil/human rights, with a special emphasis on the right to information; artificial intelligence and discrimination; trademarks and advertising; source code and the impact of trade secrecy; and a variety of projects on the intersection between gender and the commons. As a member of the university-wide Haas LGBT Cluster, Professor Katyal also works on matters regarding law, gender and sexuality.

Professor Katyal’s recent publications include The Numerus Clausus of Sex, in the University of Chicago Law Review; Technoheritage, in the California Law Review; Rethinking Private Accountability in the Age of Artificial Intelligence, in the UCLA Law Review; The Paradox of Source Code Secrecy, in the Cornell Law Review (forthcoming); Transparenthood in the Michigan Law Review (with Ilona Turner) (forthcoming); and Platform Law and the Brand Enterprise in the Berkeley Journal of Law and Technology (with Leah Chan Grinvald).

Katyal’s past projects have studied the relationship between informational privacy and copyright enforcement; the impact of advertising, branding and trademarks on freedom of expression; and issues relating to art and cultural property, focusing on new technologies and the role of museums in the United States and abroad.

Professor Katyal is the co-author of Property Outlaws (Yale University Press, 2010) (with Eduardo M. Peñalver), which studies the intersection between civil disobedience and innovation in property and intellectual property frameworks. Professor Katyal has won several awards for her work, including an honorable mention in the American Association of Law Schools Scholarly Papers Competition, a Yale Cybercrime Award, and twice received a Dukeminier Award from the Williams Project at UCLA for her writing on gender and sexuality.

She has previously published with a variety of law reviews, including the Yale Law Journal, the University of Pennsylvania Law Review, Washington Law Review, Texas Law Review, and the UCLA Law Review, in addition to a variety of other publications, including the New York Times, the Brooklyn Rail, Washington Post, CNN, Boston Globe’s Ideas section, Los Angeles Times, Slate, Findlaw, and the National Law Journal. Katyal is also the first law professor to receive a grant through The Creative Capital/ Warhol Foundation for her forthcoming book, Contrabrand, which studies the relationship between art, advertising and trademark and copyright law.

In March of 2016, Katyal was selected by U.S. Commerce Secretary Penny Pritzker to be part of the inaugural U.S. Commerce Department’s Digital Economy Board of Advisors. Katyal also serves as an Affiliate Scholar at Stanford Law’s Center for Internet and Society, and is a founding advisor to the Women in Technology Law organization. She also serves on the Executive Committee for the Berkeley Center for New Media (BCNM), on the Advisory Board for Media Studies at UC Berkeley, and on the Advisory Board of the CITRIS Policy Lab.

Before entering academia, Professor Katyal was an associate specializing in intellectual property litigation in the San Francisco office of Covington & Burling. Professor Katyal also clerked for the Honorable Carlos Moreno (later a California Supreme Court Justice) in the Central District of California and the Honorable Dorothy Nelson in the U.S. Court of Appeals for the Ninth Circuit.

Shreeharsh Kelkar

Lecturer, Interdisciplinary Studies

I study computing infrastructures and their relationship to work, labor, and expertise.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

My new project tries to understand the tensions in data science between domain expertise and machine learning; this is an issue that is salient to the question of opacity and interpretability.

Domain of Application

General Machine Learning (not domain specific), Health, Employment/Hiring, Education.

Morgan G. Ames

Assistant Adjunct Professor in the School of Information and Interim Associate Director of Research for the Center for Science, Technology, Medicine, and Society

Morgan G. Ames is an Assistant Adjunct Professor in the School of Information and Interim Associate Director of Research for the Center for Science, Technology, Medicine, and Society at the University of California, Berkeley. Morgan’s research explores the role of utopianism in the technology world, and the imaginary of the “technical child” as fertile ground for this utopianism. Based on eight years of archival and ethnographic research, she is finishing a book manuscript on One Laptop per Child which explores the motivations behind the project and the cultural politics of a model site in Paraguay.

Morgan was previously a postdoctoral researcher at the Intel Science and Technology Center for Social Computing at the University of California, Irvine, working with Paul Dourish. Morgan’s PhD is in communication (with a minor in anthropology) from Stanford, where her dissertation won the Nathan Maccoby Outstanding Dissertation Award in 2013. She also has a B.A. in computer science and M.S. in information science, both from the University of California, Berkeley. See http://bio.morganya.org for more.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

Machine learning techniques, particularly deep neural networks, have become subjects of intense utopianism and dystopianism in the popular press. Alongside this rhetoric, scholars have been finding that these new machine learning techniques are not, and likely never will be, bias-free. I am interested in exploring both of these topics and how they interconnect.

Domain of Application

General Machine Learning (not domain specific).

Michael Carl Tschantz

Senior Researcher, International Computer Science Institute

Michael Carl Tschantz received a Ph.D. from Carnegie Mellon University in 2012 and a Sc.B. from Brown University in 2005, both in Computer Science. Before becoming a researcher at the International Computer Science Institute in 2014, he did two years of postdoctoral research at the University of California, Berkeley. He uses the models of artificial intelligence and statistics to solve the problems of privacy and security. His interests also include experimental design, formal methods, and logics. His current research includes automating information flow experiments, circumventing censorship, and securing machine learning. His dissertation formalized and operationalized what it means to use information for a purpose.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

My prior work has looked at detecting discrimination in online advertising. My ongoing work is looking at how people understand mathematical models of discrimination.

Domain of Application

General machine learning (not domain specific), Advertising

Andrew Smart

Researcher, Google

Andrew Smart is a researcher at Google in the Trust & Safety organization, working on algorithmic fairness, opacity, and accountability. His research at Google is on internal ethical audits of machine learning systems, causality in machine learning, and understanding structural vulnerability in society. His background is in philosophy, anthropology, and cognitive neuroscience. He worked on the neuroscience of language at NYU. He was then a research scientist at Honeywell Aerospace working on machine learning for neurotechnology as well as aviation human factors. He was a Principal Research Scientist at Novartis in Basel, Switzerland, working on connected medical devices and machine learning in clinical applications. Prior to joining Google he was a researcher at Twitter working on misinformation.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

Currently I am very interested in the scientific and epistemic foundations of machine learning and why we believe what algorithms say at all. Is there a philosophy of science of machine learning? What kind of knowledge does machine learning produce? Is it reliable? Is it scientific? I am also very worried about structural inequality and what impacts the introduction of massive-scale algorithms has on our stratified society. So far the evidence indicates that, in general, algorithms are entrenching unjust social systems and hierarchies. Instead, can we use machine learning to help dismantle oppressive social systems?

Domain of Application

General machine learning (not domain specific)

Stuart Geiger

Staff Ethnographer and Principal Investigator, Berkeley Institute for Data Science

Stuart Geiger is a Staff Ethnographer & Principal Investigator at the Berkeley Institute for Data Science at UC Berkeley, where he studies various topics about the infrastructures and institutions that support the production of knowledge. His Ph.D. research at the UC Berkeley School of Information investigated the role of automation in the governance and operation of Wikipedia and Twitter. He has studied topics including moderation and quality control processes, human-in-the-loop decision making, newcomer socialization, cooperation and conflict around automation, the roles of support staff and technicians, and bias, diversity, and inclusion. He uses ethnographic, historical, qualitative, quantitative, and computational methods in his research, which is grounded in the fields of Computer-Supported Cooperative Work, Science and Technology Studies, and communication and new media studies.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I study how people design, develop, deploy, understand, negotiate, contest, maintain, and repair algorithmic systems within communities of knowledge production. Most of the communities I study — including Wikipedia and the scientific reproducibility / open science movement — have strong normative commitments to openness and transparency. I study how these communities are using (and not using) various technologies and practices around automation, including various forms of machine learning, collaboratively-curated training data sets, data-driven decision-making processes, human-in-the-loop mechanisms, documentation tools and practices, code and data repositories, auditing frameworks, containerization, and interactive notebooks.

Domain of Application

Information Search & Filtering, General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci), Education.

Madeleine Clare Elish

Senior Research Scientist, Google

Madeleine is a Senior Research Scientist at Google. Previously, she led the AI on the Ground Initiative at Data & Society, where she and her team investigated the promises and risks of integrating AI technologies into society. Through human-centered and ethnographic research, AI on the Ground shed light on the consequences of deploying AI systems beyond the research lab, examining who benefits, who is harmed, and who is accountable. The initiative’s work focused on how organizations grapple with the challenges and opportunities of AI, from changing work practices and responsibilities to new ethics practices and forms of AI governance.

As a researcher and anthropologist, Madeleine has worked to reframe debates about the ethical design, use, and governance of AI systems. She has conducted fieldwork across varied industries and communities, ranging from the Air Force, the driverless car industry, and commercial aviation to precision agriculture and emergency healthcare. Her research has been published and cited in scholarly journals as well as publications including The New York Times, Slate, The Guardian, Vice, and USA Today. She holds a PhD in Anthropology from Columbia University and an S.M. in Comparative Media Studies from MIT.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

Domain of Application

Employment/Hiring, Information Search & Filtering, General Machine Learning (not domain specific), Health/Medicine, Law/Policy, Scholarship (digital humanities, computational social sci), Agriculture

Alex Hanna

Sr. Research Scientist, Google

Domain of Application

General Machine Learning; Scholarship; Social Media

David Robinson

Visiting Scientist, Cornell - AI Policy and Practice Project

David Robinson is a Visiting Scientist in the AI Policy and Practice Initiative in Cornell’s College of Computing and Information Science. Earlier in his career, he co-founded Upturn, an NGO that advances equity and justice in the design, governance, and use of digital technology.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

David studies the ways that algorithms, ethical commitments, and human identities influence and shape each other. He's currently working on a book about the ethics of the algorithm that matches American transplant patients with donated kidneys.

Domain of Application

Criminal Justice, Education, Employment/Hiring, Health/Medicine, Law/Policy, News/Journalism

Rebecca C. Fan

Visiting Scholar, UC Berkeley Center for Science, Technology, Medicine, & Society (CSTMS)

Rebecca Fan is a social scientist with an interdisciplinary background (anthropology, international human rights law and politics, socio-legal studies). She is currently a visiting scholar at UC Berkeley’s Center for Science, Technology, Medicine, & Society (CSTMS). Prior to completing her PhD, she worked for a number of human rights organizations (e.g., Amnesty International) and contributed to advocacy work at regional and global forums. Her dissertation brings together fieldwork at the United Nations and participatory action research to investigate what she identifies as the epistemological struggle of governance, via regime analysis and institutional studies. Continuing her engagement with global civil society, she currently serves as a contributing member on the Commission on Environmental, Economic and Social Policy (CEESP), one of the six Commissions of the International Union for Conservation of Nature. When time permits, she plays Indonesian gamelan music and enjoys hiking and floral arrangements.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

My interest in the subject arises from my concerns about a recent socio-technical phenomenon with a double-edged-sword effect that needs to be better articulated and addressed: e.g., 1) how it is simultaneously empowering and disempowering for some but not others; 2) how we are actually getting rich information but poor data; or 3) how we tend to trust the machine to be objective, only to see how human prejudices can be amplified by machines taught and designed by humans. Furthermore, algorithms often live in a black box that is likely to be proprietary. As such, it is difficult to monitor or evaluate them for accountability or fairness. It also blinds us from seeing the struggle of power asymmetry clearly. These are some of the issues that occupy my thoughts, to name a few, and they will continue to shape the work-in-progress that I am developing now.

Domain of Application

General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci)

Kweku Opoku-Agyemang

Postdoctoral Research Fellow, Center for Effective Global Action, Department of Agricultural and Resource Economics

Kweku Opoku-Agyemang is an (honorary) postdoctoral research fellow with the Center for Effective Global Action at UC Berkeley. His research interests span development economics, industrial organization, research transparency, ethics, and technological impacts. He is an author of the book Encountering Poverty: Thinking and Acting in an Unequal World, published by the University of California Press in 2016. He was previously a Research Associate in Human-Computer Interaction and Social Computing at Cornell Tech.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

My research focuses on how social science research can become more transparent with the aid of computational tools, and on the challenges involved. I am also interested in the causes and consequences of algorithmic bias in both developed and developing countries, as well as the potential role of industrial organization in promoting algorithmic fairness in firms that focus on artificial intelligence.

Domain of Application

Credit, Criminal Justice, Information Search & Filtering, General Machine Learning, Health, Employment/Hiring, Scholarship, Education

David Platzer

Research Fellow, Berkeley Center for New Media

David Platzer is a recent graduate of the anthropology program at Johns Hopkins and is currently a Berggruen Institute Transformations of the Human research fellow at UC Berkeley’s Center for New Media. His dissertation research focused on neurodiversity employment initiatives in the tech industry, while his current research investigates the existential risk movement in its intersection with value alignment in AI development.

Jeremy David Johnson

PIT-UN Postdoc, School of Information

I’ve always considered myself a tech-focused person, so throughout my education, I’ve studied digital technologies to keep things interesting. I completed my MA and PhD in Communication Arts & Sciences at Penn State and most recently worked at University of the Pacific. I consider myself an interdisciplinary scholar with a wide range of interests in algorithms, digital platforms, gaming, and social justice. I’m a first-gen student, part of the federal McNair Scholars program to help underserved students earn graduate degrees. I’m excited to work with AFOG and continue my research in algorithmic rhetoric!

Personally, I’m a gamer of various sorts. I play a variety of video games (and sometimes stream on Twitch). I’ve also DM’d for D&D campaigns for the past few years and often play in other campaigns. Outdoors, I love hiking, biking, and soccer. I also love going for walks with my dog, Jessie, and my partner, Judi. I’ve lived in Florida, Colorado, Wisconsin, Illinois, Pennsylvania, Virginia, and California, so I have many thoughts on the regions of the US. I’m elated to work in the Bay Area, which I hope will be the best place I have lived!

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I study how algorithms create the conditions for our public discourse and facilitate or hamper social justice. I understand opacity, transparency, and fairness not just as ends to achieve, but also as discursive commonplaces for our responses to algorithmic world-making. I aim for algorithmic justice and ask how opacity/transparency, fairness, power, and agency can help us craft more just and equitable networked spaces and practices.

Domain of Application

Information Search & Filtering; News/Journalism; Social Media; Politics & Public Discourse

Zoe Kahn

PhD Student, School of Information

Zoe Kahn is a PhD student at the UC Berkeley School of Information, where she collaborates with data scientists, computer scientists, and designers to understand how technologies impact people and society, with a particular interest in AI and ethics, algorithmic decision making, and responsible innovation. As a qualitative researcher, Zoe asks questions of people and data that surface rich and actionable insights. Zoe brings an interdisciplinary background to her work that blends sociology, technology, law, and policy. She received her B.A. summa cum laude in Sociology from New York University in 2014. She is a joint fellow of the Center for Technology, Society and Policy; the Center for Long-Term Cybersecurity; and the Algorithmic Fairness and Opacity Working Group at UC Berkeley.

Zoe’s current project with four MIMS students, Coordinated Entry System Research and Development for a Continuum of Care in Northern California, is jointly funded by the Center for Technology, Society and Policy; the Center for Long-Term Cybersecurity; and the Algorithmic Fairness and Opacity Working Group at UC Berkeley.

Domain of Application

Housing

Shazeda Ahmed

PhD Candidate, School of Information

Shazeda is a third-year Ph.D. student at the I School. She has worked as a researcher for the Council on Foreign Relations, Asia Society, the U.S. Naval War College, Citizen Lab, Ranking Digital Rights, and the Mercator Institute for China Studies. Her research focuses on China’s social credit system, information technology policy, and China’s role in setting norms of global Internet governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I study China’s social credit system, which uses troves of Chinese citizens’ personal and behavioral data to assign them scores meant to reflect how “trustworthy,” law-abiding, and financially responsible they are. The algorithms used to calculate these scores are classified as either trade or state secrets, and to date it seems that score issuers cannot fully explain score breakdowns to users. There are plans to identify low scorers on public blacklists, which could discriminate against people who are unaware of how the system operates. Through my research I hope to discover how average users perceive and are navigating the system as it develops.

Domain of Application

Credit.

Marc Faddoul

Master’s Student, School of Information

After an MS in Data Science, I came to the I School to pursue transdisciplinary interests related to information technologies. My research focuses on computational propaganda and algorithmic fairness.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

YouTube has said it would recommend less conspiratorial content through its Autoplay algorithm. Can the company be held accountable for this, despite the opacity of its system? One of my projects is to measure whether this trend is actually changing. I am also in the process of publishing a paper on the limits and potential mitigations of the PSA, a software tool used for pre-trial risk assessment. Fairness and transparency are at the core of its value tussles.

Domain of Application

Criminal Justice, Information Search & Filtering, General Machine Learning

Thomas Krendl Gilbert

PhD Candidate, Machine Ethics and Epistemology

I am an interdisciplinary PhD candidate at UC Berkeley, affiliated with the Center for Human-Compatible AI. My prior training in philosophy, sociology, and political theory has led me to study the various technical and organizational dilemmas that emerge when experts use machine learning to aid decision-making. In my spare time I enjoy sailing and creative writing.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I am interested in how different algorithmic learning procedures (e.g. reinforcement learning) reframe classical ethical questions and recall the original problems of political economy, such as aggregating human values and preferences. This necessarily includes asking what we mean by “explainable” AI, what it means for machine learning to be “fair” when enmeshed with institutional practices, and how new forms of social autonomy are made possible through automation.

Domain of Application

Credit, Criminal Justice, General Machine Learning (not domain specific), Housing, Education, Measurement

Daniel Griffin

PhD Candidate, School of Information

Daniel Griffin is a doctoral student at the School of Information at UC Berkeley. His research interests center on intersections between information and values and power, looking at freedom and control in information systems. He is a co-director of UC Berkeley’s Center for Technology, Society & Policy and a commissioner on the City of Berkeley’s Disaster and Fire Safety Commission. Prior to entering the doctoral program, he completed the Master of Information Management and Systems program, also at the School of Information. Before graduate school he served as an intelligence analyst in the US Army. As an undergraduate, he studied philosophy at Whitworth University.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

In what some have called an age of disinformation, how, and with what effects, do people using search engines imagine and interact with the search engine algorithms? How do the teams of people at search engines seek to understand and satisfy the goals and behavior of people using their services? What sort of normative claims does, and possibly can, society make of the design of the search engine algorithms and services?

Domain of Application

Information Search & Filtering.

Anne Jonas

PhD Candidate, School of Information

After previously working in program management at the Participatory Culture Foundation and the Barnard Center for Research on Women, I now study education, information systems, culture, and inequality here at the I School. I am a Fellow with the Center for Technology, Society, and Policy and a Research Grantee of the Center for Long-Term Cybersecurity on several collaborative projects.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I study the use of algorithms in educational curriculum provision, assessment, evaluation, surveillance, and discipline. I am also working on a project related to “regional discrimination” that looks at how geographic markers are used to block people from certain websites and web-based services.

Domain of Application

Criminal Justice, Information Search & Filtering, Employment/Hiring, Education.

Nitin Kohli

PhD Candidate, School of Information

Nitin Kohli is a PhD student at UC Berkeley’s School of Information, working under Deirdre Mulligan. His research examines privacy, security, and fairness in algorithmic systems from technical and legal perspectives. On the technical side, Nitin employs theoretical and computational techniques to construct algorithmic mechanisms with such properties. On the legal side, Nitin explores institutional and organizational mechanisms to protect these values by examining the incentive structures and power dynamics that govern these environments. His work draws upon mathematics, statistics, computer science, economics, and law. Prior to his PhD work, Nitin worked both as a data scientist in industry and as an academic. Within industry, Nitin developed machine learning and natural language processing algorithms to identify occurrences and locations of future risk in healthcare settings. Within academia, Nitin worked as an adjunct instructor and as a summer lecturer at UC Berkeley, teaching introductory and advanced courses in probability, statistics, and game theory. Nitin holds a Master’s degree in Information and Data Science from Berkeley’s School of Information and a Bachelor’s degree in Mathematics and Statistics, with departmental honors in statistics for his work in stochastic modeling and game theory.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

My research interests are explicitly in the construction of algorithms that preserve certain human values, such as fairness and privacy. I am also interested in legal and policy solutions that promote and incentivize transparency and fairness within algorithmic decision making.

Domain of Application

Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), Employment/Hiring, Scholarship (digital humanities, computational social sci), Education.

Andrew Chong

PhD Student, School of Information

Andrew Chong is a PhD student at the UC Berkeley School of Information, where his research focuses on how the use of algorithms influences market competition and outcomes. Previously, Andrew worked at the National Bureau of Economic Research examining the impact of behavioral interventions in healthcare and consumer finance. He also has experience developing and implementing pricing models for prescription health insurance (PicWell), and developing dashboards for city governments (with Accela and Civic Insight).

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I am interested in the increasing role algorithms and firm-controlled marketplaces play in economic life, and their wider implications for fairness, efficiency and competition.

Domain of Application

General Machine Learning (not domain specific), Law/Policy, Scholarship (digital humanities, computational social sci), Online Markets, Algorithmic Pricing

Ji Su Yoo

PhD Student, School of Information

This person has not submitted a bio.
Please stay tuned for any update.

Renata Barreto

JD / PhD Candidate, Jurisprudence and Social Policy - Berkeley Law

Renata is a JD / PhD candidate at Berkeley, where her dissertation focuses on the impact of SESTA / FOSTA on content moderation. She is trained in STS and computational social science. Renata has interned at Twitter, Facebook, and the Center on Privacy and Technology at Georgetown Law.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

Broadly, my research agenda examines the harms of machine learning models on marginalized communities. My work explores how machine learning exacerbates human biases in a variety of applications, such as computational genetics, facial recognition in passport kiosks, dating apps, and hate speech detection. I am also interested in the surveillance implications of machine learning technologies, specifically as they pertain to immigrant and BIPOC communities.

Domain of Application

Health/Medicine; Law/Policy; Scholarship; Social Media; Immigration

Seyi Olojo

PhD Student, School of Information

I'm a PhD student at the UC Berkeley School of Information, where I am advised by Dr. Jenna Burrell. My current research focuses on how violence is perpetuated through information infrastructures and algorithmic logics. I use critical race theory and black feminist epistemology to understand the implications of technology for society.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I'm interested in how algorithmic determinism influences the social lives of minority populations. This means deconstructing the notion of algorithms as "black boxes" and instead treating them as tools that hold particular biases and iterate on pre-existing structural and systemic processes within society.

Domain of Application

Critical Data Studies

Liza Gak

PhD student, School of Information

Liza is a PhD student at the UC Berkeley School of Information, advised by Dr. Niloufar Salehi. She is interested in human-computer interaction, social computing, virtual communities, and online harms. She is a joint CTSP/AFOG fellow for the 2021 calendar year. Previously, she graduated with a BA in Computer Science and Mathematics from Washington University in St. Louis.

Domain of Application

Criminal Justice; Social Media

Tonya Nguyen

PhD Student, School of Information

Tonya Nguyen is a Ph.D. student at the UC Berkeley School of Information. Her work centers on human-computer interaction (HCI), social computing, and human-centered AI. She builds computational social systems by collaborating with communities, using ethnography, and conducting controlled experiments. In the past, she worked on digital fabrication projects, built CSCW tools for online collaboration, and built online experiments to study team viability and fracture. She received her B.A. in Interdisciplinary Studies (Computer Science, Design Innovation, and Critical Theory) from UC Berkeley.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

My work centers on studying how technical systems and algorithmic processes, such as school assignment algorithms, recommender systems, and AI tools, can become fairer and more equitable. For instance, I am studying why school assignment algorithms, while theoretically promising, have failed to promote equity and racial integration in OUSD and SFUSD. My research in mutual aid also investigates the tension between scalability in technical systems and meeting underserved communities' needs via participatory design and ethnography.

Domain of Application

Education; Information Search & Filtering; Scholarship; Social Media

Jeremy Gordon

PhD student, School of Information

Jeremy Gordon is a PhD student at Berkeley's School of Information where he conducts research at the BioSense lab and is advised by John Chuang. He is a computational cognitive scientist investigating the role of embodied simulation, imagination, and prospection as part of decision-making processes across multiple timescales. He uses emerging technologies like virtual reality and biosensing devices to investigate human behavior in the context of uncertainty, as well as to lend insight into critical ethical questions about the use and privacy implications of such technologies. Jeremy is also a full stack developer and product designer, and has extensive experience working with NGOs, social enterprises and government agencies in emerging markets. Before returning to the academic world, Jeremy was based in Nairobi where he founded Echo Mobile, a mobile messaging platform allowing businesses and organizations to communicate with and better understand the people they work with. Jeremy graduated from Stanford University where he received dual degrees in Mechanical Engineering and Political Science.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

Though my work often targets questions typical of cognitive science and experimental psychology, I engage with critical ethical questions that are raised inherently by the use of generative machine learning methods, and models of human decision-making in general. For example, in a recent paper to be presented at CHI 2021, I explored the mismatch between individuals' assumptions about the capabilities and dynamics of predictive surveillance technologies, and their actual function in a lab-based experiment. I believe that academia has a responsibility to apply a balancing force to the strong and rarely social-values-aligned incentives inherent to technology development in the private sector.

Domain of Application

Education

Cedric Whitney

PhD Candidate, School of Information

This person has not submitted a bio.
Please stay tuned for any update.

McKane Andrus

Research Associate, Partnership on AI

Benjamin Shestakofsky

Assistant Professor, Department of Sociology, University of Pennsylvania

Elif Sert

Research Affiliate, Berkman Klein Center for Internet & Society, Harvard University; Researcher, UC Berkeley Department of Sociology

Sam Meyer

Product Manager, StreamSets

Daniel Kluttz

Senior Program Manager, Microsoft

Abigail Jacobs

Assistant Professor of Information and Complex Systems, School of Information and College of Literature, Science, and the Arts (dual appointment), University of Michigan

Randi Heinrichs

PhD student, Leuphana University Lüneburg, Germany

Jen Gong

Postdoctoral Scholar, Center for Clinical Informatics and Improvement Research (CLIIR), UCSF

Roel I.J. Dobbe

Assistant Professor, Delft University of Technology

Michelle R. Carney

UX Researcher, Machine Learning + AI, Google

Sarah M. Brown

Data Science Postdoctoral Research Associate, Brown University

Amit Elazari Bar On

Director, Global Cybersecurity Policy, Intel Corporation

Amy Turner

Master’s Student, School of Information

Angela Okune

PhD Student, Anthropology, UC Irvine

Emanuel Moss

PhD Candidate, Department of Anthropology / CUNY Graduate Center

Sofia Dewar

Master’s Student, School of Information

Dan Sholler

Project Scientist, Technology Management Program, UC-Santa Barbara College of Engineering

Joshua Kroll

Assistant Professor of Computer Science at the Naval Postgraduate School

Our partners work with us to examine topics in fairness and opacity. If you are interested in becoming a partner, please contact us at afog@berkeley.edu.