Faculty Organizers

Jenna Burrell

Associate Professor, School of Information

Jenna Burrell is an Associate Professor in the School of Information at UC Berkeley. She has a PhD in Sociology from the London School of Economics. Before pursuing her PhD, she was an Application Concept Developer in the People and Practices Research Group at Intel Corporation. Broadly, her research is concerned with the new challenges and opportunities of digital connectivity among marginalized populations. Her most recent research topics include (1) fairness and transparency in algorithmic classification and (2) Internet connectivity issues in rural parts of the USA.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am concerned with how trends toward social classification via algorithms can lead to divergent consequences for majority and minority groups. How do these new mechanisms of classification impact the life circumstances of individuals or reshape their potential for social mobility? To what extent can problems of algorithmic opacity and fairness be addressed by technical solutions? What are the limits of a technical fix for unfairness? What other tools or methods are available for addressing opacity or discrimination in algorithmic classification?

Domain of Application: Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning

Deirdre K. Mulligan

Associate Professor, School of Information

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, and an affiliated faculty member of the new Hewlett-funded Berkeley Center for Long-Term Cybersecurity. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries conducted with UC Berkeley Law Prof. Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. Mulligan recently chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a research agenda. She is a member of the National Academy of Sciences Forum on Cyber Resilience. She is Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; a founding member of the standing committee for the AI 100 project, a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector to engage in censorship and surveillance in violation of international human rights standards. She is a Commissioner on the Oakland Privacy Advisory Commission. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.

Mulligan was the policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at UC Berkeley, Carnegie Mellon University, Cornell University, Stanford University, and Vanderbilt University, and a PI on the multi-institution NSF-funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications, the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security, and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? Values in design; governance of technology and governance through technology to support human rights/civil liberties; administrative law.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), Discrimination, Privacy, Cybersecurity, Regulation generally.

Faculty

Joshua Blumenstock

Assistant Professor, School of Information

Joshua Blumenstock is an Assistant Professor at the UC Berkeley School of Information and the Director of the Data-Intensive Development Lab. His research lies at the intersection of machine learning and development economics, and focuses on using novel data and methods to better understand the causes and consequences of global poverty. At Berkeley, Joshua teaches courses in machine learning and data-intensive development. Previously, Joshua was on the faculty at the University of Washington, where he founded and co-directed the Data Science and Analytics Lab and led the school’s Data for Social Good initiative. He has a Ph.D. in Information Science and an M.A. in Economics from UC Berkeley, and Bachelor’s degrees in Computer Science and Physics from Wesleyan University. He is a recipient of the Intel Faculty Early Career Honor, a Gates Millennium Grand Challenge award, and a Google Faculty Research Award, and is a former fellow of the Thomas J. Watson Foundation and the Harvard Institutes of Medicine.

Domain of Application: Credit, Fraud/Spam, Network Security, General Machine Learning (not domain specific), Economics.

Marion Fourcade

Professor, Sociology

I am a Professor of Sociology at UC Berkeley and an associate fellow of the Max Planck Sciences Po Center on Coping with Instability in Market Societies (MaxPo). A comparative sociologist by training and taste, I am interested in national variations in knowledge and practice. My first book, Economists and Societies (Princeton University Press, 2009), explored the distinctive character of the discipline and profession of economics in three countries. A second book, The Ordinal Society (with Kieran Healy), is under contract; it investigates new forms of social stratification and morality in the digital economy. Other recent research focuses on the valuation of nature in comparative perspective; the moral regulation of states; the comparative study of political organization (with Evan Schofer and Brian Lande); the microsociology of courtroom exchanges (with Roi Livne); the sociology of economics (with Etienne Ollion and Yann Algan, and with Rakesh Khurana); and the politics of wine classifications in France and the United States (with Rebecca Elliott and Olivier Jacquet).

Domain of Application: Credit, General Machine Learning (not domain specific), Health, Employment/Hiring.

Moritz Hardt

Assistant Professor, Electrical Engineering and Computer Science

My mission is to build theory and tools that make the practice of machine learning across science and industry more robust, reliable, and aligned with societal values.

Domain of Application: General Machine Learning

Sonia Katyal

Professor, School of Law

Professor Katyal joined the Berkeley Law faculty in fall 2015 from Fordham Law School, where she served as the associate dean for research and the Joseph M. McLaughlin Professor of Law.

Her scholarly work focuses on intellectual property, civil rights (including gender, race and sexuality) and technology. Her past projects have studied the relationship between copyright enforcement and informational privacy; the impact of artistic activism on brands and advertising; and the intersection between copyright law and gender with respect to fan-generated works. Katyal also works on issues relating to cultural property and art, with a special focus on new media and the role of museums in the United States and abroad. Her current projects focus on the intersection between internet access and civil/human rights, with a special focus on the right to information; algorithmic transparency and discrimination; and a variety of projects on the intersection between gender and the commons. As a member of the university-wide Haas LGBT Cluster, Professor Katyal also works on matters regarding law and sexuality. Current projects involve an article on technology, surveillance and gender, and another on family law’s governance of transgender parents. Professor Katyal’s recent publications include The Numerus Clausus of Sex, in the University of Chicago Law Review; Technoheritage, in the California Law Review; and Algorithmic Civil Rights, forthcoming in the Iowa Law Review.

Professor Katyal is the co-author of Property Outlaws (Yale University Press, 2010) (with Eduardo M. Peñalver), which studies the intersection between civil disobedience and innovation in property and intellectual property frameworks. Professor Katyal has won several awards for her work, including an honorable mention in the American Association of Law Schools Scholarly Papers Competition, a Yale Cybercrime Award, and a Dukeminier Award from the Williams Project at UCLA. She has published with a variety of law reviews, including the Yale Law Journal, the University of Pennsylvania Law Review, the Washington Law Review, the Texas Law Review, and the UCLA Law Review, in addition to a variety of other publications, including the New York Times, the Brooklyn Rail, the Washington Post, CNN, the Boston Globe’s Ideas section, the Los Angeles Times, Slate, FindLaw, and the National Law Journal. Katyal is also the first law professor to receive a grant through the Creative Capital/Warhol Foundation for her forthcoming book, Contrabrand, which studies the relationship between art, advertising, and trademark and copyright law.

In March 2016, Katyal was selected by U.S. Commerce Secretary Penny Pritzker to serve on the U.S. Commerce Department’s inaugural Digital Economy Board of Advisors. Katyal also serves as an Affiliate Scholar at Stanford Law’s Center for Internet and Society and is a founding advisor to the Women in Technology Law organization. She also serves on the Executive Committee for the Berkeley Center for New Media (BCNM) and on the Advisory Board for Media Studies at UC Berkeley.

Before entering academia, Professor Katyal was an associate specializing in intellectual property litigation in the San Francisco office of Covington & Burling. Professor Katyal also clerked for the Honorable Carlos Moreno (later a California Supreme Court Justice) in the Central District of California and the Honorable Dorothy Nelson in the U.S. Court of Appeals for the Ninth Circuit.

Courses taught include Property Law; Law and Sexuality; Advertising, Branding and the First Amendment; Law and Technology Writing Workshop; and Law, Innovation and Entrepreneurship (in 2019).

Domain of Application: Scholarship (digital humanities, computational social sci).

Shreeharsh Kelkar

Lecturer, Interdisciplinary Studies

I study computing infrastructures and their relationship to work, labor, and expertise.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My new project tries to understand the tensions in data science between domain expertise and machine learning; this is an issue that is salient to the question of opacity and interpretability.

Domain of Application: General Machine Learning (not domain specific), Health, Employment/Hiring, Education.

Arash Nourian

Faculty, EECS and School of Information

Domain of Application: Credit, Fraud/Spam, Network Security, General Machine Learning (not domain specific), Health.

Postdoctoral Scholars

Morgan G. Ames

Postdoctoral Scholar, Center for Science, Technology, Medicine & Society

Morgan G. Ames is a research fellow at the Center for Science, Technology, Medicine, and Society at the University of California, Berkeley. Morgan’s research explores the role of utopianism in the technology world, and the imaginary of the “technical child” as fertile ground for this utopianism. Based on eight years of archival and ethnographic research, she is finishing a book manuscript on One Laptop per Child that explores the motivations behind the project and the cultural politics of a model site in Paraguay.

Morgan was previously a postdoctoral researcher at the Intel Science and Technology Center for Social Computing at the University of California, Irvine, working with Paul Dourish. Morgan’s PhD is in communication (with a minor in anthropology) from Stanford, where her dissertation won the Nathan Maccoby Outstanding Dissertation Award in 2013. She also has a B.A. in computer science and M.S. in information science, both from the University of California, Berkeley. See http://bio.morganya.org for more.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? Machine learning, particularly deep neural networks, has become a subject of intense utopianism and dystopianism in the popular press. Alongside this rhetoric, scholars have been finding that these new machine learning techniques are not, and likely never will be, bias-free. I am interested in exploring both of these topics and how they interconnect.

Domain of Application: General Machine Learning (not domain specific).

Sarah Brown

Postdoctoral Scholar, Electrical Engineering and Computer Science

Sarah Brown is a Chancellor’s Postdoctoral Fellow in the Department of Electrical Engineering and Computer Science. Sarah’s research to date has focused on the design and analysis of machine learning methods in experimental psychology settings. This includes developing machine learning models and algorithms that reflect scientific thinking about the data, analyzing their limits in context, and developing context-appropriate performance measures. She is curious about how these data-provenance issues and analysis techniques translate into the study of fair machine learning.

Sarah received her BS, MS, and PhD from the Electrical and Computer Engineering Department at Northeastern University. Her graduate studies were supported by a Draper Laboratory Fellowship and a National Science Foundation Graduate Research Fellowship. Her dissertation, Machine Learning Methods for Computational Psychology, develops application-tailored learning solutions and a better understanding of how to interpret machine learning results in the context of studying how the brain creates affective experiences and mental pathologies.

Outside of the lab, Sarah is a passionate advocate for engaging underrepresented groups in STEM at all levels. She currently serves as treasurer for Women in Machine Learning, having previously served as finance and sponsorship chair and as a co-organizer of the WiML Workshop. She has also held a variety of leadership positions in the National Society of Black Engineers at both the local and national levels, including National Academic Excellence Chair.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in how the adaptations to and analyses of machine learning algorithms used to satisfy scientists’ data-driven requirements relate and translate to issues of fairness. I see parallels between the two, rooted in a greater dependence on data provenance than is often present in conversations in the machine learning community.

Domain of Application: General Machine Learning (not domain specific).

Rebecca C. Fan

Postdoctoral Scholar, UC Berkeley Institute for the Study of Societal Issues

Rebecca Fan is a social scientist with an interdisciplinary background (anthropology, international human rights law and politics, socio-legal studies). Prior to completing her PhD, she worked for a number of human rights organizations (e.g., Amnesty International) and contributed to advocacy work at regional and global forums. Her dissertation brings together fieldwork at the United Nations and participatory action research to investigate what she identifies as the epistemological struggle of governance, via regime analysis and institutional studies. Continuing to engage with global civil society, she currently serves as a contributing member of the Commission on Environmental, Economic and Social Policy (CEESP), one of the six commissions of the International Union for Conservation of Nature. When time permits, she plays Indonesian gamelan music and enjoys hiking and floral arrangement.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’ My interest in the subject arises from my concerns about a recent socio-technical phenomenon whose double-edged effects need to be better articulated and addressed: for example, 1) how it is simultaneously empowering for some and disempowering for others; 2) how we are getting rich information but poor data; or 3) how we tend to trust the machine to be objective, only to see human prejudices amplified by machines taught and designed by humans. Furthermore, algorithms often live in a black box that is likely to be proprietary. As such, it is difficult to monitor or evaluate them for accountability or fairness, and their opacity blinds us from seeing power asymmetries clearly.

These are some of the issues that occupy my thoughts and that will continue to shape the work in progress I am developing now.

Domain of Application: General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci).

Stuart Geiger

Postdoctoral Scholar, Berkeley Institute for Data Science

Stuart Geiger is an ethnographer and postdoctoral scholar at the Berkeley Institute for Data Science at UC Berkeley, where he studies various topics about the infrastructures and institutions that support the production of knowledge. His Ph.D. research at the UC Berkeley School of Information investigated the role of automation in the governance and operation of Wikipedia and Twitter. He has studied topics including moderation and quality control processes, human-in-the-loop decision making, newcomer socialization, cooperation and conflict around automation, the roles of support staff and technicians, and bias, diversity, and inclusion. He uses ethnographic, historical, qualitative, quantitative, and computational methods in his research, which is grounded in the fields of Computer-Supported Cooperative Work, Science and Technology Studies, and communication and new media studies.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study how people design, develop, deploy, understand, negotiate, contest, maintain, and repair algorithmic systems within communities of knowledge production. Most of the communities I study — including Wikipedia and the scientific reproducibility / open science movement — have strong normative commitments to openness and transparency. I study how these communities are using (and not using) various technologies and practices around automation, including various forms of machine learning, collaboratively-curated training data sets, data-driven decision-making processes, human-in-the-loop mechanisms, documentation tools and practices, code and data repositories, auditing frameworks, containerization, and interactive notebooks.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci), Education.

Daniel Kluttz

Postdoctoral Scholar, Algorithmic Fairness & Opacity Group, School of Information

Daniel N. Kluttz is a postdoctoral scholar in the Algorithmic Fairness and Opacity Group (AFOG) at UC Berkeley’s School of Information, where he helps develop research and policy recommendations on issues of algorithmic fairness, bias, transparency, interpretability, and accountability, especially in machine-learning and artificial-intelligence applications. Drawing from intellectual traditions in law and society, organizational theory, cultural sociology, economic sociology, and technology studies, his research is oriented around two broad lines of inquiry: 1) the formal and informal governance of economic and technological innovations, and 2) the organizational and legal environments centered around such innovations. He employs both quantitative and qualitative methods in his work, including longitudinal and multi-level modeling techniques, geospatial analysis, historical/archival methods, and in-depth interviews.

Daniel’s dissertation used the case of U.S. fossil-fuel development and the practice of high-volume hydraulic fracturing (“fracking”) to assess the sociopolitical and economic foundations of law and legal processes governing controversial technological innovations. His other projects have included studies of socio-legal institutions and social change across such settings as contemporary markets for personal data, classification and scoring systems, American legal education, and early American markets for literature.

Daniel received his PhD in sociology from UC Berkeley, his JD from the UNC-Chapel Hill School of Law, and his BA in sociology and psychology from UNC-Chapel Hill. Prior to obtaining his PhD, he practiced law in Raleigh, NC.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? With training in both law and sociology, my interests pertain to the formal and informal governance of algorithmically based systems and practices, as well as the people and organizations who operate within this domain. I am particularly interested in the legal conditions and organizational structures that drive how firms and professionals perceive and use decision-support tools/systems that are based on artificial intelligence and machine learning. For example, in a current project, I am investigating how companies, regulators, and consumer-rights organizations engage in, govern, and contest the collection and commodification of digitally sourced personal data. The aim is to shed new light on the often-opaque infrastructures on which these markets depend and the social construction of notions of privacy, consent, and fairness within this domain. I am also interested in how law (both formal and informal), firm structure, and professional identity differentially affect professionals’ attention to issues of transparency, interpretability, oversight, and contestability of machine-learning-based systems. Finally, I am interested in how these factors shape the ways in which companies and third parties assign value to the data used to train these systems.

Domain of Application: Credit, General Machine Learning (not domain specific), Law/Policy, Organizational Structures, Professional Practices.

Abigail Jacobs

Postdoctoral Scholar, Haas School of Business

Abigail Jacobs is a postdoctoral scholar in computational social science in the Haas School of Business, Management of Organizations group. Her current research is primarily focused on the social structure of the opioid epidemic, organizations, and measurement in social networks.

Abigail received her PhD in Computer Science from the University of Colorado Boulder, supported in part by the NSF Graduate Research Fellowship Program. Her work focused on structure in social networks, including interpretable machine learning models and developing novel observational social datasets to explore heterogeneity in social systems. During this time, she also spent time at Microsoft Research NYC, and in 2015 she served as an organizer for the Women in Machine Learning Workshop, a technical workshop co-located with NIPS. She now sits on the executive board of the Women in Machine Learning organization. Previously, she received a BA in Mathematical Methods in the Social Sciences and Mathematics from Northwestern University.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’ I am interested in two areas where computational social science overlaps strongly with fairness, accountability, and transparency: measurement of social processes (a fundamental goal of computational social science!) and governance.

First, studying social networks, platforms, organizations, or any large-scale social data requires understanding the boundaries of these systems and the larger ecosystem in which they reside. For example, we know that events and processes happening off-platform change on-platform behavior, and vice versa; the {design, norms, structure} embedded within these platforms change what happens on them, and that changes what we can measure about social behavior; and endogeneities between group attributes/identities and outcomes mean that any data about social behavior embeds these biases and can exacerbate them.

Second, algorithmic and designed, data-driven systems have become deeply entrenched in social, organizational and economic interactions. Some of these systems simply make explicit or exaggerate processes that were already, or are still jointly, performed by humans. Regardless, understanding how these systems work together, inherit biases, or compress social relationships suggests a new space of governance problems, cutting across technical, social, organizational, and ecosystem levels.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific), Health, Scholarship (digital humanities, computational social sci).

Joshua Kroll

Postdoctoral Scholar, School of Information

Joshua A. Kroll is a computer scientist studying the relationship between governance, public policy, and computer systems. Currently, Joshua is a Postdoctoral Research Scholar at the School of Information at the University of California at Berkeley. His research focuses on how technology fits within a human-driven, normative context and how it satisfies goals driven by ideals such as fairness, accountability, transparency, and ethics. He is most interested in the governance of automated decision-making systems, especially those using machine learning. His paper “Accountable Algorithms” in the University of Pennsylvania Law Review received the Future of Privacy Forum’s Privacy Papers for Policymakers Award in 2017.

Joshua’s previous work spans accountable algorithms, cryptography, software security, formal methods, Bitcoin, and the technical aspects of cybersecurity policy. He also spent two years working on cryptography and internet security at the web performance and security company Cloudflare. Joshua holds a PhD in computer science from Princeton University, where he received the National Science Foundation Graduate Research Fellowship in 2011.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I’m basically here to study these precise topics.

Domain of Application: Credit, Criminal Justice, Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning (not domain specific), Health, Employment/Hiring, Housing, Political/redistricting.

Brandie Nonnecke

Postdoctoral Scholar, Research & Development Manager, CITRIS & the Banatao Institute

Dr. Brandie Nonnecke is the Research & Development Manager for CITRIS at UC Berkeley and Program Director for CITRIS at UC Davis. Brandie researches the dynamic interconnections between law, policy, and emerging technologies. She studies the influence of non-binding, multi-stakeholder policy networks on stakeholder participation in internet governance and information and communication technology (ICT) policymaking. Her current research and publications can be found at nonnecke.com.

She investigates how ICTs can be used as tools to support civic participation, to improve governance and accountability, and to foster economic and social development. In this capacity, she designs and deploys participatory evaluation platforms that use statistical models and collaborative filtering to tap into collective intelligence and reveal novel insights, including the California Report Card, launched in collaboration with the Office of California Lt. Gov. Gavin Newsom, and the DevCAFE system, launched in Mexico, Uganda, and the Philippines to enable participatory evaluation of the effectiveness of development interventions.

Brandie received her Ph.D. in Mass Communications from The Pennsylvania State University. She is a Fellow at the World Economic Forum, where she serves on the Council on the Future of the Digital Economy and Society, and is chair of the Internet Society SF Chapter Working Group on Internet Governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I conduct research on the benefits and risks of algorithmic decision-making, including recommendations on how to better ensure fairness, accountability, and positive socioeconomic inclusion. This research is available at http://citris-uc.org/connected-communities/project/inclusive-ai-technology-policy-diverse-urban-future/ and through the World Economic Forum at https://www.weforum.org/agenda/2017/09/applying-ai-to-enable-an-equitable-digital-economy-and-society

Domain of Application: General Machine Learning (not domain specific), Policy and governance of AI.

Dan Sholler

Postdoctoral Scholar, rOpenSci at the Berkeley Institute for Data Science

I study the occupational, organizational, and institutional implications of technological change using qualitative, ethnographic techniques. For example, I studied the implementation of federally mandated electronic medical records in the United States healthcare industry and found that unwanted changes in the day-to-day activities of doctors fueled a national resistance movement, ultimately leading to the revision of federal technology policy. Currently, I am conducting a comparative study of the ongoing shifts toward open science in the ecology and astronomy disciplines to identify and explain the factors that may influence engagement and resistance with open science tools and communities.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in discussing and studying how the implementation of AI and other algorithmic applications might impact the day-to-day activities of workers and alter the structures of organizations. In particular, I would like to interrogate how AI-led changes might influence workers’ perceptions of what it means to be a member of an occupational or professional community and how the designers and implementers of algorithmic technologies consider these potential implications.

Domain of Application: Health, Scientific Research, Open Science (open source software, open data, open access).

Graduate Students

Shazeda Ahmed

PhD Candidate, School of Information

Shazeda is a third-year Ph.D. student at the I School. She has worked as a researcher for the Council on Foreign Relations, the Asia Society, the U.S. Naval War College, Citizen Lab, Ranking Digital Rights, and the Mercator Institute for China Studies. Her research focuses on China’s social credit system, information technology policy, and China’s role in setting norms of global Internet governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study China’s social credit system, which uses troves of Chinese citizens’ personal and behavioral data to assign them scores meant to reflect how “trustworthy,” law-abiding, and financially responsible they are. The algorithms used to calculate these scores are classified as either trade or state secrets, and to date it seems that score issuers cannot fully explain score breakdowns to users. There are plans to identify low scorers on public blacklists, which could discriminate against people who are unaware of how the system operates. Through my research I hope to discover how average users perceive and are navigating the system as it develops.

Domain of Application: Credit.

Michelle Carney

MIMS Candidate, School of Information

Michelle is a graduate student in UC Berkeley’s Master of Information Management and Systems program, where she studies the intersection of data science and user experience. Michelle also facilitates the Machine Learning and User Experience meetup in San Francisco, where she organizes professional tech talks and panels on the topic of human-centered machine learning.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in how to design for machine learning: how do we design in ways that collect the right data, empower users to understand what machine learning models are doing, and give users transparency into how algorithms decide on their recommendations and personalization? While I am interested in the academic research (and there’s a lot of really great stuff!), I am particularly interested in the application of transparent model design in the professional space and in engaging tech companies in ways that create best practices.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific).

Roel Dobbe

PhD Candidate, EECS/Berkeley Artificial Intelligence Research Lab

Since Fall 2013, I have been pursuing a PhD in Electrical Engineering & Computer Sciences at UC Berkeley, under the guidance of Professor Claire Tomlin in the Hybrid Systems Group. My main interests are in modernizing energy systems and other societal infrastructures and their decision making through the integration of control theory, machine learning, and optimization. With this comes an interest in understanding the social implications of automation technologies, and in integrating critical thinking and social science perspectives into the research, design, and integration of new technologies. More recently, I have been teaching about issues of social justice and how they relate to our work as research and engineering professionals.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I build algorithms that are used by operators to balance electric grids and that require a certain level of interpretability and performance. In industry, I have built tools to help explain predictions made by algorithms in decision-making tools, aimed at improving transparency, understanding, and trust for end users. Lastly, I am studying principles from value-sensitive design and responsible innovation and aim to translate these into concrete principles for the development of systems that rely on machine learning and artificial intelligence.

Domain of Application: General Machine Learning (not domain specific), Health, Energy.

Amit Elazari Bar On

PhD Candidate, School of Law

Amit is a doctoral law candidate at Berkeley Law and a Research Fellow at CTSP, Berkeley School of Information. She is the first Israeli LL.M. graduate to be admitted on a direct-track basis to the doctoral program at Berkeley or any other top U.S. doctoral program in law. She graduated summa cum laude from her LL.M. at IDC, Israel, following the submission of a research thesis in the field of intellectual property law and standard-form contracts. She holds an LL.B. and a B.A. in Business Administration (summa cum laude) from IDC, is admitted to practice law in Israel, and has worked at one of Israel’s leading law firms, GKH Law. Amit has been engaged in extensive academic work, including research, teaching, and editorial positions. Her research interests include patents, privacy, cyber law, copyright, and private ordering in technology law. Her work on intellectual property and cyber law has been published in the Canadian Intellectual Property Journal and presented at leading security and intellectual property conferences such as IPSC, DEF CON, and BSidesLV.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? The law, and specifically IP and anti-hacking laws, can entrench algorithmic opacity by stifling vital research and tinkering efforts, or it can enable transparency by constructing safe harbors that limit the enclosure and monopolization of algorithms. Similarly, private ordering mechanisms, whether tech-based or contract-based (EULAs and ToUs), operating in the shadow of the law, affecting millions but dictated by few, serve a key function in regulating the algorithmic landscape, including by limiting users’ and researchers’ access to the design and backbone of algorithms. In this respect, information security and the study of algorithms have much in common, and I hope to explore what lessons can be learned from cyber law.

More generally, the law could construct incentives that foster algorithmic transparency and even the use of AI for the greater good, promoting social justice. The challenge will be to create incentives that internalize the benefit of compliance with the law (either by introducing market-based incentives or safe harbors) without relying on stringent, costly enforcement. I hope to explore these questions and others in the course of the workshop.

Domain of Application: General Machine Learning, Law/policy

Thomas Krendl Gilbert

PhD Candidate, Machine Ethics and Epistemology

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am specifically interested in the potential forms of social autonomy and moral agency that are available to different classes of algorithms. Alongside this, I focus on the work that goes into training machine learning tools and how this compares to historical waves of automation and inequality.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific).

Daniel Griffin

PhD Candidate, School of Information

Daniel Griffin is a doctoral student at the School of Information at UC Berkeley. His research interests center on the intersections of information, values, and power, looking at freedom and control in information systems. He is a co-director of UC Berkeley’s Center for Technology, Society & Policy and a commissioner on the City of Berkeley’s Disaster and Fire Safety Commission. Prior to entering the doctoral program, he completed the Master of Information Management and Systems program, also at the School of Information. Before graduate school, he served as an intelligence analyst in the US Army. As an undergraduate, he studied philosophy at Whitworth University.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? In what some have called an age of disinformation, how, and with what effects, do people using search engines imagine and interact with the search engine algorithms? How do the teams of people at search engines seek to understand and satisfy the goals and behavior of people using their services? What sort of normative claims does, and possibly can, society make of the design of the search engine algorithms and services?

Domain of Application: Information Search & Filtering.

Randi Heinrichs

Visiting Student Researcher, Center for Science, Technology, Medicine & Society

Randi Heinrichs is a visiting student researcher at the CSTMS at UC Berkeley. She is working on her dissertation on a conception of algorithmic anonymity at Leuphana University Lüneburg in Germany, where she is affiliated with the Center for Digital Cultures. She received her M.A. in Culture, Arts and Media from Leuphana with a thesis titled “Whistleblowing and the dangerous Game of Truth”. Her doctoral research project, “(Re)Programming Regimes of Anonymity in Digital Culture”, examines the negotiations of anonymity at the intersections of technology, practices, and regulations. She is a member of the editorial boards of the open-access journals “ephemera” and “spheres: Journal for Digital Cultures”.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’ I am interested in the cultural politics of data handling, the ideologies behind imaginaries of the user, and the (re)configuring of “algorithmic anonymity”. To better grasp the opacity of algorithms and to understand the related interdependence of emerging power structures with specific forms of sociotechnical knowledge, values, and ideologies, my dissertation fieldwork focuses on the level of production, where anonymity is governed on a daily basis by “algorithmic actors” such as software developers, data scientists, designers, and marketers, and, in the logic of machine learning, by the algorithm itself. In my specific case, the “algorithmic actors” are working on neighborhood technologies. My project aims to elaborate on the ways in which anonymity is (re)configured in technological and social dimensions, with a specific focus on (data) neighborhoods. Furthermore, I ask how (algorithmic) decision-making about anonymity affects subjectivity, fairness, relations of equality and difference, as well as understandings of common and personal property.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci), Surveillance, Data Profiling.

Anne Jonas

PhD Candidate, School of Information

After previously working in program management at the Participatory Culture Foundation and the Barnard Center for Research on Women, I now study education, information systems, culture, and inequality here at the I School. I am a Fellow with the Center for Technology, Society, and Policy and a Research Grantee of the Center for Long-Term Cybersecurity on several collaborative projects.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? The use of algorithms in educational curriculum provision, assessment, evaluation, surveillance, and discipline. I am also working on a project related to “regional discrimination” that looks at how geographic markers are used to block people from certain websites and web-based services.

Domain of Application: Criminal Justice, Information Search & Filtering, Employment/Hiring, Education.

Nitin Kohli

PhD Candidate, School of Information

Nitin Kohli is a PhD student at UC Berkeley’s School of Information, working under Deirdre Mulligan. His research examines privacy, security, and fairness in algorithmic systems from technical and legal perspectives. On the technical side, Nitin employs theoretical and computational techniques to construct algorithmic mechanisms with such properties. On the legal side, Nitin explores institutional and organizational mechanisms to protect these values by examining the incentive structures and power dynamics that govern these environments. His work draws upon mathematics, statistics, computer science, economics, and law.

Prior to his PhD work, Nitin worked both as a data scientist in industry and as an academic. Within industry, Nitin developed machine learning and natural language processing algorithms to identify occurrences and locations of future risk in healthcare settings. Within academia, Nitin worked as an adjunct instructor and as a summer lecturer at UC Berkeley, teaching introductory and advanced courses in probability, statistics, and game theory. Nitin holds a Master’s degree in Information and Data Science from Berkeley’s School of Information and a Bachelor’s degree in Mathematics and Statistics, with departmental honors in statistics for his work in stochastic modeling and game theory.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My research interests are explicitly in the construction of algorithms that preserve certain human values, such as fairness and privacy. I’m also interested in legal and policy solutions that promote and incentivize transparency and fairness within algorithmic decision making.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), Employment/Hiring, Scholarship (digital humanities, computational social sci), Education.

Sam Meyer

MIMS Candidate, School of Information

Sam Meyer is a master’s student at the School of Information studying data visualization and product management. In the past, Sam worked as a software engineer in biotech.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My MIMS final project is focused on algorithmic transparency to a non-technical audience.

Domain of Application: General Machine Learning (not domain specific).

Angela Okune

PhD Student, Anthropology, UC Irvine

Angela is a doctoral student in the Anthropology Department at the University of California, Irvine (UCI) working on questions of expertise and the politics of knowledge production in technology and development in Africa. Angela is a recipient of a 2016 Graduate Research Fellowship from the National Science Foundation. From 2010 to 2015, as a co-founder of the research department at iHub, Nairobi’s innovation hub for the tech community, Angela provided strategic guidance for the growth of tech research in Kenya.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’ An unprecedented amount of digital information is collected, stored, and analyzed the world over to predict what people do, think, and buy. But what epistemological standpoints are assumed in the design and application of algorithmic technologies that structure everyday interactions, digital and otherwise? I am interested in how seemingly contradictory notions of scale, standardization and personalization are simultaneously leveraged in promises of algorithmic technologies and what the implementation of AI in various contexts across the African continent reveals.

Domain of Application: General Machine Learning (not domain specific), Scholarship (digital humanities, computational social science).

Benjamin Shestakofsky

PhD Candidate, Sociology

I am a PhD Candidate in the Department of Sociology at the University of California, Berkeley. My research centers on how digital technologies are affecting work and employment, organizations, and economic exchange.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in the human labor that supports machine-learning systems.

Domain of Application: Work and labor.