Political Economy of Reinforcement Learning (PERLS) Workshop

Neural Information Processing Systems (NeurIPS)

December 2021


Workshop Organizers

Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is a recipient of the IJCAI Computers and Thought Award and from 2012 to 2014 held the Chaire Blaise Pascal in Paris. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1,500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with an emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.
Thomas Krendl Gilbert is an interdisciplinary Ph.D. candidate in Machine Ethics and Epistemology at UC Berkeley, supported by the Center for Human-Compatible AI, the Simons Institute, and the Center for Long-Term Cybersecurity. His interest in the societal implications of RL systems grows out of his affiliation with the Simons program on the Theory of Reinforcement Learning held during fall 2020. Tom's research examines how to ensure that the objectives we optimize for in RL are aligned with normative social, political, and economic goals, and how notions of fairness, justice, equality, and the rule of law can be prioritized in the design of our objectives and objective functions. His recent work investigates how specific algorithmic learning procedures (such as RL) reframe classical ethical questions and recall the foundations of democratic political philosophy, namely the significance of popular sovereignty and dissent for resolving normative uncertainty and modeling human preferences.
Tom Zick earned her PhD from UC Berkeley and is currently pursuing her JD at Harvard. Her research bridges AI ethics and law, with a focus on how to craft safe and equitable policy surrounding the adoption of AI in high-stakes domains. In the past, she has worked as a data scientist at the Berkeley Center for Law and Technology, evaluating the capacity of regulations to promote open government data. She has also collaborated with graduate students across social science and engineering to advocate for pedagogy reform focused on infusing social context into technical coursework. Outside of academia, Tom has crafted digital policy for the City of Boston as a fellow for the Mayor's Office for New Urban Mechanics and helped early-stage startups develop responsible AI frameworks. Her current research centers on near-term policy concerns surrounding reinforcement learning.
Aaron Snoswell is a research fellow in computational law at the Australian Research Council Centre of Excellence for Automated Decision-Making and Society. With a background in cross-disciplinary mechatronic engineering, Aaron's Ph.D. research developed new theory and algorithms for Inverse Reinforcement Learning in the maximum conditional entropy and multiple-intent settings. His ongoing work investigates technical measures for achieving value alignment in autonomous decision-making systems, as well as legal-theoretic models for AI accountability.
Michael Dennis is a fifth-year graduate student at the Center for Human-Compatible AI. With a background in theoretical computer science, he is working to close the gap between decision-theoretic and game-theoretic recommendations and the current state-of-the-art approaches to robust RL and multi-agent RL. The overall aim of this work is to ensure that our systems behave in a way that is robustly beneficial. In the single-agent setting, this means making decisions and managing risk in the way the designer intends. In the multi-agent setting, it means ensuring that the concerns of the designer and those of others in society are fairly and justly negotiated to the benefit of all involved.

Volunteers

We are indebted to our 2021 workshop volunteers!

Deborah Morgan is a PhD student in the Centre for Accountable, Responsible and Transparent AI in the Department of Computer Science at the University of Bath, UK. She holds an LLB and previously practiced as a corporate projects lawyer in the UK. She has also worked in industry as a communications consultant and in education. She holds an MSt in Education from the University of Cambridge, where her research explored feedback methods in narrative writing. Her current PhD research explores the legal and societal challenges of regulating AI systems, in particular the use of experimental legal methods and regulatory sandboxes.
Blake Elias is a researcher at the New England Complex Systems Institute, working under Yaneer Bar-Yam. His current research applies game theory and control theory to inform optimal pandemic response. His broader interests include the application of AI to problems of coordination, governance, and public policy. Previously, he was an AI Resident at Microsoft Research, working on neural-symbolic computation and human-AI collaboration. Before that, he completed his SB and MEng degrees in Electrical Engineering and Computer Science at MIT, where he worked in the Media Lab and the Synthetic Biology Center.

Reviewers

In addition to the workshop organizers, our workshop reviewer committee includes the following members: Archie Chapman, Daniela Cialfi, Sarah Dean, Vektor Dewanto, Henry Fraser, Cameron Gordon, Xiao Guo, Nathan Lambert, and Abdul Obeid.