The Plenary Lecture of the 57th Annual Allerton Conference on Communications, Control, and Computing will be given on the afternoon of Thursday, September 26, 2019, by Professor Benjamin Van Roy of the Department of Electrical Engineering at Stanford University.
Title: Making Reinforcement Learning Data-Efficient
Abstract: Applied work in reinforcement learning has focused on simulated environments that allow an agent to gather enormous quantities of data. Despite these successes, when data must be gathered in real time, the algorithms in current use can take too long to gain competence, even in very simple environments. One critical issue is how an agent explores: common approaches to exploration are highly inefficient. I will discuss how this can be addressed through uncertainty representation and judicious probing. I will also discuss important directions for further work on exploration, learning from rich observations, and hierarchical representations, each of which may be essential to making reinforcement learning data-efficient.
Biography: Benjamin Van Roy is a Professor at Stanford University, where he has served on the faculty since 1998. His research focuses on understanding how an agent interacting with a poorly understood environment can learn to make effective decisions. Beyond academia, he leads a DeepMind Research team in Mountain View. He is a Fellow of INFORMS and IEEE, and in addition to those communities, he is a regular participant in ICML, NeurIPS, and RLDM. He has served on the editorial boards of Machine Learning; Mathematics of Operations Research, for which he co-edits the Learning Theory Area; Operations Research, for which he edited the Financial Engineering Area; and the INFORMS Journal on Optimization.