One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL

NeurIPS 2020

Dec 06, 2020
While reinforcement learning algorithms can learn effective policies for complex tasks, these policies are often brittle to even minor task variations, especially when variations are not explicitly provided during training. One natural approach to this problem is to train agents with manually specified variation in the training task or environment. However, this may be infeasible in practical situations, either because making perturbations is not possible, or because it is unclear how to choose suitable perturbation strategies without sacrificing performance. The key insight of this work is that learning diverse behaviors for accomplishing a task can directly lead to behavior that generalizes to varying environments, without needing to perform explicit perturbations during training. By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations by abandoning solutions that are no longer effective and adopting those that are. We theoretically characterize a robustness set of environments that arises from our algorithm and empirically find that our diversity-driven approach can extrapolate to various changes in the environment and task.

Speakers: Saurabh Kumar, Aviral Kumar, Sergey Levine, Chelsea Finn
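The few-shot extrapolation step described above — abandoning solutions that no longer work and adopting those that do — can be sketched as a simple selection procedure. This is not the paper's implementation; it is a minimal illustration assuming a set of pre-trained policies (e.g., latent-conditioned) and an `evaluate` callable that runs one episode in the perturbed environment, both of which are placeholder names.

```python
def few_shot_select(policies, evaluate, episodes=3):
    """Pick the trained solution with the highest average return
    in a new (possibly perturbed) environment.

    `policies` is a list of policy objects; `evaluate(policy)` runs one
    episode with that policy and returns its episode return. Both are
    stand-ins for a real RL setup.
    """
    best_policy, best_return = None, float("-inf")
    for policy in policies:
        avg = sum(evaluate(policy) for _ in range(episodes)) / episodes
        if avg > best_return:
            best_policy, best_return = policy, avg
    return best_policy, best_return
```

For example, if a "walk around the obstacle" solution still succeeds after the environment changes while the originally optimal "walk straight" solution fails, the selection naturally switches to the former using only a few evaluation episodes.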
