Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

NeurIPS 2020


Dec 06, 2020
Multi-task reinforcement learning (RL) aims to simultaneously learn policies for solving many tasks. Several prior works have found that relabeling past experience with different reward functions can improve sample efficiency. Relabeling methods typically ask: if, in hindsight, we assume that our experience was optimal for some task, for what task was it optimal? In this paper, we show that hindsight relabeling is inverse RL, an observation that suggests that we can use inverse RL in tandem with RL algorithms to efficiently solve many tasks. We use this idea to generalize goal-relabeling techniques from prior work to arbitrary classes of tasks. Our experiments confirm that relabeling data using inverse RL accelerates learning in general multi-task settings, including goal-reaching, domains with discrete sets of rewards, and those with linear reward functions. Speakers: Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov, Young Geng
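The core idea — relabeling experience with the task for which it was, in hindsight, most nearly optimal — can be illustrated with a small sketch. This is not the authors' implementation: the reward functions, trajectory format, and `relabel_task` helper below are hypothetical, and for simplicity the MaxEnt-IRL posterior is approximated by a softmax over trajectory returns (the full posterior would also subtract each task's log partition function).

```python
import numpy as np

def relabel_task(trajectory, reward_fns, rng):
    """Sample a task index with probability proportional to
    exp(return of the trajectory under that task's reward).

    This approximates the MaxEnt inverse-RL posterior over tasks;
    a per-task log-partition correction is omitted for brevity.
    """
    returns = np.array([sum(r(s, a) for s, a in trajectory)
                        for r in reward_fns])
    logits = returns - returns.max()  # shift for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(reward_fns), p=probs), probs

# Toy example: two goal-reaching tasks, reward = negative distance to goal.
goals = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
reward_fns = [lambda s, a, g=g: -np.linalg.norm(s - g) for g in goals]

# A trajectory that happens to approach the second goal (actions unused here).
trajectory = [(np.array([4.0, 4.5]), None), (np.array([4.8, 5.1]), None)]

rng = np.random.default_rng(0)
task, probs = relabel_task(trajectory, reward_fns, rng)
# The trajectory is relabeled as (approximately optimal) experience for
# task 1, and can then be reused to train that task's policy.
```

In the goal-reaching special case this recovers hindsight goal relabeling; the same posterior applies unchanged to discrete reward sets or linear reward families, which is the generalization the paper's experiments evaluate.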
