Accelerating Online Reinforcement Learning with Offline Datasets

Jun 16, 2020
Abstract: Reinforcement learning (RL) provides an appealing formalism for learning control policies from experience. However, the classic active formulation of RL necessitates a lengthy active exploration process for each behavior, making it difficult to apply in real-world settings such as robotic control. If we can instead allow RL algorithms to effectively use previously collected data to aid the online learning process, such applications could be made substantially more practical: the prior data would provide a starting point that mitigates challenges due to exploration and sample complexity, while the online training enables the agent to perfect the desired skill. Such prior data could either constitute expert demonstrations or, more generally, sub-optimal prior data that illustrates potentially useful transitions. But it remains difficult to train a policy with potentially sub-optimal offline data and improve it further with online RL. In this paper we systematically analyze why this problem is so challenging, and propose an algorithm that combines sample-efficient dynamic programming with maximum likelihood policy updates, providing a simple and effective framework that is able to leverage large amounts of offline data and then quickly perform online fine-tuning of RL policies. We show that our method, advantage weighted actor critic (AWAC), enables rapid learning of skills with a combination of prior demonstration data and online experience.

Authors: Ashvin Nair, Murtaza Dalal, Abhishek Gupta, Sergey Levine (UC Berkeley)
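
The abstract's core recipe pairs a critic trained by sample-efficient dynamic programming with an advantage-weighted maximum-likelihood actor update. The following is a minimal sketch of what such an actor update might look like, assuming hypothetical policy and q_function interfaces and an assumed temperature hyperparameter beta; it is an illustration of the general technique, not the authors' reference implementation.

import torch

def awac_actor_loss(policy, q_function, states, actions, beta=1.0):
    # In AWAC-style methods the Q-function is trained separately with
    # standard TD backups (the "sample-efficient dynamic programming"
    # the abstract mentions); here it only scores actions.
    with torch.no_grad():
        # Advantage of the dataset action relative to the policy's own action.
        baseline = q_function(states, policy.sample(states))
        advantage = q_function(states, actions) - baseline
        # Exponentiated advantages serve as per-sample regression weights;
        # beta is an assumed temperature hyperparameter.
        weights = torch.exp(advantage / beta)
    # Maximum-likelihood ("weighted regression") update: imitate dataset
    # actions in proportion to how much better they are than the policy.
    log_prob = policy.log_prob(states, actions)
    return -(weights * log_prob).mean()

One appeal of this weighted-regression form is that the same loss applies uniformly to offline data and freshly collected online experience, which matches the offline-pretraining-then-online-fine-tuning workflow the abstract describes.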
