Provably Efficient Exploration for RL with Unsupervised Learning

NeurIPS 2020

Dec 06, 2020
We study how to use unsupervised learning for efficient exploration in reinforcement learning with rich observations generated from a small number of latent states. We present a novel algorithmic framework built upon two components: an unsupervised learning algorithm and a no-regret reinforcement learning algorithm. We show that our algorithm provably finds a near-optimal policy with sample complexity polynomial in the number of latent states, which is significantly smaller than the number of possible observations. Our result provides theoretical justification for the prevailing paradigm of using unsupervised learning for efficient exploration [Tang et al., 2017; Bellemare et al., 2016].

Speakers: Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang
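The abstract describes the framework only at a high level. The following is a minimal, self-contained Python sketch of the two-component idea, under assumptions of our own: k-means stands in for the unsupervised learning algorithm, and count-bonus Q-learning stands in for the no-regret RL algorithm. The environment (ToyRichObsEnv), the decoder interface, and all hyperparameters are illustrative and are not taken from the paper.

import numpy as np
from sklearn.cluster import KMeans

class LatentDecoder:
    """Unsupervised component: clusters rich observations into latent states.
    (Illustrative stand-in; the paper's unsupervised learner may differ.)"""
    def __init__(self, num_latent_states):
        self.kmeans = KMeans(n_clusters=num_latent_states, n_init=10)

    def fit(self, observations):
        self.kmeans.fit(observations)

    def decode(self, observation):
        # Map one high-dimensional observation to a latent-state index.
        return int(self.kmeans.predict(observation.reshape(1, -1))[0])

class ToyRichObsEnv:
    """Toy chain MDP whose latent state is only seen through noisy
    high-dimensional observations (a stand-in for 'rich observations')."""
    def __init__(self, num_latent=3, obs_dim=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.num_latent = num_latent
        # Fixed random embedding of each latent state into observation space.
        self.centers = self.rng.normal(size=(num_latent, obs_dim))
        self.state = 0

    def _observe(self):
        return self.centers[self.state] + 0.1 * self.rng.normal(size=self.centers.shape[1])

    def reset(self):
        self.state = 0
        return self._observe()

    def step(self, action):
        # Action 1 moves right along the chain, action 0 moves left;
        # reward 1 only at the far end of the chain.
        if action == 1:
            self.state = min(self.state + 1, self.num_latent - 1)
        else:
            self.state = max(self.state - 1, 0)
        reward = 1.0 if self.state == self.num_latent - 1 else 0.0
        return self._observe(), reward

def q_learning_with_bonus(env, decoder, num_states, num_actions,
                          episodes=200, horizon=20, c=1.0):
    """RL component: optimistic Q-learning over decoded latent states.
    The c / sqrt(count) bonus is a generic stand-in for the no-regret
    algorithm the framework assumes."""
    Q = np.full((num_states, num_actions), float(horizon))  # optimistic init
    counts = np.zeros((num_states, num_actions))
    for _ in range(episodes):
        s = decoder.decode(env.reset())
        for _ in range(horizon):
            a = int(np.argmax(Q[s]))
            obs, r = env.step(a)
            s_next = decoder.decode(obs)
            counts[s, a] += 1
            bonus = c / np.sqrt(counts[s, a])
            # Learning rate 1/N(s,a); optimism via the exploration bonus.
            Q[s, a] += (r + bonus + Q[s_next].max() - Q[s, a]) / counts[s, a]
            s = s_next
    return Q

# Phase 1: gather observations with a random policy and fit the decoder.
env = ToyRichObsEnv()
buffer = [env.reset()]
for _ in range(2000):
    obs, _ = env.step(int(np.random.randint(2)))
    buffer.append(obs)
decoder = LatentDecoder(num_latent_states=3)
decoder.fit(np.array(buffer))

# Phase 2: explore and learn over the small decoded state space.
Q = q_learning_with_bonus(env, decoder, num_states=3, num_actions=2)
print("Greedy policy per latent state:", Q.argmax(axis=1))

Note that the paper's framework may couple the two components more tightly than this one-shot fit-then-learn pipeline; the sketch only conveys why sample complexity can scale with the small number of latent states rather than the number of possible observations once observations are decoded.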
