[Uber & Georgia Tech] Estimating Q(s, s') with Deep Deterministic Dynamics Gradients - ICML 2020

Jul 15, 2020
Details
"Estimating Q (s, s') with Deep Deterministic Dynamics Gradients" is research conducted by Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, and Jason Yosinski. In this paper, we introduce a novel form of value function, Q(s, s0), that expresses the utility of transitioning from a state s to a neighboring state s1 and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at sites.google.com/view/qss-paper. This work was accepted to the 2020 International Conference on Machine Learning (ICML). Full paper: https://arxiv.org/abs/2002.09505
