From Importance Sampling to Doubly Robust Policy Gradient

ICML 2020

Jul 12, 2020

We show that policy gradient (PG) and its variance-reduction variants can be derived by taking finite differences of function evaluations supplied by estimators from the importance sampling (IS) family for off-policy evaluation (OPE). Starting from the doubly robust (DR) estimator [Jiang and Li, 2016], we provide a simple derivation of a very general and flexible form of PG, which subsumes the state-of-the-art variance-reduction technique [Cheng et al., 2019] as a special case and immediately hints at further variance-reduction opportunities overlooked by the existing literature.

Speakers: Jiawei Huang, Nan Jiang
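As a rough sketch of the core idea (the notation below is our own paraphrase, not taken verbatim from the talk): let a trajectory (s_0, a_0, r_0, ..., s_{H-1}, a_{H-1}, r_{H-1}) be drawn from a behavior policy \pi_\theta, and let \pi_{\theta'} denote a perturbed target policy. In LaTeX:

    % Trajectory-wise IS estimate of J(\pi_{\theta'}) from a trajectory drawn under \pi_\theta:
    \hat{J}_{\mathrm{IS}}(\theta')
      = \Bigl( \prod_{t=0}^{H-1} \frac{\pi_{\theta'}(a_t \mid s_t)}{\pi_{\theta}(a_t \mid s_t)} \Bigr)
        \sum_{t=0}^{H-1} \gamma^t r_t .

    % Differentiating at \theta' = \theta, i.e., the limit of the finite difference
    % (\hat{J}(\theta + \delta) - \hat{J}(\theta)) / \delta, recovers the REINFORCE gradient
    % (every ratio equals 1 at \theta' = \theta, so only the log-derivative terms survive):
    \nabla_{\theta'} \hat{J}_{\mathrm{IS}}(\theta') \Big|_{\theta' = \theta}
      = \Bigl( \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \Bigr)
        \sum_{t=0}^{H-1} \gamma^t r_t .

    % The DR estimator of [Jiang and Li, 2016] corrects the raw return with a
    % value-function model (\hat{Q}, \hat{V}) via the recursion (base case \hat{J}_{\mathrm{DR}}^{(H)} = 0):
    \hat{J}_{\mathrm{DR}}^{(t)}
      = \hat{V}(s_t) + \rho_t \bigl( r_t + \gamma \hat{J}_{\mathrm{DR}}^{(t+1)} - \hat{Q}(s_t, a_t) \bigr),
    \qquad
    \rho_t = \frac{\pi_{\theta'}(a_t \mid s_t)}{\pi_{\theta}(a_t \mid s_t)} .

Differentiating \hat{J}_{\mathrm{DR}}^{(0)} at \theta' = \theta in the same way yields a PG expression in which \hat{Q} and \hat{V} act as action- and state-dependent baselines (control variates), which is the general, flexible form the abstract refers to.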
