Approximation based Variance Reduction for Reparameterization Gradients

NeurIPS 2020

Dec 06, 2020
Flexible variational distributions improve variational inference but are harder to optimize. In this work we present a control variate that is applicable to any reparameterizable distribution with known mean and covariance matrix, e.g. Gaussians with any covariance structure. The control variate is based on a quadratic approximation of the model, and its parameters are set using a double-descent scheme by minimizing the gradient estimator's variance. We empirically show that this control variate yields large improvements in gradient variance and optimization convergence for inference with non-factorized variational distributions.

Speakers: Tomas Geffner, Justin Domke
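The core idea can be sketched in one dimension: replace the per-sample reparameterization gradient of a model term f with the gradient of a quadratic surrogate, whose expected gradient under a Gaussian is known in closed form, and add that closed-form expectation back. The sketch below is illustrative only; it fits the quadratic by a Taylor expansion at the mean (the paper instead sets the surrogate's parameters by minimizing the estimator's variance with a double-descent scheme), and the example function `f(z) = exp(z)` is an assumption, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):   # stand-in for the model term; hypothetical example
    return np.exp(z)

def df(z):  # its derivative
    return np.exp(z)

mu, sigma = 0.0, 0.5
eps = rng.standard_normal(20_000)
z = mu + sigma * eps  # reparameterization: z ~ N(mu, sigma^2)

# Plain reparameterization gradient w.r.t. mu: d/dmu f(mu + sigma*eps) = f'(z)
g_plain = df(z)

# Quadratic surrogate fhat(z) = f(mu) + f'(mu)(z - mu) + 0.5 f''(mu)(z - mu)^2,
# here fitted by Taylor expansion at mu (illustrative choice).
b, c = df(mu), np.exp(mu)      # f'(mu), f''(mu) for f = exp
dfhat = b + c * (z - mu)       # per-sample surrogate gradient fhat'(z)
E_dfhat = b                    # E[fhat'(z)] in closed form under the Gaussian

# Control-variated estimator: still unbiased, lower variance when fhat ≈ f.
g_cv = g_plain - dfhat + E_dfhat

print(g_plain.mean(), g_cv.mean())  # both estimate d/dmu E[f(z)] = exp(sigma^2/2)
print(g_plain.var(), g_cv.var())    # the CV estimator's variance is much smaller
```

Subtracting the surrogate's gradient cancels the dominant (locally linear) part of the per-sample noise, which is why a good quadratic fit of the model gives large variance reductions; the multivariate case replaces the scalar coefficients with a gradient vector and a Hessian-like matrix.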
