The Wasserstein Proximal Gradient Algorithm

NeurIPS 2020


Dec 06, 2020
We consider the task of sampling from a log-concave probability distribution. This target distribution can be seen as a minimizer of the relative entropy functional defined over the space of probability distributions. The relative entropy decomposes as the sum of a functional called the potential energy, assumed to be smooth, and a nonsmooth functional called the entropy. We adopt a Forward-Backward (FB) Euler scheme to discretize the gradient flow of the relative entropy. This FB algorithm can be seen as a proximal gradient algorithm minimizing the relative entropy over the space of probability measures. Using techniques from convex optimization and optimal transport, we provide a non-asymptotic analysis of the FB algorithm, whose convergence rate matches that of the classical proximal gradient algorithm in Euclidean spaces. The practical implementation of the FB algorithm can be challenging: in practice, the user may choose to discretize the space and work with empirical measures, in which case we provide a closed-form formula for the proximity operator of the entropy.

Speakers: Adil Salim, Anna Korba, Giulia Luise
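The Euclidean proximal gradient algorithm whose rate the abstract refers to alternates an explicit (forward) gradient step on the smooth term with an implicit (backward) proximal step on the nonsmooth term. A minimal sketch of this Euclidean analogue, using the lasso objective as a stand-in smooth-plus-nonsmooth problem (the matrix `A`, vector `b`, and penalty `lam` are illustrative choices, not from the talk; the closed-form soft-thresholding prox here plays the role that the closed-form entropy prox plays in the paper's setting):

```python
import numpy as np

def soft_threshold(x, tau):
    # Closed-form proximity operator of tau * ||.||_1, the Euclidean
    # counterpart of having a closed-form prox for the nonsmooth term.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, step, n_iters=500):
    """Forward-Backward iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Forward step: explicit gradient descent on the smooth quadratic term.
    Backward step: proximal map of the nonsmooth l1 term.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x
```

A standard choice for `step` is `1 / L`, where `L` is the largest eigenvalue of `A.T @ A` (the Lipschitz constant of the gradient of the smooth term); this is the regime in which the O(1/k) proximal gradient rate holds.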
