SGD Learns One-Layer Networks in WGANs

ICML 2020

Details
Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.

Speakers: Qi Lei, Alex Dimakis, Costis Daskalakis, Jason D. Lee
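To make the training procedure the abstract refers to concrete, below is a minimal NumPy sketch of simultaneous stochastic gradient descent-ascent on a toy WGAN: a linear one-layer generator G(z) = Az learning to match data generated by a ground-truth matrix A_true, against a quadratic discriminator f(x) = x^T V x. The quadratic discriminator, the L2 regularizer on V, and all hyperparameters are illustrative assumptions for this sketch, not the paper's exact setting or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
d, batch = 2, 512

# Ground-truth one-layer generator: x = A_true @ z with z ~ N(0, I).
A_true = rng.normal(size=(d, d))

# Learner: linear generator G(z) = A z, quadratic discriminator f(x) = x^T V x.
# (Toy parameterization chosen for this sketch.)
A = rng.normal(size=(d, d))
V = np.zeros((d, d))

eta_g, eta_d, lam = 1e-2, 5e-2, 0.5  # step sizes; lam is a stabilizing L2 penalty on V

for step in range(1, 10001):
    z_real = rng.normal(size=(batch, d))
    z_fake = rng.normal(size=(batch, d))
    x_real = z_real @ A_true.T
    x_fake = z_fake @ A.T

    # WGAN value: L(A, V) = E[f(x_real)] - E[f(G(z))] - lam * ||V||_F^2
    cov_real = x_real.T @ x_real / batch
    cov_fake = x_fake.T @ x_fake / batch
    cov_z = z_fake.T @ z_fake / batch

    grad_V = cov_real - cov_fake - 2 * lam * V  # dL/dV, for the ascent step
    grad_A = -(V + V.T) @ A @ cov_z             # dL/dA, for the descent step

    V = V + eta_d * grad_V  # discriminator: stochastic gradient ascent
    A = A - eta_g * grad_A  # generator: stochastic gradient descent

    if step % 2000 == 0:
        gap = np.linalg.norm(A @ A.T - A_true @ A_true.T)
        print(f"step {step:5d}  ||A A^T - A* A*^T||_F = {gap:.4f}")
```

In this Gaussian toy problem the generator is only identifiable up to an orthogonal transform, so convergence is measured through A A^T approaching A_true A_true^T (the data covariance) rather than A approaching A_true directly.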
