[NeurIPS 2019 Paper Highlight] Yang Song @ Stanford University on Generative Modeling
Aug 14, 2020
Details
This episode is an interview with Yang Song from Stanford University, discussing highlights from his paper "Generative Modeling by Estimating Gradients of the Data Distribution". Yang Song is a 4th-year PhD student in Computer Science at Stanford University. He works with Prof. Stefano Ermon on generative modeling and robust machine learning. He obtained his Bachelor of Science in Mathematics and Physics from Tsinghua University.

Paper at a Glance: We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution at all noise levels. For sampling, we propose an annealed Langevin dynamics in which we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on the MNIST, CelebA, and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.

Poster: https://drive.google.com/file/d/133Ql8z_Javs62w4tiNQD2Q70iLMn5nI3/view?usp=sharing
Paper: https://arxiv.org/abs/1907.05600
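To make the training idea concrete, here is a minimal PyTorch sketch of the joint denoising score matching objective the abstract describes: perturb each example with Gaussian noise at one of several levels and regress the network's output onto the score of the perturbation kernel. The name `score_net` and the exact shapes are assumptions for illustration, not the authors' released code.

```python
import torch

def anneal_dsm_loss(score_net, x, sigmas):
    """Denoising score matching loss, jointly over noise levels (a sketch).

    `score_net(x, idx)` is assumed to return the estimated score of the
    data distribution perturbed with noise level `sigmas[idx]`.
    `x` is a batch of data, e.g. images of shape (batch, C, H, W).
    """
    batch = x.shape[0]
    # Sample one noise level per example in the batch.
    idx = torch.randint(0, len(sigmas), (batch,), device=x.device)
    sigma_b = sigmas[idx]                                    # (batch,)
    sigma = sigma_b.view(batch, *([1] * (x.dim() - 1)))      # broadcastable
    noise = torch.randn_like(x) * sigma
    x_tilde = x + noise
    # Score of the Gaussian perturbation kernel N(x_tilde; x, sigma^2 I).
    target = -noise / sigma ** 2
    score = score_net(x_tilde, idx)
    sq_err = ((score - target) ** 2).flatten(1).sum(dim=1)
    # Weight each level by lambda(sigma) = sigma^2 so that all noise
    # levels contribute to the loss on a comparable scale.
    return (0.5 * sq_err * sigma_b ** 2).mean()
```

In the paper, the noise levels form a geometric sequence from a large sigma (on the order of the data scale) down to a small one, e.g. 10 levels from 1.0 to 0.01, so the largest level fills in the low-density regions between modes while the smallest stays close to the true data distribution.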
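Sampling then anneals through those same noise levels, running a few Langevin steps at each one with a step size that shrinks with the noise. Below is a hedged sketch of this annealed Langevin dynamics (Algorithm 1 of the paper); `score_net` is the same assumed network as above, and the defaults eps = 2e-5 and T = 100 follow the values reported in the paper's experiments.

```python
import torch

@torch.no_grad()
def annealed_langevin(score_net, shape, sigmas, eps=2e-5, T=100):
    """Annealed Langevin dynamics sampling (sketch of Algorithm 1).

    `sigmas` must be ordered from largest to smallest noise level.
    Returns a batch of samples with the given `shape`.
    """
    x = torch.rand(shape)  # initialize from uniform noise
    for i, sigma in enumerate(sigmas):
        # Step size decays with the noise level, per Algorithm 1.
        alpha = eps * (sigma / sigmas[-1]) ** 2
        idx = torch.full((shape[0],), i, dtype=torch.long)
        for _ in range(T):
            z = torch.randn_like(x)
            # Langevin update: gradient ascent on log-density plus noise.
            x = x + 0.5 * alpha * score_net(x, idx) + alpha ** 0.5 * z
    return x
```

Starting at the largest noise level gives the dynamics well-defined gradients far from the data manifold; annealing toward the smallest level lets the final iterates approximate samples from the (nearly) unperturbed data distribution.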
NeurIPS 2019