[NeurIPS 2019 Paper Highlight] Sharon Zhou @ Stanford: HYPE of Generative Models
Aug 08, 2020
Image Generation
Deep Learning
ENet
Machine Learning
Generative Adversarial Networks
Details
This episode is an interview with Sharon Zhou from Stanford University (advised by Andrew Ng), discussing highlights from her paper "HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models," accepted as an oral presentation at the NeurIPS 2019 conference.

Sharon Zhou is a CS PhD student advised by Andrew Ng, working on generative models and the inductive biases of neural networks, as well as applications of ML to climate change and healthcare. She was previously an ML product manager at Google and at various ML startups. She was the first Harvard graduate to major in CS and Classics, and in her spare time she composes poetry and plays with generative models.

Paper Abstract: Generative models often use human evaluations to measure the perceived quality of their outputs. Automated metrics are noisy, indirect proxies because they rely on heuristics or pretrained embeddings. However, up until now, direct human evaluation strategies have been ad hoc, neither standardized nor validated. Our work establishes a gold-standard human benchmark for generative realism. We construct Human eYe Perceptual Evaluation (HYPE), a human benchmark that is (1) grounded in psychophysics research in perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) able to produce separable model performances, and (4) efficient in cost and time. We introduce two variants: one that measures visual perception under adaptive time constraints to determine the threshold at which a model's outputs appear real (e.g., 250 ms), and the other a less expensive variant that measures the human error rate on fake and real images sans time constraints. We test HYPE across six state-of-the-art generative adversarial networks and two sampling techniques on conditional and unconditional image generation using four datasets: CelebA, FFHQ, CIFAR-10, and ImageNet. We find that HYPE can track model improvements across training epochs, and we confirm via bootstrap sampling that HYPE rankings are consistent and replicable.

Poster: https://drive.google.com/file/d/1_Rz1oLBd49woRwwX-v3LHgxn9XteEPkP/view
Paper: https://arxiv.org/abs/1904.01121
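The second variant described above (human error rate on real and fake images with no time limit) is simple enough to sketch in code, as is the bootstrap resampling the abstract mentions for checking that rankings are replicable. Below is a minimal Python sketch under assumed conventions: the function names (hype_infinity, bootstrap_ci) and the (is_fake, judged_real) encoding of rater responses are illustrative assumptions, not the paper's released implementation.

```python
import random
from statistics import mean

def hype_infinity(judgments):
    """Score in the spirit of the paper's no-time-limit variant: the
    percentage of images that raters misjudge (fakes labeled real,
    or reals labeled fake).

    `judgments` is a list of (is_fake, judged_real) boolean pairs --
    a hypothetical encoding of one rater response per image.
    """
    errors = [
        (is_fake and judged_real) or (not is_fake and not judged_real)
        for is_fake, judged_real in judgments
    ]
    # Higher scores mean more convincing fakes; 50% is chance level.
    return 100.0 * mean(errors)

def bootstrap_ci(judgments, n_resamples=1000, alpha=0.05):
    """Bootstrap a confidence interval for the score by resampling
    rater judgments with replacement, mirroring the abstract's check
    that rankings are consistent and replicable."""
    scores = sorted(
        hype_infinity(random.choices(judgments, k=len(judgments)))
        for _ in range(n_resamples)
    )
    lo = scores[int(alpha / 2 * n_resamples)]
    hi = scores[int((1 - alpha / 2) * n_resamples)]
    return lo, hi

# Hypothetical toy data: half of the responses are errors.
judgments = [(True, True), (True, False), (False, True), (False, False)] * 25
print(hype_infinity(judgments))  # 50.0 -> raters are at chance
print(bootstrap_ci(judgments))   # interval around 50.0
```

Two models can then be compared by checking whether their bootstrap intervals separate, which is one way to read the paper's claim that HYPE produces separable model performances.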