[NeurIPS 2019 Paper Highlight] Yair Carmon @ Stanford University


Feb 23, 2021

Details
This episode is an interview with Yair Carmon from Stanford University, discussing highlights from his paper "Unlabeled Data Improves Adversarial Robustness," accepted at the NeurIPS 2019 conference. Yair is a PhD student in the Electrical Engineering department at Stanford University, advised by John Duchi and Aaron Sidford. He obtained his B.Sc. and M.Sc. from the Technion, Israel Institute of Technology, where he was fortunate to work with Shlomo Shamai and Tsachy Weissman. Yair's research interests are in machine learning, optimization, information theory, signal processing, and statistics; he particularly likes understanding (and getting around) fundamental limits.

Paper At A Glance: We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al. that shows a sample complexity gap between standard and robust classification. We prove that unlabeled data bridges this gap: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies by over 5 points in (i) ℓ∞ robustness against several strong attacks via adversarial training and (ii) certified ℓ2 and ℓ∞ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels.

Poster: https://drive.google.com/file/d/1M5Ja357pQ-KTQ6g_wQfm9JEbQzt17Cml/view
Paper: https://arxiv.org/pdf/1905.13736.pdf
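The self-training procedure the abstract refers to has a simple core loop: train a model on the labeled data, use it to assign pseudo-labels to the unlabeled data, then retrain on the combined set (in the paper, this final stage is adversarial training or randomized smoothing; the setup below is a hypothetical toy illustration with a nearest-centroid classifier, not the paper's implementation):

```python
# Toy sketch of self-training (hypothetical example, NOT the paper's code):
# few labeled points, many unlabeled points drawn from two Gaussian classes.
import numpy as np

def fit_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(classes, centroids, X):
    """Assign each point to the class with the nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

rng = np.random.default_rng(0)
# Only 4 labeled examples, but 200 unlabeled ones.
X_lab = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -2.0], [-1.5, -2.5]])
y_lab = np.array([1, 1, 0, 0])
X_unlab = np.concatenate([rng.normal( 2.0, 1.0, size=(100, 2)),
                          rng.normal(-2.0, 1.0, size=(100, 2))])

# Step 1: fit on the labeled data alone.
classes, cent = fit_centroids(X_lab, y_lab)
# Step 2: pseudo-label the unlabeled data with the fitted model.
y_pseudo = predict(classes, cent, X_unlab)
# Step 3: refit on labeled + pseudo-labeled data. In the paper this final
# stage is robust (adversarial) training; here it is a plain refit.
classes, cent = fit_centroids(np.concatenate([X_lab, X_unlab]),
                              np.concatenate([y_lab, y_pseudo]))
```

The key point mirrored from the paper: the labels themselves are only needed in step 1, while the (much larger) unlabeled set sharpens the final model in step 3, which is where the robustness gains come from.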
