Predicting Generalization in Deep Learning (PGDL): Opening remark

NeurIPS 2020

Dec 06, 2020
This paper provides non-vacuous and numerically-tight generalization guarantees for deep learning, as well as theoretical insights into why and how deep learning can generalize well despite its large capacity, complexity, possible algorithmic instability, non-robustness, and sharp minima, responding to an open question in the literature. We also propose new open problems and discuss the limitations of our results.

Speakers: Yiding Jiang
