[NeurIPS 2019 Paper Highlight] Sebastian Goldt @Institut de Physique Théorique
Aug 14, 2020

This episode is an interview with Sebastian Goldt of the Institut de Physique Théorique, discussing highlights from his paper "Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup," accepted as an oral presentation at NeurIPS 2019.
Sebastian is a postdoc at the École normale supérieure in Paris, where he works on problems at the interface of theoretical physics and machine learning with Florent Krzakala and Lenka Zdeborová. The main theme of his current research is understanding why neural networks generalize well from examples in practice, when classical learning theory would predict that they cannot. Sebastian's approach is to use concepts and tools from statistical physics to build models of the key drivers of generalization in neural networks. He is also interested in machine learning as a tool for handling the vast data sets generated by large-scale experiments, in particular in neuroscience, and as a source of novel theoretical ideas, e.g. for the thermodynamics of computation.
Paper At A Glance:
Deep neural networks achieve stellar generalization even when they have enough parameters to easily fit all their training data. We study this phenomenon by analyzing the dynamics and the performance of overparameterized two-layer neural networks in the teacher-student setup, where one network, the student, is trained on data generated by another network, called the teacher. We show how the dynamics of stochastic gradient descent (SGD) is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalization error of student networks that have more parameters than their teachers. We find that the final generalization error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviors have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalization in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set.
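To make the setup concrete, here is a minimal NumPy sketch of the teacher-student experiment the abstract describes (not the authors' code): a wider "student" two-layer network is trained with online SGD, one fresh Gaussian input per step, on labels produced by a fixed, narrower "teacher" network, training both layers. All sizes, the learning rate, and the choice of ReLU activation are illustrative assumptions; the paper treats sigmoidal and ReLU activations analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 500, 2, 4   # input dim; teacher hidden units; student hidden units (K > M)
lr = 0.05             # illustrative learning rate

def g(z):
    return np.maximum(z, 0.0)  # ReLU activation (one of the cases studied)

def net(w, v, x):
    # Two-layer network: output = sum_k v_k * g(w_k . x / sqrt(N))
    return v @ g(w @ x / np.sqrt(N))

# Fixed teacher; trainable, overparameterized student.
wt = rng.standard_normal((M, N)); vt = np.ones(M)
w = rng.standard_normal((K, N)); v = 0.5 * rng.standard_normal(K)

def mse(n_test=2000):
    # Generalization error estimated on fresh Gaussian test inputs.
    xs = rng.standard_normal((n_test, N))
    return float(np.mean([(net(w, v, x) - net(wt, vt, x)) ** 2 for x in xs]))

init_mse = mse()
for _ in range(30000):
    x = rng.standard_normal(N)       # online SGD: fresh i.i.d. input each step
    pre = w @ x / np.sqrt(N)
    h = g(pre)
    err = v @ h - net(wt, vt, x)     # prediction error on this sample
    grad_v = err * h                                          # second-layer gradient
    grad_w = err * np.outer(v * (pre > 0), x) / np.sqrt(N)    # first-layer gradient
    v -= lr * grad_v                 # training both layers (the favorable case)
    w -= lr * grad_w
final_mse = mse()
print(init_mse, final_mse)
```

With both layers trained, the student's test error drops well below its initial value, consistent with the abstract's "both layers" regime; freezing `v` and training only `w` is the other regime the paper contrasts.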
Poster: https://drive.google.com/file/d/1e9m905QpxWbqGOZSxcVe1MuXpiHxRkS3/view
Paper: http://papers.nips.cc/paper/8921-dynamics-of-stochastic-gradient-descent-for-two-layer-neural-networks-in-the-teacher-student-setup.pdf
NeurIPS 2019