AlexNet: ImageNet Classification with Deep Convolutional Neural Networks (Paper Explained)

Jan 30, 2021 | 493 views
Abstract: We trained a large, deep convolutional neural network to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 39.7% and 18.9% which is considerably better than the previous state-of-the-art results. The neural network, which has 60 million parameters and 500,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and two globally connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional nets. To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective.

Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (University of Toronto)
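The abstract's "60 million parameters" figure can be roughly reconstructed from the layer sizes given in the paper (five convolutional layers, then the globally connected layers and the 1000-way softmax). The sketch below is an approximation, not the authors' own accounting: the per-layer kernel shapes follow the paper, and the halved input channels for conv2, conv4, and conv5 reflect the two-GPU split, but the exact bookkeeping here is an assumption.

```python
# Rough parameter count for the AlexNet architecture described in the abstract.
# Layer shapes follow the paper; the two-GPU split halves the input channels
# seen by conv2, conv4, and conv5 (conv3 connects across both GPUs).
conv_layers = [
    # (kernel_h, kernel_w, in_channels, out_channels)
    (11, 11, 3, 96),      # conv1: 11x11 kernels over RGB input
    (5, 5, 48, 256),      # conv2: 48 = 96 / 2 GPUs
    (3, 3, 256, 384),     # conv3: cross-GPU connections
    (3, 3, 192, 384),     # conv4: 192 = 384 / 2 GPUs
    (3, 3, 192, 256),     # conv5
]
fc_layers = [
    (6 * 6 * 256, 4096),  # first globally connected layer (on 6x6x256 feature maps)
    (4096, 4096),         # second globally connected layer
    (4096, 1000),         # final 1000-way softmax
]

# weights + one bias per output channel / unit
conv_params = sum(kh * kw * cin + 1 for kh, kw, cin, cout in conv_layers
                  for _ in range(cout))
fc_params = sum(cin * cout + cout for cin, cout in fc_layers)
total = conv_params + fc_params
print(f"total parameters: {total:,}")  # on the order of 60 million, as the abstract states
```

Most of the parameters sit in the first globally connected layer (roughly 38 million of the total), which is why the paper's regularization effort (dropout) targets those layers.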

0:00 - Intro & Overview
2:00 - The necessity of larger models
6:20 - Why CNNs?
11:05 - ImageNet
12:05 - Model Architecture Overview
14:35 - ReLU Nonlinearities
18:45 - Multi-GPU training
21:30 - Classification Results
24:30 - Local Response Normalization
28:05 - Overlapping Pooling
32:25 - Data Augmentation
38:30 - Dropout
40:30 - More Results
43:50 - Conclusion