Learning both Weights and Connections for Efficient Neural Networks (Research Paper Walkthrough)


Dec 29, 2021
#neuralnetworks #pruning #ai

This research proposes a three-step method for training efficient neural networks that are lightweight, can be deployed on-device, and still retain state-of-the-art accuracy (a minimal code sketch of the idea follows this description).

⏩ Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections. Our method prunes redundant connections using a three-step process. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine-tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.

Sign-up for Email Subscription - https://forms.gle/duSwrYAGw6zUhoGf9

⏩ OUTLINE:
00:00 - Like, Share, and Subscribe :)
00:40 - Sign-up to Email Subscription
00:57 - Abstract and Background
01:43 - Three-step Training Pipeline for Training Efficient Neural Networks
03:30 - Intuition behind the working and idea
06:06 - Effect of the Dropout Ratio and How to Choose It for Training Efficient Neural Networks
08:09 - Iterative Pruning and Pruning Neurons

⏩ Paper Title: Learning both Weights and Connections for Efficient Neural Networks
⏩ Paper: https://arxiv.org/pdf/1506.02626.pdf
⏩ Authors: Song Han, Jeff Pool, John Tran, William J. Dally
⏩ Organisation: Stanford, NVIDIA

Please feel free to share the content and subscribe to my channel :)
⏩ Subscribe - https://youtube.com/channel/UCoz8NrwgL7U9535VNc0mRPA?sub_confirmation=1

BERT use-cases in NLP: https://www.youtube.com/watch?v=uhnKsGDyhEg&list=PLsAqq9lZFOtX-WN8lldIOI7p-p0lBzjtY

**********************************************
If you want to support me financially, which is totally optional and voluntary ❤️
You can consider buying me chai (because I don't drink coffee :) ) at https://www.buymeacoffee.com/TechvizCoffee
❤️ Support using Paypal - https://www.paypal.com/paypalme/TechVizDataScience
**********************************************

⏩ Youtube - https://www.youtube.com/c/TechVizTheDataScienceGuy
⏩ LinkedIn - https://linkedin.com/in/prakhar21
⏩ Medium - https://medium.com/@prakhar.mishra
⏩ GitHub - https://github.com/prakhar21
⏩ Twitter - https://twitter.com/rattller

*********************************************
Tools I use for making videos :)
⏩ iPad - https://tinyurl.com/y39p6pwc
⏩ Apple Pencil - https://tinyurl.com/y5rk8txn
⏩ GoodNotes - https://tinyurl.com/y627cfsa

#techviz #datascienceguy #nlproc #machinelearning
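
To make the three-step pipeline from the abstract concrete, here is a minimal PyTorch sketch of magnitude-based pruning with a binary mask. The tiny MLP, the `keep_ratio` fraction, and the dummy fine-tuning data are illustrative assumptions, not the authors' AlexNet/VGG setup; the paper's per-layer quality-parameter thresholding and sensitivity analysis are simplified here to a single keep fraction applied to every weight matrix.

import torch
import torch.nn as nn

def magnitude_masks(model, keep_ratio=0.5):
    """Build per-layer binary masks that zero out the smallest-magnitude weights.
    keep_ratio is the fraction of weights to keep per layer (an assumed knob,
    standing in for the paper's per-layer quality parameter)."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                      # skip biases
            continue
        n = param.numel()
        k = max(1, int(keep_ratio * n))          # number of weights to keep
        if k >= n:
            masks[name] = torch.ones_like(param)
            continue
        # threshold = magnitude of the (n - k)-th smallest entry; keep anything larger
        threshold = param.abs().flatten().kthvalue(n - k).values
        masks[name] = (param.abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Zero out pruned connections; re-apply after every optimizer step."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Step 1: train the dense network as usual (standard training loop, omitted here).
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# Step 2: prune the low-magnitude connections.
masks = magnitude_masks(model, keep_ratio=0.5)
apply_masks(model, masks)

# Step 3: retrain (fine-tune) the surviving weights, re-applying the masks after
# each update so pruned connections stay at zero. Dummy data for illustration.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
inputs, targets = torch.randn(32, 784), torch.randint(0, 10, (32,))
for _ in range(5):
    loss = nn.functional.cross_entropy(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    apply_masks(model, masks)                    # keep pruned weights at zero

Re-applying the mask after each optimizer step is what keeps pruned connections at zero during retraining, which mirrors step three of the pipeline; in the paper this prune-and-retrain cycle is also repeated iteratively to reach higher compression.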
