A Hardware Prototype Targeting Distributed Deep Learning for On-Device Inference
Authors: Allen-Jasmin Farcas, Guihong Li, Kartikeya Bhardwaj, Radu Marculescu

Description: This paper presents a hardware prototype and a framework for a new communication-aware model compression scheme for distributed on-device inference. The approach relies on Knowledge Distillation (KD) and achieves compression ratios of several orders of magnitude relative to a large pre-trained teacher model. The distributed hardware prototype consists of multiple student models deployed on Raspberry Pi 3 nodes, which run Wide ResNet and VGG models on the CIFAR-10 dataset for real-time image classification. Compared to the initial teacher model, the authors observe significant reductions in memory footprint (50×), energy consumption (14×), and latency (33×), along with a 12× increase in performance, without any significant accuracy loss. This is an important step towards deploying deep learning models in IoT applications.
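The description does not spell out the distillation objective used to compress the teacher into the student models. As background, the standard KD loss (temperature-softened cross-entropy between teacher and student outputs, in the style of Hinton et al.) can be sketched in plain Python; the temperature value and function names below are illustrative assumptions, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # Cross-entropy between the softened teacher distribution and the
    # softened student distribution. The T^2 factor is the conventional
    # rescaling so soft-label gradients stay comparable in magnitude to
    # a hard-label loss term.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
    return temperature ** 2 * ce
```

In a full training loop this soft-label term is typically combined with an ordinary cross-entropy loss on the ground-truth labels; the student then learns from both the dataset and the teacher's output distribution.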