Abstract: Model efficiency has become increasingly important in computer vision. In this paper, we systematically study neural network architecture design choices for object detection and propose several key optimizations to improve efficiency. First, we propose a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multi-scale feature fusion. Second, we propose a compound scaling method that uniformly scales the resolution, depth, and width of the backbone, feature network, and box/class prediction networks at the same time. Based on these optimizations and better backbones, we have developed a new family of object detectors, called EfficientDet, which consistently achieves much better efficiency than prior art across a wide spectrum of resource constraints. In particular, with a single model and a single scale, our EfficientDet-D7 achieves state-of-the-art 55.1 AP on COCO test-dev with 77M parameters and 410B FLOPs, being 4x to 9x smaller and using 13x to 42x fewer FLOPs than previous detectors.
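The weighted feature fusion mentioned in the abstract can be sketched with the paper's "fast normalized fusion": each input feature map gets a learnable non-negative scalar weight, and the weights are normalized by their sum plus a small epsilon instead of a softmax. A minimal NumPy sketch (the function name and fixed example weights are illustrative, not from the paper's code):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fast normalized fusion as described in the EfficientDet paper:
    O = sum_i (w_i / (eps + sum_j w_j)) * I_i, with w_i kept >= 0 via ReLU."""
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # ReLU on the weights
    norm = w / (eps + w.sum())                                  # normalize without softmax
    return sum(n * f for n, f in zip(norm, features))

# Fuse two same-shape feature maps with equal weights.
f1 = np.ones((4, 4))
f2 = 3 * np.ones((4, 4))
out = fast_normalized_fusion([f1, f2], [1.0, 1.0])  # close to elementwise mean, i.e. ~2.0
```

In the real network the weights are learned per fusion node; the epsilon avoids division by zero while keeping the computation cheaper than a softmax over the weights.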
Authors: Mingxing Tan, Ruoming Pang, Quoc V. Le (Google AI)
MobileNetV2 and EfficientNet Video: https://crossminds.ai/video/6022f59cb2c12c68a0dfbd26/
EfficientDet: Scalable and Efficient Object Detection: https://arxiv.org/abs/1911.09070
Credits: I would like to thank Deepak Anand for his discussions on this topic. Some slides have been taken, in part or in full, from Jinwon Lee’s presentation of EfficientDet on SlideShare. Thanks to Jinwon for sharing his presentation publicly.
1:04 Challenges and Related Works
4:13 Model Scaling and MBConv Recap
6:50 Architecture Overview
12:14 Weighted Feature Fusion
16:18 EfficientDet Scaling
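For the scaling section above: the paper grows the whole detector with a single compound coefficient phi. A sketch of the scaling rules as given in the paper (BiFPN width 64 · 1.35^phi, BiFPN depth 3 + phi, box/class head depth 3 + floor(phi/3), input resolution 512 + 128 · phi); the function name is illustrative, and note the paper rounds channel counts to hardware-friendly values rather than truncating:

```python
def efficientdet_scaling(phi):
    """Compound scaling rules for EfficientDet-D{phi}, per the paper's equations."""
    return {
        "bifpn_width": int(64 * (1.35 ** phi)),  # BiFPN channels (paper rounds these)
        "bifpn_depth": 3 + phi,                  # number of BiFPN layers
        "head_depth": 3 + phi // 3,              # box/class prediction net layers
        "input_size": 512 + 128 * phi,           # input image resolution
    }

cfg = efficientdet_scaling(0)  # EfficientDet-D0: width 64, depth 3, 512x512 input
```

A single knob keeps the backbone, feature network, and heads balanced as the model grows, instead of tuning each dimension separately.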