[CVPR 2020 Award Nominee] Momentum Contrast for Unsupervised Visual Representation Learning

Sep 24, 2020

Authors: Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick

Description: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.

Full Paper: https://arxiv.org/abs/1911.05722
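
Below is a minimal PyTorch sketch of the two mechanisms the description highlights: a momentum (moving-average) update of the key encoder and a queue that serves as the dictionary of negative keys, trained with an InfoNCE-style contrastive loss. The toy MLP encoders, feature dimensions, and hyperparameters are illustrative assumptions rather than the authors' configuration (the paper uses a ResNet backbone); see Algorithm 1 in the full paper for the authors' pseudocode.

```python
# Sketch of MoCo's core mechanics: momentum-updated key encoder + queue dictionary.
# All architecture choices and hyperparameters here are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoCoSketch(nn.Module):
    def __init__(self, dim=128, queue_size=4096, momentum=0.999, temperature=0.07):
        super().__init__()
        self.m = momentum
        self.T = temperature
        # Toy encoders; the paper uses a ResNet backbone instead of an MLP.
        self.encoder_q = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, dim))
        self.encoder_k = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, dim))
        # The key encoder starts as a copy of the query encoder and is not updated by backprop.
        self.encoder_k.load_state_dict(self.encoder_q.state_dict())
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        # Queue of past keys acting as the dictionary of negatives.
        self.register_buffer("queue", F.normalize(torch.randn(dim, queue_size), dim=0))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # theta_k <- m * theta_k + (1 - m) * theta_q
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data.mul_(self.m).add_(pq.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _dequeue_and_enqueue(self, keys):
        # Replace the oldest keys with the newest batch (assumes queue_size % batch == 0).
        batch = keys.shape[0]
        ptr = int(self.queue_ptr)
        self.queue[:, ptr:ptr + batch] = keys.T
        self.queue_ptr[0] = (ptr + batch) % self.queue.shape[1]

    def forward(self, im_q, im_k):
        q = F.normalize(self.encoder_q(im_q), dim=1)              # queries: N x dim
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)          # keys:    N x dim
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)      # positive logits: N x 1
        l_neg = torch.einsum("nc,ck->nk", q, self.queue.clone().detach())  # negatives: N x K
        logits = torch.cat([l_pos, l_neg], dim=1) / self.T
        labels = torch.zeros(logits.shape[0], dtype=torch.long)   # the positive key is index 0
        self._dequeue_and_enqueue(k)
        return F.cross_entropy(logits, labels)

# Usage: im_q and im_k would be two augmented views of the same images.
moco = MoCoSketch()
loss = moco(torch.randn(32, 784), torch.randn(32, 784))
loss.backward()
```

Because gradients flow only through the query encoder while the key encoder evolves slowly via the momentum update, the keys in the queue stay consistent with each other even though they were encoded at different training steps, which is what lets the dictionary grow much larger than a mini-batch.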
