Authors: Yang Wang, Yang Cao, Zheng-Jun Zha, Jing Zhang, Zhiwei Xiong

Description: State-of-the-art image classification algorithms built upon convolutional neural networks (CNNs) are commonly trained on large annotated datasets of high-quality images. When applied to low-quality images, their performance degrades significantly, since the structural and statistical properties of pixels in the neighborhood are obstructed by image degradation. To address this problem, this paper proposes a novel deep degradation prior for low-quality image classification. It is based on the statistical observations that, in the deep representation space, image patches with structural similarity follow a uniform distribution even if they come from different images, and that the distributions of corresponding patches in low- and high-quality images have uniform margins under the same degradation condition. Accordingly, we propose a feature de-drifting module (FDM) to learn the mapping between deep representations of low- and high-quality images, and leverage it as a deep degradation prior (DDP) for low-quality image classification. Since these statistical properties are independent of image content, the deep degradation prior can be learned on a small training set without supervision from semantic labels, and can serve as a "plug-in" module for existing classification networks to improve their performance on degraded images. Evaluations on the benchmark dataset ImageNet-C demonstrate that the proposed DDP improves the accuracy of pre-trained network models by more than 20% under various degradation conditions. Even in the extreme setting where only 10 images from the CUB-C dataset are used to train the DDP, our method improves the accuracy of VGG16 on ImageNet-C from 37% to 55%.
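
The abstract does not specify the FDM architecture, but the core idea (learn a content-independent mapping that pulls degraded deep features back toward their high-quality counterparts, without semantic labels) can be sketched at toy scale. The snippet below is an illustrative assumption, not the paper's method: it stands in a closed-form linear map for the convolutional FDM, and simulates the "uniform margin" observation as a fixed linear distortion plus offset applied to clean features.

```python
import numpy as np

# Toy sketch of feature de-drifting (hypothetical stand-in for the paper's
# FDM, which is a learned convolutional module). Rows are patch features.
rng = np.random.default_rng(0)

# Simulated deep features of high-quality patches.
f_high = rng.normal(size=(500, 64))

# One degradation condition is assumed here to drift features by a fixed
# linear distortion plus offset (the "uniform margin"), with small noise.
A = np.eye(64) * 0.6 + rng.normal(scale=0.02, size=(64, 64))
b = rng.normal(scale=0.5, size=64)
f_low = f_high @ A + b + rng.normal(scale=0.01, size=f_high.shape)

# Learn the de-drifting map by least squares on (low, high) feature pairs;
# note no class labels are involved, matching the label-free training claim.
X = np.hstack([f_low, np.ones((len(f_low), 1))])  # append bias column
Wc, *_ = np.linalg.lstsq(X, f_high, rcond=None)
f_restored = X @ Wc

# Mean feature drift before and after de-drifting.
drift_before = np.mean(np.linalg.norm(f_low - f_high, axis=1))
drift_after = np.mean(np.linalg.norm(f_restored - f_high, axis=1))
print(drift_after < drift_before)
```

Because the drift is (approximately) content-independent in this toy setup, a map fitted on a handful of image pairs transfers to unseen patches, which is the intuition behind the paper's 10-image training result.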