From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality

CVPR 2020

Authors: Zhenqiang Ying, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, Alan Bovik

Description: Blind or no-reference (NR) perceptual picture quality prediction is a difficult, unsolved problem of great consequence to the social and streaming media industries that impacts billions of viewers daily. Unfortunately, popular NR prediction models perform poorly on real-world distorted pictures. To advance progress on this problem, we introduce the largest (by far) subjective picture quality database, containing about 40,000 real-world distorted pictures and 120,000 patches, on which we collected about 4M human judgments of picture quality. Using these picture and patch quality labels, we built deep region-based architectures that learn to produce state-of-the-art global picture quality predictions as well as useful local picture quality maps. Our innovations include picture quality prediction architectures that produce global-to-local inferences as well as local-to-global inferences (via feedback). The dataset and source code are available at https://live.ece.utexas.edu/research.php.
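
The description refers to region-based architectures that output both a global picture quality score and local patch quality predictions. As a rough, hedged illustration only (not the authors' released model), the sketch below shows one way such a design could be wired up in PyTorch: a shared ResNet-18 backbone, a globally pooled head for the picture score, and an RoI-pooled head for patch scores. The class name RegionQualityNet, the layer sizes, and the pooled output size are all assumptions made for this example.

import torch
import torch.nn as nn
from torchvision.models import resnet18
from torchvision.ops import roi_pool

class RegionQualityNet(nn.Module):
    """Illustrative sketch: shared CNN backbone with a global head for
    picture-level quality and an RoI-pooled head for patch-level quality."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep only the convolutional layers; drop the average pool and classifier.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.global_head = nn.Linear(512, 1)          # picture-level quality score
        self.patch_head = nn.Linear(512 * 2 * 2, 1)   # patch-level quality score

    def forward(self, images, patch_boxes):
        # images: (N, 3, H, W); patch_boxes: list of (K_i, 4) boxes per image
        feats = self.features(images)                 # (N, 512, H/32, W/32)
        global_q = self.global_head(self.global_pool(feats).flatten(1))
        # Pool backbone features over each patch region; boxes are given in
        # image coordinates, so rescale them to the feature map.
        pooled = roi_pool(feats, patch_boxes, output_size=(2, 2),
                          spatial_scale=1.0 / 32)
        patch_q = self.patch_head(pooled.flatten(1))
        return global_q, patch_q

# Usage: one image with two patch regions (coordinates are illustrative).
model = RegionQualityNet()
imgs = torch.randn(1, 3, 224, 224)
boxes = [torch.tensor([[0.0, 0.0, 112.0, 112.0],
                       [56.0, 56.0, 200.0, 200.0]])]
picture_score, patch_scores = model(imgs, boxes)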
