Select, Supplement and Focus for RGB-D Saliency Detection

Sep 29, 2020
Authors: Miao Zhang, Weisong Ren, Yongri Piao, Zhengkun Rong, Huchuan Lu

Description: Depth data, which carry strong discriminative power about object location, have been proven beneficial for accurate saliency prediction. However, RGB-D saliency detection methods are also negatively influenced by randomly distributed erroneous or missing regions on the depth map or along object boundaries. This offers the possibility of achieving more effective inference with well-designed models. In this paper, we propose a new framework for accurate RGB-D saliency detection that takes account of local and global complementarities between the two modalities. This is achieved by designing a complementary interaction model discriminative enough to simultaneously select useful representations from RGB and depth data while also refining object boundaries. Moreover, we propose a compensation-aware loss to further process the information not captured by the complementary interaction model, improving generalization on challenging scenes. Experiments on six public datasets show that our method outperforms 18 state-of-the-art methods.
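The core idea of selecting trustworthy features from each modality can be illustrated with a minimal gated-fusion sketch. This is not the paper's actual complementary interaction model; it is a simplified NumPy illustration in which a sigmoid gate (with hypothetical scalar weights `w_rgb` and `w_depth`) decides, per element, how much to trust the RGB feature versus the depth feature, so that erroneous depth regions can be down-weighted rather than fused blindly:

```python
import numpy as np

def sigmoid(x):
    # Numerically standard logistic function mapping scores to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def select_and_fuse(rgb_feat, depth_feat, w_rgb, w_depth):
    """Hypothetical cross-modal selection: a sigmoid gate produces a
    per-element weight in (0, 1), and the output is a convex combination
    of the RGB and depth features. Where depth is unreliable, a gate
    near 1 lets the RGB feature dominate."""
    gate = sigmoid(w_rgb * rgb_feat + w_depth * depth_feat)
    return gate * rgb_feat + (1.0 - gate) * depth_feat

# Toy 4x4 feature maps standing in for real backbone activations
rng = np.random.default_rng(0)
rgb = rng.standard_normal((4, 4))
depth = rng.standard_normal((4, 4))
fused = select_and_fuse(rgb, depth, 0.8, 0.5)
print(fused.shape)  # (4, 4)
```

Because the gate lies strictly in (0, 1), each fused value is bounded elementwise by the corresponding RGB and depth values, which is what makes this a selection between modalities rather than an unconstrained mixture.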
