Abstract: Consumer-level depth cameras and depth sensors embedded in mobile devices enable numerous applications, such as AR games and face identification. However, the quality of the captured depth is sometimes insufficient for 3D reconstruction, tracking, and other computer vision tasks. In this paper, we propose a self-supervised approach to denoise and refine the depth produced by a low-quality sensor. We record simultaneous RGB-D sequences with unsynchronized lower- and higher-quality cameras and solve the challenging problem of aligning the sequences both temporally and spatially. We then train a deep neural network to denoise the lower-quality depth, using the matched higher-quality data as a source of supervision. We experimentally validate our method against state-of-the-art filtering-based and deep denoising techniques and demonstrate its application to 3D object reconstruction, where our approach leads to more detailed fused surfaces and better tracking.
Authors: Akhmedkhan Shabanov, Ilya Krotov, Nikolay Chinaev, Vsevolod Poletaev, Sergei Kozlukov, Igor Pasechnik, Bulat Yakupov, Artsiom Sanakoyeu, Vadim Lebedev, Dmitry Ulyanov (MIPT)