Shape Reconstruction by Learning Differentiable Surface Representations

Sep 29, 2020
Authors: Jan Bednařík, Shaifali Parashar, Erhan Gündoğdu, Mathieu Salzmann, Pascal Fua

Description: Generative models that produce point clouds have emerged as a powerful tool for representing 3D surfaces, and the best current ones rely on learning an ensemble of parametric representations. Unfortunately, they offer no control over the deformations of the surface patches that form the ensemble and thus fail to prevent the patches from overlapping or from collapsing into single points or lines. As a consequence, computing shape properties such as surface normals and curvatures becomes difficult and unreliable. In this paper, we show that we can exploit the inherent differentiability of deep networks to leverage differential surface properties during training, which prevents patch collapse and strongly reduces patch overlap. This, in turn, lets us reliably compute quantities such as surface normals and curvatures. We demonstrate on several tasks that this yields more accurate surface reconstructions than state-of-the-art methods in terms of normal estimation and the number of collapsed and overlapping patches.
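To illustrate the core idea, here is a minimal sketch of how surface normals follow from a differentiable parametric patch f: (u, v) → R³. The `surface` function below is a hypothetical stand-in (a unit-sphere chart) for one learned network patch, and the partial derivatives are taken with finite differences for simplicity; in the paper's setting they would instead come from the network's own differentiability via autograd. None of these function names are from the paper.

```python
import math

def surface(u, v):
    # Hypothetical parametric patch: a unit-sphere chart, standing in
    # for one learned mapping f: (u, v) -> R^3.
    return (math.sin(u) * math.cos(v),
            math.sin(u) * math.sin(v),
            math.cos(u))

def surface_normal(f, u, v, h=1e-5):
    # Tangent vectors f_u and f_v via central differences
    # (a stand-in for automatic differentiation of the network).
    fu = [(a - b) / (2 * h) for a, b in zip(f(u + h, v), f(u - h, v))]
    fv = [(a - b) / (2 * h) for a, b in zip(f(u, v + h), f(u, v - h))]
    # The normal is the normalized cross product of the tangents.
    n = (fu[1] * fv[2] - fu[2] * fv[1],
         fu[2] * fv[0] - fu[0] * fv[2],
         fu[0] * fv[1] - fu[1] * fv[0])
    norm = math.sqrt(sum(c * c for c in n))
    return tuple(c / norm for c in n)
```

Note that the magnitude of the cross product before normalization measures the local area spanned by the tangents: when a patch collapses to a point or line, it goes to zero, which is exactly the kind of degeneracy such differential quantities can detect and penalize during training.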
