SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes

Apr 08, 2021
Abstract: Neural implicit surface representations have emerged as a promising paradigm to capture 3D shapes in a continuous and resolution-independent manner. However, adapting them to articulated shapes is non-trivial. Existing approaches learn a backward warp field that maps deformed to canonical points. However, this is problematic since the backward warp field is pose dependent and thus requires large amounts of data to learn. To address this, we introduce SNARF, which combines the advantages of linear blend skinning (LBS) for polygonal meshes with those of neural implicit surfaces by learning a forward deformation field without direct supervision. This deformation field is defined in canonical, pose-independent space, allowing for generalization to unseen poses. Learning the deformation field from posed meshes alone is challenging since the correspondences of deformed points are defined implicitly and may not be unique under changes of topology. We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding. We derive analytical gradients via implicit differentiation, enabling end-to-end training from 3D meshes with bone transformations. Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy. We demonstrate our method in challenging scenarios on (clothed) 3D humans in diverse and unseen poses.

Authors: Xu Chen, Yufeng Zheng, Michael J. Black, Otmar Hilliges, Andreas Geiger (ETH Zurich; University of Tübingen; Max Planck Institute for Intelligent Systems, Tübingen)
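The core idea in the abstract — warping canonical points forward with LBS and recovering the canonical correspondence of a deformed point by iterative root finding — can be illustrated with a minimal toy sketch. This is a hypothetical 2D example with hand-coded skinning weights and two bones, not the paper's implementation; it uses plain Newton iteration with a numerical Jacobian, whereas the paper solves the root-finding problem at scale and differentiates through it analytically via the implicit function theorem.

```python
import numpy as np

def weights(x):
    """Smooth, pose-independent skinning weights for two toy bones."""
    w1 = 1.0 / (1.0 + np.exp(-4.0 * x[0]))  # sigmoid blend along the x-axis
    return np.array([1.0 - w1, w1])

def make_T(angle, t):
    """Rigid 2D bone transform (rotation + translation) in homogeneous form."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(3)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 2] = t
    return T

# Two example bone transformations (identity and a small rotation + shift).
B = [make_T(0.0, [0.0, 0.0]), make_T(0.4, [0.1, 0.0])]

def forward(x_c):
    """LBS forward warp: x_d = sum_i w_i(x_c) * (B_i @ x_c).

    Note the weights are evaluated at the *canonical* point, so the
    deformation field itself is pose-independent.
    """
    w = weights(x_c)
    xh = np.append(x_c, 1.0)
    return sum(wi * (Bi @ xh)[:2] for wi, Bi in zip(w, B))

def find_canonical(x_d, x_init, iters=20, eps=1e-6):
    """Newton root finding on f(x) = forward(x) - x_d.

    The Jacobian is estimated by finite differences here for brevity.
    Gradients w.r.t. pose would follow from implicit differentiation:
    dx_c/dtheta = -J^{-1} * d f/d theta (not implemented in this toy).
    """
    x = x_init.copy()
    for _ in range(iters):
        f = forward(x) - x_d
        J = np.stack([(forward(x + eps * e) - forward(x)) / eps
                      for e in np.eye(2)], axis=1)
        x = x - np.linalg.solve(J, f)
    return x

x_c_true = np.array([0.3, 0.2])
x_d = forward(x_c_true)                      # pose a canonical point
x_c = find_canonical(x_d, np.zeros(2))        # invert the forward warp
print(np.linalg.norm(forward(x_c) - x_d))     # residual of the recovered root
```

In the paper's setting the weight field is a neural network, a deformed point may have several canonical roots (hence "finds all canonical correspondences"), and the occupancy at a deformed point is taken as the maximum over the candidate roots; this sketch only shows the single-root inversion step.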
