Deep Generative Modeling for Scene Synthesis via Hybrid Representations

SIGGRAPH 2020


Jan 10, 2021

Details
This is the invited SIGGRAPH 2020 presentation for the TOG paper "Deep Generative Modeling for Scene Synthesis via Hybrid Representations."

Abstract: We present a deep generative scene modeling technique for indoor environments. Our goal is to train a generative model using a feed-forward neural network that maps a prior distribution (e.g., a normal distribution) to the distribution of primary objects in indoor scenes. We introduce a 3D object arrangement representation that models the locations and orientations of objects, based on their size and shape attributes. Moreover, our scene representation is applicable to 3D objects with different multiplicities (repetition counts), selected from a database. We show a principled way to train this model by combining discriminator losses for both a 3D object arrangement representation and a 2D image-based representation. We demonstrate the effectiveness of our scene representation and the deep learning method on benchmark datasets. We also show applications of this generative model to scene interpolation and scene completion.
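As a rough illustration (not the paper's actual architecture), the mapping the abstract describes — a feed-forward network from a Gaussian prior to an object-arrangement tensor — can be sketched in NumPy with a single linear layer; the latent dimension, object count, and attribute layout below are all hypothetical choices:

```python
import numpy as np

# Hypothetical sizes: latent dimension, max objects per scene, and
# attributes per object (x, y, orientation, size, presence logit for
# modeling object multiplicity).
LATENT_DIM, MAX_OBJECTS, ATTRS = 64, 10, 5

rng = np.random.default_rng(0)

# Stand-in for a trained feed-forward generator: one untrained linear map.
W = rng.standard_normal((LATENT_DIM, MAX_OBJECTS * ATTRS)) * 0.1

def generate_scene(z: np.ndarray) -> np.ndarray:
    """Map a latent sample to a (MAX_OBJECTS, ATTRS) arrangement matrix."""
    return (z @ W).reshape(MAX_OBJECTS, ATTRS)

# Sample from the prior (a standard normal, as in the abstract) and decode.
z = rng.standard_normal(LATENT_DIM)
scene = generate_scene(z)
print(scene.shape)  # (10, 5)
```

In the paper's setup, a network like this would be trained adversarially, with discriminator losses applied both to the 3D arrangement matrix itself and to a 2D image-based rendering of the scene.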
