Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement

Feb 23, 2021
Abstract: Content and style (C-S) disentanglement aims to decompose the underlying explanatory factors of objects into two independent subspaces. We rethink content and style from the unsupervised disentanglement perspective and propose a formulation for unsupervised C-S disentanglement based on the assumption that different factors have different importance and popularity for image reconstruction, which serves as a data bias. The corresponding model inductive bias is introduced by our proposed C-S Disentanglement Module (C-S DisMo), which assigns different and independent roles to content and style when approximating the real data distributions. Specifically, the content embedding of each image, which encodes the most dominant factors for image reconstruction, is assumed to be sampled from a distribution shared across the dataset. The style embedding of a particular image, encoding the remaining factors, customizes the shared distribution through an affine transformation. Experiments on several popular datasets demonstrate that our method achieves state-of-the-art unsupervised C-S disentanglement, comparable to or even better than supervised methods. We further verify the effectiveness of our method on two downstream tasks: domain translation and single-view 3D reconstruction.

Authors: Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng (Hong Kong University of Science and Technology)
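The core mechanism described above, a per-image style embedding customizing a dataset-wide content distribution via an affine transformation, can be illustrated with a minimal sketch. This is an assumption-laden toy (plain NumPy, a made-up `affine_customize` helper, AdaIN-style scale-and-shift modulation), not the authors' actual C-S DisMo implementation:

```python
import numpy as np

def affine_customize(content, style_scale, style_shift):
    """Apply a style-conditioned affine transformation to a content
    embedding: elementwise scale and shift. Hypothetical sketch of the
    customization step described in the abstract, not the paper's code."""
    return style_scale * content + style_shift

rng = np.random.default_rng(0)

# Content embedding: assumed sampled from ONE distribution shared across
# the whole dataset (here, a standard normal for illustration).
content = rng.standard_normal(8)

# Style embedding for a particular image: encodes the remaining factors
# as per-image scale and shift parameters.
style_scale = rng.standard_normal(8)
style_shift = rng.standard_normal(8)

# The customized embedding mixes the shared content distribution with
# image-specific style, and would feed the decoder for reconstruction.
customized = affine_customize(content, style_scale, style_shift)
```

With identity style parameters (scale 1, shift 0) the customized embedding reduces to the shared content sample, which is one way to see that content and style play independent roles here.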
