HeadGAN: Video-and-Audio-Driven Talking Head Synthesis

Mar 29, 2021
Abstract: Recent attempts to solve the problem of talking head synthesis from a single reference image have shown promising results. However, most of them fail to preserve the identity of the source, or perform poorly in terms of photo-realism, especially under extreme head poses. We propose HeadGAN, a novel reenactment approach that conditions synthesis on 3D face representations, which can be extracted from any driving video and adapted to the facial geometry of any source. We improve the plausibility of mouth movements by utilising audio features as a complementary input to the Generator. Quantitative and qualitative experiments demonstrate the merits of our approach.

Authors: Michail Christos Doukas, Stefanos Zafeiriou, Viktoriia Sharmanska (Imperial College London, Huawei Technologies UK)
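The abstract describes a generator conditioned on two inputs: an image-like 3D face representation derived from the driving video, and audio features as a complementary signal. The following is a minimal PyTorch sketch of one plausible way to fuse such inputs; the layer choices, dimensions, and the spatial-broadcast fusion are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AudioConditionedGenerator(nn.Module):
    """Hypothetical sketch: encode an image-like 3D face representation,
    inject a global audio feature vector by broadcasting it over the
    spatial grid, and decode to an output frame."""

    def __init__(self, audio_dim=64, img_channels=3, feat_channels=32):
        super().__init__()
        # Encoder for the rendered 3D face representation (illustrative).
        self.visual_enc = nn.Sequential(
            nn.Conv2d(img_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Project audio features into the visual feature space.
        self.audio_proj = nn.Linear(audio_dim, feat_channels)
        # Decode fused features back to an RGB frame.
        self.decoder = nn.Conv2d(feat_channels, img_channels, kernel_size=3, padding=1)

    def forward(self, face_repr, audio_feat):
        v = self.visual_enc(face_repr)          # (B, C, H, W)
        a = self.audio_proj(audio_feat)         # (B, C)
        v = v + a[:, :, None, None]             # broadcast audio over space
        return torch.sigmoid(self.decoder(v))  # synthesised frame in [0, 1]

gen = AudioConditionedGenerator()
frame = gen(torch.randn(2, 3, 64, 64), torch.randn(2, 64))
print(frame.shape)  # torch.Size([2, 3, 64, 64])
```

In the real model the fusion would be learned jointly with reenactment and identity-preservation losses; this sketch only shows the shape-level plumbing of conditioning a generator on both modalities.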
