[ECCV-2020] Cross-Identity Motion Transfer for Arbitrary Objects through Pose-Attentive Video Reassembling

ECCV 2020


Aug 18, 2020
We propose an attention-based network for transferring motion between arbitrary objects. Given one or more source images and a driving video, our network animates the subject in the source images according to the motion in the driving video. In our attention mechanism, dense similarities between the learned keypoints in the source and driving images are computed in order to retrieve appearance information from the source images. Taking a different approach from the well-studied warping-based models, our attention-based model has several advantages. By reassembling non-locally searched pieces of the source content, our approach can produce more realistic outputs. Furthermore, our system can make use of multiple observations of the source appearance (e.g., the front and sides of a face) to make the results more accurate. To reduce the training-testing discrepancy of self-supervised learning, a novel cross-identity training scheme is additionally introduced. With this scheme, our network is trained to transfer motion between different subjects, as in the real testing scenario. Experimental results validate that our method produces visually pleasing results in various object domains, showing better performance than previous works.

Reference:
Cross-Identity Motion Transfer for Arbitrary Objects through Pose-Attentive Video Reassembling
Proc. European Conference on Computer Vision (ECCV), 2020
Subin Jeon*, Seonghyeon Nam*, Seoung Wug Oh*, and Seon Joo Kim**
*: Yonsei University
**: Yonsei University, Facebook

Links:
Project page: http://snam.ml/research/panet
Arxiv: https://arxiv.org/abs/2007.08786
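As a rough illustration of the retrieval idea described above (not the authors' actual architecture; all function names, shapes, and the temperature parameter here are hypothetical), the dense keypoint attention can be sketched as a softmax over source-driving similarities followed by a weighted reassembly of source appearance features. Multiple source views are handled by simply concatenating their keys and values:

```python
import numpy as np

def attention_reassemble(driving_q, source_k, source_v, temperature=1.0):
    """Retrieve source appearance via dense keypoint attention (sketch).

    driving_q: (Nd, C) query features from driving-frame keypoints
    source_k:  (Ns, C) key features from source-image keypoints
    source_v:  (Ns, D) appearance features at source keypoints
    Returns (Nd, D): appearance reassembled for each driving keypoint.
    """
    # Dense similarity between every driving and source keypoint
    sim = driving_q @ source_k.T / temperature        # (Nd, Ns)
    # Softmax over source positions -> attention weights
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Non-local weighted reassembly of source appearance
    return w @ source_v                               # (Nd, D)

# Multiple source observations (e.g., front and side views) can be
# concatenated along the source axis before attention:
rng = np.random.default_rng(0)
k1, v1 = rng.standard_normal((5, 8)), rng.standard_normal((5, 16))
k2, v2 = rng.standard_normal((5, 8)), rng.standard_normal((5, 16))
q = rng.standard_normal((3, 8))
out = attention_reassemble(q, np.vstack([k1, k2]), np.vstack([v1, v2]))
print(out.shape)  # (3, 16)
```

Because the attention is computed non-locally over all source positions, each output location can draw appearance from anywhere in any source view, which is the property the abstract contrasts with local warping.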
