Blind Video Temporal Consistency via Deep Video Prior [NeurIPS 2020]

Nov 08, 2020

Abstract: Applying image processing algorithms independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a pair of original and processed videos rather than on a large dataset. Unlike most previous methods that enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior. Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach achieves superior performance over state-of-the-art methods on blind video temporal consistency. Our source code is publicly available at github.com/ChenyangLEI/deep-video-prior.

Authors: Chenyang Lei, Yazhou Xing, Qifeng Chen
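The multimodal inconsistency mentioned in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: here a single scalar stands in for a pixel whose processed value flickers between two modes across frames, and the "network output" is reduced to one scalar estimate. Plain L2 fitting averages the two modes together, while an iteratively reweighted loss (in the spirit of the paper's iteratively reweighted training strategy) commits to one mode.

```python
# Toy analogue of iteratively reweighted training for multimodal targets.
# Assumption: one scalar "pixel" whose processed values flicker between the
# modes +1 and -1 across frames; we fit a single scalar estimate to them.

def l2_fit(targets):
    """Plain L2 fitting: the minimizer is the mean, which blurs both modes."""
    return sum(targets) / len(targets)

def irt_fit(targets, iters=30, eps=1e-3, init=0.1):
    """Iteratively reweighted fitting: down-weight frames far from the
    current estimate so the fit commits to a single mode."""
    x = init
    for _ in range(iters):
        # Weight each frame inversely by its squared residual.
        weights = [1.0 / ((t - x) ** 2 + eps) for t in targets]
        # Closed-form minimizer of the weighted L2 objective: weighted mean.
        x = sum(w * t for w, t in zip(weights, targets)) / sum(weights)
    return x

# Processed frames flicker between two modes (multimodal inconsistency).
frames = [1.0, -1.0] * 10

print(l2_fit(frames))   # averages the two modes, near 0
print(irt_fit(frames))  # commits to the mode nearest the initialization
```

In the paper this idea operates on full frames through a convolutional network; the scalar version only shows why reweighting resolves flicker between modes instead of averaging it.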
