Blind Video Temporal Consistency via Deep Video Prior (NeurIPS 2020)
Applying image processing algorithms independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a pair of original and processed videos rather than on a large dataset. Unlike most previous methods, which enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a single video with the Deep Video Prior. Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach outperforms state-of-the-art methods on blind video temporal consistency. Our source code is publicly available at github.com/ChenyangLEI/deep-video-prior.
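The core idea can be illustrated with a toy sketch: because one shared model is fit jointly on all (original, processed) frame pairs of a single video, frame-independent jitter in the processed frames averages out, while the content-dependent mapping is retained. The sketch below is a minimal illustration of this principle, not the paper's implementation; it stands in a shared affine map for the convolutional network, and the synthetic video, jitter model, and hyperparameters are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "video": T static frames (a hypothetical stand-in for real frames).
T, H, W = 30, 8, 8
base = rng.random((H, W))
frames = np.stack([base for _ in range(T)])  # original video I_t

# "Processed" frames: a per-frame image operator plus frame-independent
# jitter, mimicking the flicker caused by applying an image algorithm
# to each frame independently.
jitter = rng.normal(0.0, 0.2, size=T)
processed = np.stack([1.5 * base + 0.1 + j for j in jitter])  # P_t

# Single-video fitting in the spirit of the Deep Video Prior: one shared
# model g(I_t) = w * I_t + b is trained on all frame pairs jointly.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    pred = w * frames + b
    err = pred - processed
    w -= lr * np.mean(err * frames)  # gradient step on the shared weight
    b -= lr * np.mean(err)           # gradient step on the shared bias

outputs = w * frames + b

# Temporal flicker: std over time of the mean frame intensity.
flicker_in = np.std(processed.mean(axis=(1, 2)))
flicker_out = np.std(outputs.mean(axis=(1, 2)))
print(flicker_out < 0.1 * flicker_in)  # the shared model averages out jitter
```

Because the model is shared across frames, identical inputs map to identical outputs, so the per-frame jitter cannot be reproduced and the fitted video is temporally consistent by construction.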