
Automatic Temporally Coherent Video Colorization

2019-04-21
Harrish Thasarathan, Kamyar Nazeri, Mehran Ebrahimi

Abstract

Greyscale image colorization for applications in image restoration has seen significant improvements in recent years, yet many learning-based methods struggle to effectively colorize sparse inputs. With the consistent growth of the anime industry, the ability to colorize sparse inputs such as line art can significantly reduce cost and redundant work for production studios by eliminating the in-between frame colorization process. Simply applying existing methods yields inconsistent colors between related frames, resulting in a flicker effect in the final video. To successfully automate key areas of large-scale anime production, line-art colorization must be temporally consistent between frames. This paper proposes a method to colorize line-art frames in an adversarial setting, creating temporally coherent video for large-scale anime by improving existing image-to-image translation methods. We show that by adding an extra condition to the generator and discriminator, we can effectively create temporally consistent video sequences from anime line art. Code and models available at: https://github.com/Harry-Thasarathan/TCVC
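The "extra condition" described in the abstract amounts to feeding the previously colorized frame alongside the current line-art frame, so the generator can propagate colors across time. A minimal NumPy sketch of this input conditioning (the helper name and shapes are hypothetical, not from the paper's code):

```python
import numpy as np

def conditioned_generator_input(line_art, prev_colorized):
    """Stack the current line-art frame with the previously colorized
    frame along the channel axis, forming the conditioned input that
    lets a generator keep colors consistent between adjacent frames.
    Hypothetical helper; frames are H x W x C float arrays."""
    return np.concatenate([line_art, prev_colorized], axis=-1)

# 1-channel line art + 3-channel previous RGB frame -> 4-channel input
line_art = np.zeros((256, 256, 1), dtype=np.float32)
prev_frame = np.zeros((256, 256, 3), dtype=np.float32)
x = conditioned_generator_input(line_art, prev_frame)
print(x.shape)  # (256, 256, 4)
```

The discriminator can be conditioned the same way, receiving the previous frame together with the real or generated current frame, which penalizes color flicker between frames during adversarial training.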

URL

http://arxiv.org/abs/1904.09527

PDF

http://arxiv.org/pdf/1904.09527
