Abstract
Adversarial training has been highly successful for single-image super-resolution, as it yields realistic and highly detailed results. Despite this success, current state-of-the-art methods for video super-resolution still favor simpler norms such as $L_2$ over adversarial loss functions. The averaging nature of direct vector norms as loss functions easily yields temporal smoothness and coherence, but at the cost of an undesirable lack of spatial detail in the generated images. In our work, we instead propose an adversarial training for video super-resolution that leads to temporally coherent solutions without sacrificing spatial detail. Our work focuses on novel loss formulations for video super-resolution, the power of which we demonstrate based on an established generator framework. We show that temporal adversarial learning is the key to achieving photo-realistic and temporally coherent detail. Besides the spatio-temporal discriminator, we propose a novel Ping-Pong loss that can effectively remove temporal artifacts in recurrent networks without reducing perceptual quality. Quantifying temporal coherence for video super-resolution tasks has also not been addressed previously. We propose a first set of metrics to evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirms the ranking achieved via these metrics. Overall, our method outperforms previous work by yielding more detailed images with natural temporal changes.
URL
https://arxiv.org/abs/1811.09393
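The Ping-Pong loss mentioned in the abstract is described there only at a high level. The following is a minimal, hypothetical sketch of such a bidirectional consistency term, assuming a recurrent generator that is run over the input frames forward in time and then backward again, so that every frame has both a forward-pass and a backward-pass estimate. The function name, framework (PyTorch), and exact weighting are illustrative assumptions, not the paper's implementation.

```python
import torch

def ping_pong_loss(forward_frames, backward_frames):
    """Hypothetical bidirectional consistency penalty (sketch only).

    forward_frames:  list of generated frames g_1 .. g_T from a forward pass.
    backward_frames: list of frames produced by continuing the recurrence
                     back over the same inputs in reverse order, i.e.
                     estimates for frames T .. 1 in processing order.

    Recurrent artifacts that accumulate in one direction are unlikely to be
    reproduced identically in the other, so penalizing the per-frame
    difference between the two passes discourages long-term drift.
    """
    assert len(forward_frames) == len(backward_frames)
    loss = 0.0
    # Align frame t of the forward pass with frame t of the backward pass
    # (backward_frames is stored in reverse temporal order).
    for f, b in zip(forward_frames, reversed(backward_frames)):
        loss = loss + torch.mean((f - b) ** 2)
    return loss / len(forward_frames)
```

In practice such a term would be added, with a small weight, to the adversarial and content losses during training of the recurrent generator.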