
Implicit Pairs for Boosting Unpaired Image-to-Image Translation

2019-04-15
Yiftach Ginger, Dov Danon, Hadar Averbuch-Elor, Daniel Cohen-Or

Abstract

In image-to-image translation, the goal is to learn a mapping from one image domain to another. Supervised approaches learn the mapping from paired samples. However, collecting large sets of image pairs is often prohibitively expensive or infeasible. In our work, we show that even training on pairs implicitly boosts the performance of unsupervised techniques by over 14% across several measurements. We illustrate that injecting implicit pairs into unpaired sets strengthens the mapping between the two domains and improves the compatibility of their distributions. Furthermore, we show that for this purpose the implicit pairs can be pseudo-pairs, i.e., paired samples that only approximate a real pair. We demonstrate the effect of the approximated implicit samples on image-to-image translation problems where such pseudo-pairs can be synthesized in one direction but not in the other. We further show that pseudo-pairs are significantly more effective when used as implicit pairs in an unpaired setting than when used explicitly in a paired setting.
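To make the idea of "implicit pairs" concrete, the following is a minimal sketch (not the authors' code) of how pseudo-pairs might be mixed into an otherwise unpaired training set. The function name `synthesize_b_from_a` and the `pair_fraction` parameter are assumptions standing in for any one-directional synthesis procedure and mixing ratio; the key point is that the pseudo-pairs are fed to the model exactly like unpaired samples, with no paired loss term.

```python
import random

def build_mixed_dataset(domain_a, domain_b, synthesize_b_from_a, pair_fraction=0.3):
    """Return (a, b) tuples for unpaired training.

    Most tuples combine unrelated A/B samples; a fraction are implicit
    pseudo-pairs (a, G(a)), where G is a one-directional synthesizer.
    The trainer treats both kinds of tuples identically.
    """
    mixed = []
    for a in domain_a:
        if random.random() < pair_fraction:
            # Hypothetical one-directional synthesizer, e.g. a renderer or
            # pretrained model that approximates a's counterpart in domain B.
            b = synthesize_b_from_a(a)
        else:
            # Ordinary unpaired sample drawn at random from domain B.
            b = random.choice(domain_b)
        mixed.append((a, b))
    random.shuffle(mixed)
    return mixed
```

The design choice illustrated here is that the pairing is never made explicit to the loss function; the model only benefits from the pseudo-pairs through the improved compatibility of the two sample distributions.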

URL

http://arxiv.org/abs/1904.06913

PDF

http://arxiv.org/pdf/1904.06913

