
CerfGAN: A Compact, Effective, Robust, and Fast Model for Unsupervised Multi-Domain Image-to-Image Translation

2019-01-15
Xiao Liu, Shengchuan Zhang, Hong Liu, Xin Liu, Cheng Deng, Rongrong Ji

Abstract

In this paper, we aim to solve the multi-domain image-to-image translation problem with a unified model in an unsupervised manner. The most successful work in this area is StarGAN, which works well in tasks like face attribute modulation. However, StarGAN is unable to match multiple translation mappings when encountering general translations with very diverse domain shifts. Moreover, StarGAN adopts an Encoder-Decoder-Discriminator (EDD) architecture, which is time-consuming and unstable to train. To this end, we propose a Compact, effective, robust, and fast GAN model, termed CerfGAN, to solve the above problems. At its core, CerfGAN contains a novel component, a multi-class discriminator (MCD), which gives the model an extremely powerful ability to match multiple translation mappings. To stabilize the training process, the MCD also plays the role of the encoder in CerfGAN, which saves considerable computation and memory costs. We perform extensive experiments to verify the effectiveness of the proposed method. Quantitatively, CerfGAN is demonstrated to handle a series of image-to-image translation tasks, including style transfer, season transfer, face hallucination, etc., where the input images are sampled from diverse domains. Comparisons to several recently proposed approaches demonstrate the superiority and novelty of the proposed method.
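The abstract's central idea, a multi-class discriminator (MCD) whose shared feature extractor doubles as the encoder, can be sketched in a toy form. This is an illustrative, assumption-laden sketch: the layer shapes, the single linear backbone, and the numpy forward pass are all hypothetical and are not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyMCD:
    """Toy multi-class discriminator that also serves as the encoder.

    A shared backbone maps an input to a feature vector; two small heads
    read that vector: a real/fake (adversarial) score and one logit per
    domain class. The same feature vector is reused as the encoder output
    for the decoder, so no separate encoder network is needed.
    All sizes here are hypothetical placeholders.
    """

    def __init__(self, in_dim=64, feat_dim=16, n_domains=5):
        self.W_feat = rng.standard_normal((in_dim, feat_dim)) * 0.1
        self.w_adv = rng.standard_normal(feat_dim) * 0.1              # real/fake head
        self.W_cls = rng.standard_normal((feat_dim, n_domains)) * 0.1  # domain-class head

    def encode(self, x):
        # Shared backbone with ReLU; this feature is also the "encoder" output.
        return np.maximum(x @ self.W_feat, 0.0)

    def forward(self, x):
        feat = self.encode(x)
        adv_score = feat @ self.w_adv    # scalar real/fake logit
        cls_logits = feat @ self.W_cls   # one logit per translation domain
        return feat, adv_score, cls_logits

mcd = ToyMCD()
x = rng.standard_normal(64)
feat, adv, cls = mcd.forward(x)
```

Sharing the backbone between the discriminator heads and the encoder is what the abstract credits for the savings in computation and memory relative to a separate Encoder-Decoder-Discriminator stack.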

URL

http://arxiv.org/abs/1805.10871

PDF

http://arxiv.org/pdf/1805.10871

