Abstract
Popular deep domain adaptation methods have mainly focused on learning discriminative and domain-invariant features across domains. In this work, we present a novel approach inspired by human cognitive processes, in which receptive fields learned from other vision tasks are recruited to recognize new objects. First, representations of the source and target domains are obtained by separate variational auto-encoders (VAEs). We then construct networks with cross-grafted representation stacks (CGRS), which recruit representations at different levels learned by the sliced receptive fields and project the latent encodings of each domain into a new association space. Finally, we employ generative adversarial networks (GANs) to pull the associations of the target domain toward those of the source domain, mapping them onto the known label space. The adaptation process thus comprises three phases: information encoding, association generation, and label alignment. Experimental results demonstrate that the CGRS bridges the domain gap well, and that the proposed model outperforms the state of the art on a number of unsupervised domain adaptation scenarios.
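To make the cross-grafting idea concrete, below is a minimal PyTorch sketch of one plausible reading of the pipeline: each domain's encoder is split into a low-level and a high-level stack, the stacks are swapped across domains to form association-space encodings, and a discriminator would then align them adversarially. All module names, layer sizes, and the two-block split are illustrative assumptions, not the authors' released architecture.

import torch
import torch.nn as nn

class TwoBlockEncoder(nn.Module):
    """Encoder split into a low-level and a high-level block so that
    representation stacks can be grafted between domains (assumed split)."""
    def __init__(self, in_dim=784, hid_dim=256, z_dim=64):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.high = nn.Sequential(nn.Linear(hid_dim, z_dim), nn.ReLU())

    def forward(self, x):
        h = self.low(x)          # low-level representation stack
        return h, self.high(h)   # high-level representation stack

# Hypothetical VAE-style encoders, assumed pre-trained on each domain.
enc_src = TwoBlockEncoder()
enc_tgt = TwoBlockEncoder()

x_src = torch.randn(8, 784)  # dummy source batch
x_tgt = torch.randn(8, 784)  # dummy target batch

h_src, _ = enc_src(x_src)
h_tgt, _ = enc_tgt(x_tgt)

# Cross-grafting: route each domain's low-level stack through the *other*
# domain's high-level stack, producing two association-space encodings.
z_src_graft = enc_tgt.high(h_src)  # source lows grafted onto target highs
z_tgt_graft = enc_src.high(h_tgt)  # target lows grafted onto source highs

# A GAN discriminator (sketch) would then be trained so the two grafted
# encodings become indistinguishable, pulling target associations toward
# the source and hence toward its known label space.
disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
adv_logits = disc(torch.cat([z_src_graft, z_tgt_graft], dim=0))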
URL
http://arxiv.org/abs/1902.06328