
Domain Adaptation for Neural Networks by Parameter Augmentation

2016-07-01
Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka

Abstract

We propose a simple domain adaptation method for neural networks in a supervised setting. Supervised domain adaptation improves generalization performance on the target domain by using the source domain dataset, assuming that both datasets are labeled. Recently, recurrent neural networks have been shown to be successful on a variety of NLP tasks such as caption generation; however, existing domain adaptation techniques are limited to (1) tuning the model parameters on the target dataset after training on the source dataset, or (2) designing the network with dual outputs, one for the source domain and the other for the target domain. Reformulating the idea of the domain adaptation technique proposed by Daume (2007), we propose a simple domain adaptation method that can be applied to neural networks trained with a cross-entropy loss. On captioning datasets, we show performance improvements over other domain adaptation methods.
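
The paper reformulates the feature augmentation technique of Daume (2007) for neural networks. As background, here is a minimal sketch of Daume's original idea (not the paper's neural parameter-augmentation formulation): each feature vector is copied into a shared block plus a domain-specific block, so a linear model can learn shared and domain-specific weights jointly. The function name and vector are illustrative.

```python
import numpy as np

def augment_features(x, domain):
    """Daume (2007) feature augmentation: map a feature vector x into
    three blocks -- shared, source-specific, target-specific.
    Source examples fill the shared and source blocks; target examples
    fill the shared and target blocks; the remaining block stays zero."""
    zeros = np.zeros_like(x)
    if domain == "source":
        return np.concatenate([x, x, zeros])
    if domain == "target":
        return np.concatenate([x, zeros, x])
    raise ValueError("domain must be 'source' or 'target'")

# Example with a 3-dimensional feature vector from each domain.
x = np.array([1.0, 2.0, 3.0])
print(augment_features(x, "source"))  # [1. 2. 3. 1. 2. 3. 0. 0. 0.]
print(augment_features(x, "target"))  # [1. 2. 3. 0. 0. 0. 1. 2. 3.]
```

Training a single model on the augmented vectors lets the shared block capture regularities common to both domains, while the domain-specific blocks absorb what differs; the paper carries this decomposition over to the parameters of networks trained with a cross-entropy loss.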


URL

https://arxiv.org/abs/1607.00410

PDF

https://arxiv.org/pdf/1607.00410

