
Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks

2015-09-23
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer

Abstract

Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.
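
The core idea lends itself to a short sketch: at each decoding step during training, flip a coin to decide whether the next input is the ground-truth previous token or the model's own prediction, and decay the probability of the former as training progresses. Below is a minimal sketch, assuming a PyTorch LSTMCell decoder; the names `embed`, `decoder`, `project`, `epsilon_at`, and `train_step` are hypothetical placeholders, while the inverse-sigmoid decay follows the schedule given in the paper.

```python
import math
import random

import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

embed = nn.Embedding(vocab_size, embed_dim)     # token -> vector
decoder = nn.LSTMCell(embed_dim, hidden_dim)    # one recurrent step
project = nn.Linear(hidden_dim, vocab_size)     # hidden state -> logits
loss_fn = nn.CrossEntropyLoss()

def epsilon_at(step, k=100.0):
    # Inverse-sigmoid decay from the paper: epsilon_i = k / (k + exp(i / k)),
    # starting near 1 (mostly true tokens) and decaying toward 0.
    return k / (k + math.exp(step / k))

def train_step(targets, epsilon):
    # targets: LongTensor of shape (seq_len, batch); targets[0] is the
    # start-of-sequence token. epsilon is the probability of feeding the
    # true previous token instead of the model's own prediction.
    batch = targets.size(1)
    h = torch.zeros(batch, hidden_dim)
    c = torch.zeros(batch, hidden_dim)
    prev = targets[0]
    loss = 0.0
    for t in range(1, targets.size(0)):
        h, c = decoder(embed(prev), (h, c))
        logits = project(h)
        loss = loss + loss_fn(logits, targets[t])
        # Per-step coin flip between the two input regimes.
        if random.random() < epsilon:
            prev = targets[t]                      # fully guided: true token
        else:
            prev = logits.argmax(dim=-1).detach()  # less guided: model token
    return loss / (targets.size(0) - 1)
```

With epsilon near 1 this reduces to standard teacher-forced training; as the schedule anneals it toward 0, the model increasingly conditions on its own outputs, matching inference-time conditions while keeping early gradients stable.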


URL

https://arxiv.org/abs/1506.03099

PDF

https://arxiv.org/pdf/1506.03099

