papers AI Learner

Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization

2017-10-03
Wei Zhang, Bowen Zhou

Abstract

Learning to remember long sequences remains a challenging task for recurrent neural networks. Register memory and attention mechanisms have both been proposed to resolve the issue, but either incur a high computational cost to retain memory differentiability, or bias RNN representation learning toward encoding shorter local contexts rather than long sequences. Associative memory, which studies the compression of multiple patterns into a fixed-size memory, has rarely been considered in recent years. Although some recent work introduces associative memory into RNNs and mimics the energy decay process of Hopfield nets, it inherits the shortcomings of rule-based memory updates, and its memory capacity is limited. This paper proposes a method to learn the memory update rule jointly with the task objective, improving memory capacity for remembering long sequences. We also propose an architecture that uses multiple such associative memories for more complex input encoding. We observed some interesting behavior compared to other RNN architectures on several well-studied sequence learning tasks.
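To make the abstract's starting point concrete, here is a minimal sketch of the kind of fixed-size, outer-product associative memory (in the spirit of Hopfield-style storage) that the paper builds on. This is a generic illustration, not the paper's method: the dimensionality, the scalar `gate`, and the helper names are assumptions for the example; the paper's contribution is to *learn* the update rule jointly with the task objective rather than fixing it by hand as done here.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # pattern dimensionality (assumed for illustration)

# Fixed-size associative memory: multiple key/value patterns are
# superimposed in one d x d matrix via outer products.
M = np.zeros((d, d))

def store(M, key, value, gate=1.0):
    # Classic rule-based update: add the outer product value * key^T.
    # The paper proposes learning this update (e.g. the gating) jointly
    # with the task objective instead of using a fixed rule like this.
    return M + gate * np.outer(value, key)

def retrieve(M, key):
    # Recall by matrix-vector product; the result is only approximate
    # once several patterns crowd the same fixed-size memory.
    return M @ key

# Store a few random unit-norm keys with random values.
keys = [k / np.linalg.norm(k) for k in rng.standard_normal((3, d))]
vals = list(rng.standard_normal((3, d)))
for k, v in zip(keys, vals):
    M = store(M, k, v)

# Retrieval recovers the value up to cross-talk from the other patterns.
recalled = retrieve(M, keys[0])
err = np.linalg.norm(recalled - vals[0]) / np.linalg.norm(vals[0])
print(f"relative recall error: {err:.3f}")
```

The cross-talk term is what limits capacity: each additional stored pattern adds interference proportional to the overlap between keys, which is the limitation the learned update rule is meant to mitigate.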

URL

https://arxiv.org/abs/1709.06493

PDF

https://arxiv.org/pdf/1709.06493

