
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks

2015-05-30
Kai Sheng Tai, Richard Socher, Christopher D. Manning

Abstract

Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
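To make the tree-structured generalization concrete, here is a minimal NumPy sketch of the Child-Sum Tree-LSTM node update that the paper introduces. The gate structure (children's hidden states summed into a single vector, one forget gate per child) follows the paper; the class name, dimensions, and random initialization are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ChildSumTreeLSTMCell:
    """One node update of a Child-Sum Tree-LSTM (Tai et al., 2015).

    Illustrative sketch: dimensions and initialization are assumptions;
    the gate equations mirror the paper's Child-Sum variant.
    """

    def __init__(self, input_dim, hidden_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        d, m = input_dim, hidden_dim
        # One (W, U, b) triple per gate: input, forget, output, update.
        self.params = {
            g: (rng.normal(0, 0.1, (m, d)),   # W: projects the node input
                rng.normal(0, 0.1, (m, m)),   # U: projects a hidden state
                np.zeros(m))                  # b: bias
            for g in ("i", "f", "o", "u")
        }
        self.hidden_dim = m

    def _gate(self, name, x, h, act):
        W, U, b = self.params[name]
        return act(W @ x + U @ h + b)

    def forward(self, x, child_states):
        """x: input vector at this node; child_states: list of (h_k, c_k)."""
        h_children = [h for h, _ in child_states]
        c_children = [c for _, c in child_states]
        # Sum of the children's hidden states; zeros at a leaf node.
        h_tilde = (np.sum(h_children, axis=0) if h_children
                   else np.zeros(self.hidden_dim))

        i = self._gate("i", x, h_tilde, sigmoid)
        o = self._gate("o", x, h_tilde, sigmoid)
        u = self._gate("u", x, h_tilde, np.tanh)
        # One forget gate per child, conditioned on that child's own state.
        f = [self._gate("f", x, h_k, sigmoid) for h_k in h_children]

        c = i * u + sum(f_k * c_k for f_k, c_k in zip(f, c_children))
        h = o * np.tanh(c)
        return h, c
```

At a leaf, `child_states` is empty, so the sum of children's hidden states is zero and the update reduces to a standard LSTM step with no previous state. The per-child forget gates are the key departure from the linear chain: a node can selectively discard information from individual subtrees rather than from a single predecessor.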

URL

https://arxiv.org/abs/1503.00075

PDF

https://arxiv.org/pdf/1503.00075

