
Syntactic Recurrent Neural Network for Authorship Attribution

2019-02-26
Fereshteh Jafariakinabad, Sansiri Tarnpradab, Kien A. Hua

Abstract

Writing style is a combination of consistent decisions at different levels of language production, including the lexical, syntactic, and structural levels, associated with a specific author (or group of authors). While lexical models have been widely explored for style-based text classification, their reliance on content makes them less scalable on heterogeneous data spanning many topics. Syntactic models, by contrast, are content-independent and therefore more robust to topic variance. In this paper, we introduce a syntactic recurrent neural network that encodes the syntactic patterns of a document in a hierarchical structure. The model first learns syntactic representations of sentences from their sequences of part-of-speech tags. For this purpose, we exploit both convolutional filters and long short-term memory networks to capture the short-term and long-term dependencies among part-of-speech tags within a sentence. The syntactic sentence representations are then aggregated into a document representation using a recurrent neural network. Our experimental results on the PAN 2012 authorship attribution dataset show that the syntactic recurrent neural network outperforms a lexical model with an identical architecture by approximately 14% in accuracy.
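The hierarchical pipeline the abstract describes (POS-tag sequences encoded per sentence by convolutional filters and an LSTM, then sentence vectors aggregated into a document vector by a second recurrent network) can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the layer sizes, tag vocabulary, and the particular way the CNN and LSTM are chained are assumptions.

```python
import torch
import torch.nn as nn

class SyntacticRNN(nn.Module):
    """Hypothetical sketch of the hierarchical syntactic encoder:
    POS-tag ids -> sentence vectors (CNN + LSTM) -> document vector
    (RNN over sentences) -> author logits. Dimensions are illustrative."""

    def __init__(self, n_pos_tags=50, emb_dim=32, hidden=64, n_authors=3):
        super().__init__()
        self.embed = nn.Embedding(n_pos_tags, emb_dim)
        # Convolutional filters capture short-range POS n-gram patterns.
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        # The LSTM captures long-range dependencies among POS tags.
        self.sent_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # A second recurrent layer aggregates sentence vectors
        # into a document representation.
        self.doc_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_authors)

    def forward(self, pos_ids):
        # pos_ids: (docs, sentences, tags_per_sentence) integer POS-tag ids
        d, s, t = pos_ids.shape
        x = self.embed(pos_ids.view(d * s, t))            # (d*s, t, emb)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (d*s, t, hidden)
        _, (h, _) = self.sent_lstm(x)                     # (1, d*s, hidden)
        sent_vecs = h.squeeze(0).view(d, s, -1)           # (d, s, hidden)
        _, (h, _) = self.doc_lstm(sent_vecs)              # (1, d, hidden)
        return self.classifier(h.squeeze(0))              # (d, n_authors)

model = SyntacticRNN()
# Two documents, four sentences each, ten POS tags per sentence.
logits = model(torch.randint(0, 50, (2, 4, 10)))
print(logits.shape)  # torch.Size([2, 3])
```

Because the inputs are part-of-speech ids rather than word ids, the model never sees topical vocabulary, which is the content-independence property the abstract argues makes syntactic models robust to topic variance.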

URL

http://arxiv.org/abs/1902.09723

PDF

http://arxiv.org/pdf/1902.09723

