Abstract
Traditional tweet classification models for crisis response focus on convolutional layers and domain-specific word embeddings. In this paper, we study the application of different neural networks with general-purpose and domain-specific word embeddings to investigate their ability to improve the performance of tweet classification models. We evaluate four tweet classification models on the CrisisNLP dataset and obtain comparable results, which indicates that general-purpose word embeddings such as GloVe can be used instead of domain-specific word embeddings, especially with Bi-LSTM, which achieved the highest performance with an F1 score of 62.04%.
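To illustrate the kind of architecture the abstract describes, below is a minimal sketch (not the authors' code) of a Bi-LSTM tweet classifier initialized with general-purpose pretrained embeddings such as GloVe. The class count, hidden size, and the stand-in embedding matrix are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a Bi-LSTM classifier over pretrained (e.g., GloVe) embeddings.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, embedding_matrix, hidden_size=128, num_classes=8):
        super().__init__()
        vocab_size, embed_dim = embedding_matrix.shape
        # Initialize the embedding layer from pretrained vectors (assumed to be GloVe here).
        self.embedding = nn.Embedding.from_pretrained(
            torch.as_tensor(embedding_matrix, dtype=torch.float), freeze=False
        )
        # Bidirectional LSTM over the token sequence.
        self.lstm = nn.LSTM(embed_dim, hidden_size, batch_first=True,
                            bidirectional=True)
        # Map the concatenated forward/backward final states to class logits.
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (2, batch, hidden_size)
        features = torch.cat([hidden[0], hidden[1]], dim=1)
        return self.fc(features)                  # (batch, num_classes)

if __name__ == "__main__":
    import numpy as np
    # Random stand-in for a GloVe embedding matrix (vocab_size x embed_dim).
    fake_glove = np.random.randn(5000, 100)
    model = BiLSTMClassifier(fake_glove)
    logits = model(torch.randint(0, 5000, (4, 30)))  # batch of 4 tweets, 30 tokens each
    print(logits.shape)                              # torch.Size([4, 8])
```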
URL
http://arxiv.org/abs/1903.11024