
An Evaluation of Neural Machine Translation Models on Historical Spelling Normalization

2018-08-04
Gongbo Tang, Fabienne Cap, Eva Pettersson, Joakim Nivre

Abstract

In this paper, we apply different NMT models to the problem of historical spelling normalization for five languages: English, German, Hungarian, Icelandic, and Swedish. The NMT models operate at different levels, use different attention mechanisms, and have different neural network architectures. Our results show that NMT models are much better than SMT models in terms of character error rate. Vanilla RNNs are competitive with GRUs/LSTMs in historical spelling normalization. Transformer models perform better only when provided with more training data. We also find that subword-level models with a small subword vocabulary are better than character-level models for low-resource languages. In addition, we propose a hybrid method which further improves the performance of historical spelling normalization.
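The abstract evaluates models by character error rate (CER). Below is a minimal sketch of how CER is typically computed: Levenshtein edit distance between the predicted and reference normalized spellings, divided by the total number of reference characters. This follows the standard definition and is not necessarily the authors' exact evaluation script.

# Minimal CER sketch (standard definition; illustrative only).

def edit_distance(hyp: str, ref: str) -> int:
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        curr = [i]
        for j, r in enumerate(ref, start=1):
            cost = 0 if h == r else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(hypotheses, references) -> float:
    """Corpus-level CER: total edit distance over total reference characters."""
    total_edits = sum(edit_distance(h, r) for h, r in zip(hypotheses, references))
    total_chars = sum(len(r) for r in references)
    return total_edits / total_chars

# Example: normalizing a historical spelling ("vnto" -> "unto")
print(cer(["unto"], ["unto"]))   # 0.0
print(cer(["vnto"], ["unto"]))   # 0.25 (one substitution over four characters)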

URL

https://arxiv.org/abs/1806.05210

PDF

https://arxiv.org/pdf/1806.05210
