
Judge the Judges: A Large-Scale Evaluation Study of Neural Language Models for Online Review Generation

2019-01-02
Cristina Garbacea, Samuel Carton, Shiyan Yan, Qiaozhu Mei

Abstract

Recent advances in deep learning have resulted in a resurgence in the popularity of natural language generation (NLG). Many deep learning-based models, including recurrent neural networks and generative adversarial networks, have been proposed and applied to generating various types of text. Despite the rapid development of these methods, how to better evaluate the quality of natural language generators remains a significant challenge. We conduct an in-depth empirical study to evaluate the existing evaluation methods for natural language generation. We compare human evaluators with a variety of automated evaluation procedures, including discriminative evaluators that measure how well the generated text can be distinguished from human-written text, as well as text overlap metrics that measure how similar the generated text is to human-written references. We measure to what extent these different evaluators agree on the ranking of a dozen state-of-the-art generators for online product reviews. We find that human evaluators do not correlate well with discriminative evaluators, raising the bigger question of whether adversarial accuracy is the correct objective for natural language generation. In general, distinguishing machine-generated text is a challenging task even for human evaluators, and their decisions tend to correlate better with text overlap metrics. We also find that diversity is an intriguing metric that is indicative of the assessments of different evaluators.
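To make the two automated evaluator families mentioned in the abstract concrete, here is a minimal Python sketch (not taken from the paper): a text overlap metric computed as BLEU against human-written references, and a simple discriminative evaluator that scores how easily a classifier separates generated from human-written reviews. The TF-IDF features, logistic regression classifier, and function names are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact setup) of two
# automated evaluator families: text overlap metrics and discriminative evaluators.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def overlap_score(generated: str, references: list[str]) -> float:
    """Text overlap metric: BLEU of one generated review against human references."""
    refs = [r.split() for r in references]
    return sentence_bleu(refs, generated.split(),
                         smoothing_function=SmoothingFunction().method1)


def discriminative_accuracy(human_texts: list[str], generated_texts: list[str]) -> float:
    """Discriminative evaluator: cross-validated accuracy of a simple classifier
    at telling machine-generated reviews from human-written ones.
    Higher accuracy means the generator is easier to distinguish."""
    texts = human_texts + generated_texts
    labels = [0] * len(human_texts) + [1] * len(generated_texts)
    features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5).mean()
```

Under this sketch, a generator that scores high on `overlap_score` produces text similar to human references, while a low `discriminative_accuracy` (near 0.5) suggests its output is hard to tell apart from human-written reviews; the paper's finding is that these signals, along with human judgments, do not always rank generators the same way.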

URL

https://arxiv.org/abs/1901.00398

PDF

https://arxiv.org/pdf/1901.00398

