Abstract
We describe the setting and results of the ConvAI2 NeurIPS competition, which aims to further the state of the art in open-domain chatbots. Some key takeaways from the competition are: (i) pretrained Transformer variants are currently the best-performing models on this task, (ii) but to improve performance on multi-turn conversations with humans, future systems must go beyond single-word metrics like perplexity and measure performance across sequences of utterances (conversations) – in terms of repetition, consistency, and the balance of dialogue acts (e.g. how many questions are asked vs. answered).
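To make the idea of sequence-level metrics concrete, here is a minimal, hypothetical Python sketch of the kind of conversation-level statistics the abstract alludes to (question balance and verbatim repetition). The function name, utterance format, and heuristics (e.g. treating any utterance containing "?" as a question) are assumptions for illustration, not the competition's actual evaluation code.

```python
from collections import Counter

def conversation_stats(bot_utterances, human_utterances):
    """Compute simple sequence-level metrics over a whole conversation."""
    # Balance of dialogue acts: questions the bot asks vs. questions it receives.
    questions_asked = sum("?" in u for u in bot_utterances)
    questions_received = sum("?" in u for u in human_utterances)

    # Repetition: fraction of bot utterances that verbatim repeat an earlier one.
    counts = Counter(u.strip().lower() for u in bot_utterances)
    repeated = sum(c - 1 for c in counts.values() if c > 1)

    return {
        "questions_asked": questions_asked,
        "questions_received": questions_received,
        "repetition_rate": repeated / max(len(bot_utterances), 1),
    }

# Toy example: the bot repeats itself once and asks one question.
bot = ["Hi! What do you do for fun?", "I love hiking.", "I love hiking."]
human = ["Hey! I play guitar. Do you have hobbies?", "Nice, where do you hike?", "Cool."]
print(conversation_stats(bot, human))
```

Metrics of this kind operate on whole conversations rather than individual tokens, which is the distinction the abstract draws against perplexity.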
URL
http://arxiv.org/abs/1902.00098