Abstract
We propose a novel dialogue modeling framework that learns binary hashcodes as compressed text representations, allowing for efficient similarity search; unlike traditional deep learning models, it handles relatively small datasets well while also scaling to large ones. We also derive a novel lower bound on mutual information, or infogain, used as a model-selection criterion that favors representations with better alignment between the utterances of the collaborative dialogue participants, as well as higher predictability of the generated responses. As demonstrated on three real-life datasets, the proposed approach significantly outperforms several state-of-the-art neural-network-based dialogue systems, both in computational efficiency, reducing training time from days or weeks to hours, and in response quality, achieving an order-of-magnitude improvement over competitors in the frequency of being chosen as the best model by human evaluators.
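To make the two key ideas concrete, the sketch below (not taken from the paper; all names and the plug-in estimator are illustrative assumptions) shows how binary hashcodes support fast response retrieval via Hamming-distance search, and how an empirical mutual-information estimate between paired context/response codes could serve as a stand-in for an infogain-style model-selection criterion:

```python
# A minimal sketch, assuming hashcodes are fixed-length 0/1 numpy arrays.
# This is NOT the paper's implementation; the function names, the toy data,
# and the plug-in MI estimator are illustrative assumptions only.
import numpy as np
from collections import Counter

def hamming_search(query_code, code_db, k=5):
    """Return indices of the k codes in code_db closest to query_code
    in Hamming distance."""
    dists = np.count_nonzero(code_db != query_code, axis=1)
    return np.argsort(dists)[:k]

def plugin_mutual_information(x_codes, y_codes):
    """Empirical (plug-in) mutual information, in bits, between two
    paired sequences of codes, treating each whole code as one symbol."""
    n = len(x_codes)
    xs = [tuple(c) for c in x_codes]
    ys = [tuple(c) for c in y_codes]
    p_x, p_y = Counter(xs), Counter(ys)
    p_xy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in p_xy.items():
        p = count / n
        mi += p * np.log2(p * n * n / (p_x[x] * p_y[y]))
    return mi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 16-bit hashcodes for 100 dialogue contexts and their responses.
    contexts = rng.integers(0, 2, size=(100, 16))
    responses = rng.integers(0, 2, size=(100, 16))
    # Retrieve candidate responses for a new context code.
    query = rng.integers(0, 2, size=16)
    print("nearest contexts:", hamming_search(query, contexts, k=3))
    # A higher score between paired codes would favor one hashing model
    # over another under an infogain-style selection criterion.
    print("infogain proxy (bits):", plugin_mutual_information(contexts, responses))
```

Note that the paper derives a lower bound on mutual information rather than the naive plug-in estimate used here; the sketch only illustrates the retrieval-by-hashcode and MI-as-selection-criterion pattern.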
URL
http://arxiv.org/abs/1804.10188