
Bootstrapping Generators from Noisy Data

2019-03-18
Laura Perez-Beltrachini, Mirella Lapata

Abstract

A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and associated texts. In this paper we aim to bootstrap generators from large scale datasets where the data (e.g., DBPedia facts) and related texts (e.g., Wikipedia abstracts) are loosely aligned. We tackle this challenging task by introducing a special-purpose content selection mechanism. We use multi-instance learning to automatically discover correspondences between data and text pairs and show how these can be used to enhance the content signal while training an encoder-decoder architecture. Experimental results demonstrate that models trained with content-specific objectives improve upon a vanilla encoder-decoder which solely relies on soft attention.
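To make the multi-instance learning idea concrete, the sketch below shows one possible (hypothetical, simplified) reading of it: each loosely aligned pair is treated as a bag containing a set of facts and the sentences of the related text, a fact counts as supported if at least one sentence matches it well, and the bag-level decision aggregates sentence-level scores with a max. The word-overlap scorer, function names, and threshold are illustrative assumptions, not the authors' formulation, which trains these correspondences jointly with an encoder-decoder.

```python
# Illustrative MIL-style content selection sketch (hypothetical, not the
# paper's exact model). A fact is kept if its best-matching sentence in
# the bag scores above a threshold (max aggregation over instances).

def tokens(text):
    """Lowercased word tokens; a stand-in for real preprocessing."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def overlap_score(fact, sentence):
    """Word-overlap score between a fact's value and a sentence.
    A learned similarity (e.g., embeddings) would replace this."""
    fact_toks = tokens(fact[1])
    if not fact_toks:
        return 0.0
    return len(fact_toks & tokens(sentence)) / len(fact_toks)

def select_content(facts, sentences, threshold=0.5):
    """Keep facts whose best-matching sentence clears the threshold."""
    selected = []
    for fact in facts:
        best = max(overlap_score(fact, s) for s in sentences)
        if best >= threshold:
            selected.append((fact, best))
    return selected

if __name__ == "__main__":
    # Toy DBpedia-style facts and Wikipedia-style abstract sentences.
    facts = [
        ("birthPlace", "Edinburgh Scotland"),
        ("occupation", "novelist"),
        ("spouse", "unknown person"),  # noisy fact not covered by the text
    ]
    sentences = [
        "She was born in Edinburgh, Scotland.",
        "She worked as a novelist and essayist.",
    ]
    for fact, score in select_content(facts, sentences):
        print(f"{fact[0]} = {fact[1]!r} supported (score {score:.2f})")
```

In this toy run only the two facts actually mentioned in the text survive selection, which is the kind of content signal the paper uses to supplement soft attention during training.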

URL

http://arxiv.org/abs/1804.06385

PDF

http://arxiv.org/pdf/1804.06385

