
Amortized Context Vector Inference for Sequence-to-Sequence Networks

2019-01-04
Kyriacos Tolias, Ioannis Kourouklides, Sotirios Chatzis

Abstract

Neural attention (NA) has become a key component of sequence-to-sequence models that yield state-of-the-art performance on tasks as hard as abstractive document summarization (ADS) and video captioning (VC). NA mechanisms infer context vectors, which are weighted sums of deterministic input sequence encodings, adaptively computed over long temporal horizons. Inspired by recent work in the field of amortized variational inference (AVI), in this work we treat the context vectors generated by soft-attention (SA) models as latent variables, with approximate finite mixture model posteriors inferred via AVI. We posit that this formulation may yield stronger generalization capacity, in line with the outcomes of existing applications of AVI to deep networks. To illustrate our method, we implement it and experimentally evaluate it on challenging ADS, VC, and machine translation (MT) benchmarks, demonstrating improved effectiveness over state-of-the-art alternatives.
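
To make the abstract's terminology concrete, below is a minimal NumPy sketch of the deterministic soft-attention (SA) context vector it refers to. The dot-product scoring function and all names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_attention_context(encoder_states, decoder_state):
    """Standard SA context vector: a weighted sum of encoder states,
    with weights given by a softmax over alignment scores.

    encoder_states: (T, d) array of input sequence encodings
    decoder_state:  (d,) current decoder hidden state
    """
    # Dot-product alignment scores (one common choice; the abstract
    # does not specify a scoring function).
    scores = encoder_states @ decoder_state            # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax attention weights
    # Context vector: a deterministic weighted sum of the encodings.
    return weights @ encoder_states                    # (d,)
```

The paper's proposal replaces this deterministic vector with a latent variable whose approximate posterior is a finite mixture, inferred via AVI. The sketch below only illustrates the sampling mechanics of such a mixture posterior; the mixture parameters would come from a learned amortized inference network, which is stubbed out here with random placeholders rather than the paper's actual parameterization.

```python
def sample_mixture_context(base_context, K=3, rng=None):
    """Illustrative draw of a context vector from a K-component
    Gaussian mixture posterior (placeholder parameters only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = base_context.shape[0]
    # Placeholder amortized outputs: mixture weights, means, log-variances.
    logits = rng.normal(size=K)
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                                     # mixture weights
    means = base_context + 0.1 * rng.normal(size=(K, d))
    log_var = -2.0 * np.ones((K, d))
    # Pick a component, then take a reparameterized Gaussian sample from
    # it (the discrete choice itself is not reparameterized here).
    k = rng.choice(K, p=pi)
    eps = rng.normal(size=d)
    return means[k] + np.exp(0.5 * log_var[k]) * eps   # stochastic context (d,)
```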

URL

https://arxiv.org/abs/1805.09039

PDF

https://arxiv.org/pdf/1805.09039

