
What is the Role of Recurrent Neural Networks in an Image Caption Generator?

2017-08-25
Marc Tanti, Albert Gatt, Kenneth P. Camilleri

Abstract

In neural image captioning systems, a recurrent neural network (RNN) is typically viewed as the primary 'generation' component. This view suggests that the image features should be 'injected' into the RNN. This is in fact the dominant view in the literature. Alternatively, the RNN can instead be viewed as only encoding the previously generated words. This view suggests that the RNN should only be used to encode linguistic features and that only the final representation should be 'merged' with the image features at a later stage. This paper compares these two architectures. We find that, in general, late merging outperforms injection, suggesting that RNNs are better viewed as encoders, rather than generators.
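To make the contrast concrete, below is a minimal sketch (not the authors' code) of the two layouts in PyTorch. The LSTM cell, the layer sizes, and the choice of conditioning the RNN via its initial state (one of several possible 'inject' variants) are illustrative assumptions.

```python
# Illustrative sketch of 'inject' vs. 'merge' caption-generator layouts.
# Dimensions and the LSTM cell are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, IMG_DIM = 10000, 256, 512, 2048


class InitInjectCaptioner(nn.Module):
    """'Inject' view: the image conditions the RNN itself (here via its initial state)."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.img_to_state = nn.Linear(IMG_DIM, HIDDEN_DIM)
        self.rnn = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, image_feats, word_ids):
        h0 = torch.tanh(self.img_to_state(image_feats)).unsqueeze(0)  # (1, B, H)
        c0 = torch.zeros_like(h0)
        rnn_out, _ = self.rnn(self.embed(word_ids), (h0, c0))
        return self.out(rnn_out)  # next-word scores at every step


class MergeCaptioner(nn.Module):
    """'Merge' view: the RNN only encodes the word prefix; the image is combined afterwards."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.img_proj = nn.Linear(IMG_DIM, HIDDEN_DIM)
        self.out = nn.Linear(HIDDEN_DIM * 2, VOCAB_SIZE)

    def forward(self, image_feats, word_ids):
        rnn_out, _ = self.rnn(self.embed(word_ids))       # purely linguistic encoding
        img = self.img_proj(image_feats).unsqueeze(1)     # (B, 1, H)
        img = img.expand(-1, rnn_out.size(1), -1)         # repeat per time step
        merged = torch.cat([rnn_out, img], dim=-1)        # late merge, outside the RNN
        return self.out(merged)


if __name__ == "__main__":
    feats = torch.randn(2, IMG_DIM)                 # e.g. CNN features for 2 images
    prefix = torch.randint(0, VOCAB_SIZE, (2, 5))   # 5 previously generated word ids
    print(InitInjectCaptioner()(feats, prefix).shape)  # torch.Size([2, 5, 10000])
    print(MergeCaptioner()(feats, prefix).shape)       # torch.Size([2, 5, 10000])
```

In the inject layout the image information flows through the RNN at every step, whereas in the merge layout the RNN's output is a purely linguistic prefix encoding that is only combined with the image features just before the word-prediction layer.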

URL

https://arxiv.org/abs/1708.02043

PDF

https://arxiv.org/pdf/1708.02043

