
Distinctive-attribute Extraction for Image Captioning

2018-07-25
Boeun Kim, Young Han Lee, Hyedong Jung, Choongsang Cho

Abstract

Image captioning, an open research problem, has evolved with the progress of deep neural networks. In this line of research, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are employed to compute image features and generate natural language descriptions. In previous works, captions with richer semantic descriptions were generated by feeding additional information into the RNNs. In this paper, we propose distinctive-attribute extraction (DaE), which explicitly emphasizes significant semantics so as to generate an accurate caption describing the overall meaning of an image and its particular situation. Specifically, the captions of training images are analyzed by term frequency-inverse document frequency (TF-IDF), and the resulting semantic information is used to train a model that extracts distinctive attributes for inferring captions. The proposed scheme is evaluated on a challenge dataset, where it improves objective performance while describing images in more detail.
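The TF-IDF step mentioned in the abstract can be illustrated with a minimal sketch. The captions and the plain-Python implementation below are hypothetical, for illustration only; the paper's actual pipeline, tokenization, and weighting details may differ.

```python
import math
from collections import Counter

# Hypothetical training captions (placeholders, not from the paper's dataset).
captions = [
    "a dog runs on the grass",
    "a dog catches a frisbee",
    "a cat sleeps on the sofa",
]

def tfidf(captions):
    """Compute per-caption TF-IDF weights for each word."""
    docs = [c.split() for c in captions]
    n = len(docs)
    # Document frequency: in how many captions each word appears.
    df = Counter(w for d in docs for w in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        # Term frequency times inverse document frequency.
        scores.append({w: (tf[w] / len(d)) * math.log(n / df[w]) for w in tf})
    return scores

weights = tfidf(captions)
# Words common to every caption (e.g. "a") score 0, since log(3/3) = 0,
# while words unique to one caption (e.g. "frisbee") score highest.
```

In this toy example, function words shared across captions are suppressed, while situation-specific words receive high weight, which is the intuition behind using TF-IDF to pick out distinctive attributes.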

URL

https://arxiv.org/abs/1807.09434

PDF

https://arxiv.org/pdf/1807.09434

