
UAN: Unified Attention Network for Convolutional Neural Networks

2019-01-16
Tony Joseph, Konstantinos G. Derpanis, Faisal Z. Qureshi

Abstract

We propose a new architecture that learns to attend to different Convolutional Neural Network (CNN) layers (i.e., different levels of abstraction) and different spatial locations (i.e., specific locations within a given feature map) in a sequential manner to perform the task at hand. Specifically, at each Recurrent Neural Network (RNN) timestep, a CNN layer is selected and its output is processed by a spatial soft-attention mechanism. We refer to this architecture as the Unified Attention Network (UAN), since it combines the “what” and “where” aspects of attention, i.e., “what” level of abstraction to attend to and “where” the network should look. We demonstrate the effectiveness of this approach on two computer vision tasks: (i) image-based camera pose and orientation regression and (ii) indoor scene classification. We evaluate our method on standard benchmarks for camera localization (Cambridge, 7-Scenes, and TUM-LSI datasets) and for scene classification (MIT-67 Indoor dataset), and show that our method improves upon the results of previous methods. Empirically, we show that combining the “what” and “where” aspects of attention improves network performance on both tasks.
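
To make the “what”/“where” loop in the abstract concrete, below is a minimal PyTorch sketch of one possible reading: at each RNN timestep, a soft weighting over CNN layers plays the role of layer selection (“what”), and a spatial soft-attention over each feature map provides the “where”. The module name, dimensions, use of a GRU cell, and soft (rather than hard) layer selection are all assumptions for illustration; the paper’s actual implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedAttentionSketch(nn.Module):
    """Sketch: per RNN step, attend over CNN layers ('what') and spatial positions ('where')."""

    def __init__(self, num_layers, feat_dim, hidden_dim, num_steps):
        super().__init__()
        self.num_steps = num_steps
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        self.layer_scorer = nn.Linear(hidden_dim, num_layers)      # "what": score each CNN layer
        self.spatial_scorer = nn.Linear(hidden_dim + feat_dim, 1)  # "where": score each position

    def forward(self, feature_maps):
        # feature_maps: list of L tensors (B, feat_dim, H_l, W_l), assumed already
        # projected to a common channel dimension.
        B = feature_maps[0].size(0)
        flat = [f.flatten(2).transpose(1, 2) for f in feature_maps]  # each (B, H_l*W_l, C)
        h = feature_maps[0].new_zeros(B, self.rnn.hidden_size)

        for _ in range(self.num_steps):
            # "What": soft weights over CNN layers from the current RNN state.
            layer_w = F.softmax(self.layer_scorer(h), dim=-1)        # (B, L)

            # "Where": spatial soft-attention within each layer, then mix layers.
            context = 0
            for l, feats in enumerate(flat):                         # feats: (B, N_l, C)
                h_exp = h.unsqueeze(1).expand(-1, feats.size(1), -1)
                scores = self.spatial_scorer(torch.cat([h_exp, feats], dim=-1))
                alpha = F.softmax(scores, dim=1)                     # (B, N_l, 1)
                context = context + layer_w[:, l:l + 1] * (alpha * feats).sum(1)

            h = self.rnn(context, h)                                 # sequential update
        return h                                                     # fed to a task-specific head


# Usage with dummy multi-scale features standing in for CNN layer outputs:
maps = [torch.randn(2, 256, s, s) for s in (28, 14, 7)]
uan = UnifiedAttentionSketch(num_layers=3, feat_dim=256, hidden_dim=512, num_steps=4)
print(uan(maps).shape)  # torch.Size([2, 512])
```

The final hidden state would then feed a regression head (camera pose) or a classifier (scene label), matching the two tasks evaluated in the paper.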

URL

http://arxiv.org/abs/1901.05376

PDF

http://arxiv.org/pdf/1901.05376

