papers AI Learner

Decomposed Attention: Self-Attention with Linear Complexities

2019-03-23
Zhuoran Shen, Mingyuan Zhang, Shuai Yi, Junjie Yan, Haiyu Zhao

Abstract

Recent works have been applying self-attention to various fields in computer vision and natural language processing. However, the memory and computational demands of existing self-attention operations grow quadratically with the spatiotemporal size of the input. This prohibits the application of self-attention to large inputs, e.g., long sequences, high-definition images, or large videos. To remedy this drawback, this paper proposes a novel decomposed attention (DA) module with substantially lower memory and computational consumption. This resource efficiency allows more widespread and flexible application. Empirical evaluations on object recognition demonstrated these advantages: DA-augmented models achieved state-of-the-art performance for object recognition on MS-COCO 2017 and significant improvement for image classification on ImageNet. Further, the resource efficiency of DA democratizes self-attention to fields where prohibitively high costs have prevented its application. The state-of-the-art result for stereo depth estimation on the Scene Flow dataset exemplified this.
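The linear complexity comes from reordering the attention computation: rather than forming the n × n map softmax(QKᵀ) and multiplying it by V, the decomposed formulation normalizes Q and K separately and computes KᵀV first, which is only a d_k × d_v matrix. Below is a minimal PyTorch sketch of that idea; the function names and toy shapes are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F

def standard_attention(q, k, v):
    # Quadratic baseline: materializes the full n x n attention map.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def decomposed_attention(q, k, v):
    # Linear-complexity variant: softmax-normalize Q over features and K
    # over positions, then multiply K^T V first (a d_k x d_v matrix), so
    # the n x n map is never formed. Cost: O(n * d_k * d_v) vs. O(n^2 * d).
    q = F.softmax(q, dim=-1)            # per row, over the feature dim
    k = F.softmax(k, dim=-2)            # per column, over the position dim
    context = k.transpose(-2, -1) @ v   # (d_k, d_v) global context
    return q @ context                  # (n, d_v) output

# Toy usage: n positions (e.g., flattened pixels), d channels.
n, d = 1024, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
print(decomposed_attention(q, k, v).shape)  # torch.Size([1024, 64])
```

Because KᵀV aggregates the whole input into a fixed-size context matrix, memory no longer depends on the square of the input size, which is what makes the large-input applications in the abstract (high-definition images, video, stereo pairs) feasible.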

URL

http://arxiv.org/abs/1812.01243

PDF

http://arxiv.org/pdf/1812.01243

