
Cross-Modal Message Passing for Two-stream Fusion

2019-04-30
Dong Wang, Yuan Yuan, Qi Wang

Abstract

Processing and fusing information across multiple modalities is a very useful technique for achieving high performance in many computer vision problems. To handle multi-modal information more effectively, we introduce a novel framework for multi-modal fusion: Cross-modal Message Passing (CMMP). Specifically, we propose a cross-modal message passing mechanism to fuse a two-stream network for action recognition, which is composed of an appearance modal network (RGB images) and a motion modal network (optical flow images). The objectives of the individual networks in this framework are two-fold: a standard classification objective and a competing objective. The classification objective ensures that each modal network predicts the true action category, while the competing objective encourages each modal network to outperform the other. We quantitatively show that the proposed CMMP fuses the traditional two-stream network more effectively and outperforms all existing two-stream fusion methods on the UCF-101 and HMDB-51 datasets.
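The abstract describes a per-stream training objective with two parts: a standard classification loss and a competing term that rewards a stream for beating the other one on the true class. The sketch below is a minimal, hypothetical illustration of that idea (the paper's exact competing loss is not given in the abstract); the hinge-style `compete` term and the weight `lam` are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, label):
    # Standard classification loss for a single example.
    return -np.log(probs[label])

def stream_loss(logits_self, logits_other, label, lam=0.5):
    """Two-fold objective for one stream (RGB or optical flow):
    classification loss plus a hinge-style competing term that is
    nonzero only when the other stream is more confident on the
    true class. Hypothetical formulation, not the paper's exact loss."""
    p_self = softmax(logits_self)
    p_other = softmax(logits_other)
    cls = cross_entropy(p_self, label)
    compete = max(0.0, p_other[label] - p_self[label])
    return cls + lam * compete

# Example: the RGB stream is more confident on the true class (index 0),
# so its competing term vanishes and only the classification loss remains.
rgb_logits = np.array([2.0, 0.5, 0.1])
flow_logits = np.array([1.0, 0.9, 0.2])
loss_rgb = stream_loss(rgb_logits, flow_logits, label=0)
```

In this toy setup each stream would minimize its own `stream_loss`, so the weaker stream receives an extra gradient signal pushing it to catch up, which matches the abstract's description of the competing objective encouraging each modal network to outperform the other.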

URL

http://arxiv.org/abs/1904.13072

PDF

http://arxiv.org/pdf/1904.13072
