
AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures

2019-05-30
Michael S. Ryoo, AJ Piergiovanni, Mingxing Tan, Anelia Angelova

Abstract

Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to a third dimension (using a limited number of space-time modules such as 3D convolutions) or by introducing a handcrafted two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream space-time convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin.
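To make the connection-weight-guided search idea above a bit more concrete, here is a minimal, purely illustrative sketch (not the authors' implementation): a single space-time block that receives several incoming streams (e.g., RGB-derived and flow-derived features) and combines them with softmax-normalized learnable scalar connection weights before a simple (2+1)D convolution. All class names, hyperparameters, and the (2+1)D decomposition are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class WeightedConnectionBlock(nn.Module):
    """Illustrative block: combines multiple incoming streams with learnable
    connection weights, then applies a simple space-time convolution.
    This is a sketch of the general idea, not AssembleNet itself."""

    def __init__(self, num_inputs: int, channels: int):
        super().__init__()
        # One scalar logit per incoming stream; learned jointly with the convs.
        self.connection_logits = nn.Parameter(torch.zeros(num_inputs))
        # A (2+1)D decomposition: spatial conv followed by temporal conv (assumed here).
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, inputs):
        # inputs: list of tensors shaped (N, C, T, H, W), one per incoming stream.
        weights = torch.softmax(self.connection_logits, dim=0)
        x = sum(w * xi for w, xi in zip(weights, inputs))
        return self.relu(self.temporal(self.relu(self.spatial(x))))

# Example usage with two hypothetical streams of fused features.
block = WeightedConnectionBlock(num_inputs=2, channels=16)
rgb_feat = torch.randn(1, 16, 8, 32, 32)
flow_feat = torch.randn(1, 16, 8, 32, 32)
out = block([rgb_feat, flow_feat])  # (1, 16, 8, 32, 32)
```

In the paper's framing, the learned magnitudes of such connection weights help guide which connections survive as the population of overly-connected architectures is evolved.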

URL

http://arxiv.org/abs/1905.13209

PDF

http://arxiv.org/pdf/1905.13209

