papers AI Learner

Spatiotemporal Feature Learning for Event-Based Vision

2019-03-16
Rohan Ghosh, Anupam Gupta, Siyi Tang, Alcimar Soares, Nitish Thakor

Abstract

Unlike conventional frame-based sensors, event-based visual sensors output information through spikes at a high temporal resolution. By encoding only changes in pixel intensity, they offer a low-power, low-latency approach to visual information sensing. To use this information for higher-level sensory tasks like object recognition and tracking, an essential simplification step is the extraction and learning of features. An ideal feature descriptor must be robust to changes involving (i) local transformations and (ii) re-appearances of a local event pattern. To that end, we propose a novel spatiotemporal feature representation learning algorithm based on slow feature analysis (SFA). Using SFA, smoothly varying linear projections are learnt that are robust to local visual transformations. To determine whether the features can learn to be invariant to various visual transformations, feature point tracking tasks are used for evaluation. Extensive experiments across two datasets demonstrate the adaptability of the spatiotemporal feature learner to translation, scaling and rotational transformations of the feature points. More importantly, we find that the obtained feature representations are able to exploit the high temporal resolution of such event-based cameras in generating better feature tracks.
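The paper's method builds on slow feature analysis, which finds linear projections whose outputs change as slowly as possible over time. As a rough illustration of that core idea (not the authors' implementation, which operates on event-camera data), a minimal linear SFA can be sketched as: whiten the input, then take the directions in which the whitened signal's temporal derivative has the least variance. All function and variable names below are illustrative.

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Minimal linear slow feature analysis sketch.

    X: (T, D) multivariate time series.
    Returns W (D, n_components): projections whose outputs
    vary as slowly as possible over time.
    """
    # Center the signal.
    X = X - X.mean(axis=0)
    # Whiten: rotate to decorrelate and rescale to unit variance.
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10          # guard against degenerate directions
    S = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ S
    # Covariance of the temporal derivative (finite differences).
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    d_eigval, d_eigvec = np.linalg.eigh(dcov)
    # Slowest features = directions of smallest derivative variance
    # (eigh returns eigenvalues in ascending order).
    W = S @ d_eigvec[:, :n_components]
    return W
```

For example, mixing a slow sinusoid with a fast one and applying `linear_sfa` recovers a projection whose output closely tracks the slow component, which is the invariance property the abstract exploits for feature tracking.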

URL

http://arxiv.org/abs/1903.06923

PDF

http://arxiv.org/pdf/1903.06923
