
Joining Sound Event Detection and Localization Through Spatial Segregation

2019-03-29
Ivo Trowitzsch, Christopher Schymura, Dorothea Kolossa, Klaus Obermayer

Abstract

Identification and localization of sounds are both integral parts of computational auditory scene analysis. Although each can be solved separately, the goal of forming coherent auditory objects and achieving a comprehensive spatial scene understanding suggests pursuing a joint solution of the two problems. This work presents an approach that robustly binds localization with the detection of sound events in a binaural robotic system. Both tasks are joined through the use of spatial stream segregation which produces probabilistic time-frequency masks for individual sources attributable to separate locations, enabling segregated sound event detection operating on these streams. We use simulations of a comprehensive suite of test scenes with multiple co-occurring sound sources, and propose performance measures for systematic investigation of the impact of scene complexity on this segregated detection of sound types. Analyzing the effect of head orientation, we show how a robot can facilitate high performance through optimal head rotation. Furthermore, we investigate the performance of segregated detection given possible localization error as well as error in the estimation of number of active sources. Our analysis demonstrates that the proposed approach is an effective method to obtain joint sound event location and type information under a wide range of conditions.
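The core mechanism described above, probabilistic time-frequency masks that split a mixture into per-location streams on which detection then runs independently, can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration (random masks, a placeholder energy-threshold detector); the paper's actual system derives the masks from binaural spatial cues and uses trained sound event detectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture spectrogram: |STFT| magnitudes over (time, frequency) bins.
# In the paper's setting these would come from the binaural ear signals.
T, F = 100, 64
mixture = rng.random((T, F))

# Hypothetical probabilistic time-frequency masks for two spatially
# separated sources; each entry is P(bin belongs to source k).
# Random here, but normalized so the masks sum to 1 in every bin.
raw = rng.random((2, T, F))
masks = raw / raw.sum(axis=0, keepdims=True)

# Spatial segregation: each source's stream is the mask-weighted mixture,
# so the streams sum back to the original mixture.
streams = masks * mixture  # shape (2, T, F)

# Segregated detection: a placeholder detector scores each stream
# independently, e.g. mean band energy per frame, thresholded.
def detect(stream, threshold=0.3):
    frame_energy = stream.mean(axis=1)  # average over frequency bins
    return frame_energy > threshold     # active/inactive per frame

activity = np.stack([detect(s) for s in streams])
print(activity.shape)  # one activity track per segregated source
```

Because detection operates on the segregated streams rather than the mixture, each detected event inherits the spatial label of its stream, which is how type and location information end up jointly bound.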

URL

http://arxiv.org/abs/1904.00055

PDF

http://arxiv.org/pdf/1904.00055
