Abstract
Robust semantic segmentation of VHR remote sensing images from UAV sensors is critical for earth observation, land use, land cover, and mapping applications. Several factors, such as shadows, weather disruption, and camera shake, make this problem highly challenging, especially when only RGB images are used. In this paper, we propose the use of multi-modality data, including NIR, RGB, and DSM, to increase the robustness of segmentation in blurred or partially damaged VHR remote sensing images. By proposing a cascaded dense encoder-decoder network together with SELayer-based fusion and assembling techniques, the proposed RobustDenseNet achieves steady performance as image quality decreases, compared with the state-of-the-art semantic segmentation model.
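As a rough illustration of the SELayer-based fusion idea described above, the sketch below reweights concatenated multi-modal feature maps with a squeeze-and-excitation gate. It is a minimal sketch assuming PyTorch; the module name `SEFusion`, channel sizes, and the three-way RGB/NIR/DSM split are illustrative assumptions, not the paper's exact RobustDenseNet configuration.

```python
import torch
import torch.nn as nn

class SEFusion(nn.Module):
    """Squeeze-and-excitation style fusion of multi-modal feature maps (sketch)."""

    def __init__(self, channels_per_modality=64, num_modalities=3, reduction=16):
        super().__init__()
        fused = channels_per_modality * num_modalities
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: one value per channel
        self.fc = nn.Sequential(                     # excitation: channel reweighting
            nn.Linear(fused, fused // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(fused // reduction, fused),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, nir_feat, dsm_feat):
        # Concatenate per-modality features along the channel dimension.
        x = torch.cat([rgb_feat, nir_feat, dsm_feat], dim=1)
        b, c, _, _ = x.shape
        # Compute per-channel weights and rescale the fused features.
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w
```

The SE gate lets the network down-weight channels from a degraded modality (e.g., blurred RGB) and lean on the others, which matches the paper's stated goal of robustness under decreasing image quality.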
URL
http://arxiv.org/abs/1903.02702