
Model Vulnerability to Distributional Shifts over Image Transformation Sets

2019-03-28
Riccardo Volpi, Vittorio Murino

Abstract

We are concerned with the vulnerability of computer vision models to distributional shifts. We cast this problem in terms of combinatorial optimization, evaluating the regions of the input space where a (black-box) model is most vulnerable. This is carried out by combining image transformations from a given set with standard search algorithms. We embed this idea in a training procedure, defining new data augmentation rules over the iterations according to the image transformations that the current model is most vulnerable to. An empirical evaluation on classification and semantic segmentation problems suggests that the devised algorithm allows training models that are more robust to content-preserving image transformations and, more generally, to distributional shifts.
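The abstract describes a search-then-augment loop: given a set of image transformations, search for the composition the current (black-box) model is most vulnerable to, then reuse that composition as a data augmentation rule. Below is a minimal sketch of that idea using random search as the search algorithm; the transformation set, search budget, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: search a transformation set for the composition that most degrades
# a black-box model, then reuse it as a data augmentation rule.
# Assumes a batch of images/labels and a classifier; all names are hypothetical.
import random
import torch
import torchvision.transforms as T

# An example transformation set (the paper's actual set may differ).
candidate_transforms = [
    T.ColorJitter(brightness=0.5),
    T.ColorJitter(contrast=0.5),
    T.GaussianBlur(kernel_size=5),
    T.RandomRotation(degrees=15),
]

def accuracy(model, images, labels):
    """Black-box evaluation: only model outputs are needed, no gradients."""
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

def find_worst_composition(model, images, labels, length=3, budget=50):
    """Random search over compositions of `length` transformations,
    returning the one that minimizes the model's accuracy."""
    worst, worst_acc = None, float("inf")
    for _ in range(budget):
        comp = T.Compose(random.choices(candidate_transforms, k=length))
        acc = accuracy(model, comp(images), labels)
        if acc < worst_acc:
            worst, worst_acc = comp, acc
    return worst

# In training, one would periodically call find_worst_composition on held-out
# batches and apply the returned composition as augmentation for subsequent
# training iterations, as described in the abstract.
```

Random search is used here only as a stand-in for the "standard search algorithms" the abstract mentions; any black-box search over compositions (e.g., greedy or evolutionary) would fit the same interface.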

URL

http://arxiv.org/abs/1903.11900

PDF

http://arxiv.org/pdf/1903.11900

