Abstract
Multi-instance learning (MIL) deals with tasks where the data consist of a set of bags and each bag is represented by a set of instances. Only the bag labels are observed; the labels of individual instances are not available. Previous MIL studies typically assume that the training and test samples follow the same distribution, an assumption that is often violated in real-world applications. Existing methods address distribution change by re-weighting the training data with the density ratio between the training and test samples. However, models are often trained without prior knowledge of the test distribution, which renders these methods inapplicable. Inspired by a connection between MIL and causal inference, we propose a novel framework for addressing distribution change in MIL without relying on the test distribution. Experimental results validate the effectiveness of our approach.
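A minimal sketch of the density-ratio re-weighting baseline the abstract refers to (not the paper's proposed method): estimate w(x) = p_test(x)/p_train(x) with a probabilistic classifier that separates training from test inputs, then use w(x) as per-sample weights when fitting the downstream model. All names (X_train, y_train, X_test) are illustrative, and this sketch assumes access to test inputs, which is exactly what the paper's framework avoids.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def density_ratio_weights(X_train, X_test):
    """Estimate p_test(x)/p_train(x) via a train-vs-test classifier."""
    X = np.vstack([X_train, X_test])
    d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])  # 0 = train, 1 = test
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_test = clf.predict_proba(X_train)[:, 1]
    # Bayes' rule: p_test(x)/p_train(x) is proportional to P(test|x)/P(train|x)
    ratio = p_test / np.clip(1.0 - p_test, 1e-12, None)
    ratio *= len(X_train) / len(X_test)  # correct for train/test sample-size imbalance
    return ratio

# Usage: weight the training loss of any estimator that accepts sample weights.
# weights = density_ratio_weights(X_train, X_test)
# model = SVC().fit(X_train, y_train, sample_weight=weights)
```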
URL
http://arxiv.org/abs/1902.05066