Abstract
Differential privacy has recently emerged as a standard for ensuring data privacy in machine learning. However, to meet this standard, noise is usually introduced into the original data to randomize the learning algorithm, which inevitably degrades prediction performance. In this paper, motivated by the success of ensemble learning in improving prediction performance, we propose to enhance privacy-preserving logistic regression by stacking. We show that this can be done with either sample-based or feature-based partitioning. However, we prove that, under the same privacy budget, feature-based partitioning requires fewer samples than sample-based partitioning and is thus likely to achieve better empirical performance. Moreover, we prove that prediction performance can be further boosted under feature-based partitioning when feature importance is known. Finally, we not only demonstrate the effectiveness of our method on two benchmark data sets, MNIST and NEWS20, but also apply it to a real application, cross-organizational diabetes prediction on the RUIJIN data set, where privacy is of significant concern.
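The sketch below illustrates the general idea the abstract describes: each base learner is a differentially private logistic regression trained on a disjoint feature block, and a meta-learner stacks their outputs. It is a minimal illustration, not the paper's implementation; the privacy mechanism shown is simple output perturbation (per-coordinate Laplace noise on the learned weights, with an illustrative sensitivity bound), and all function names and parameters here are assumptions for demonstration.

```python
# Minimal sketch of feature-based partitioning with stacked DP logistic
# regression. NOT the authors' method: the noise mechanism and its
# calibration below are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def dp_logistic_regression(X, y, epsilon, C=1.0):
    """Fit logistic regression, then privatize via output perturbation.

    The Laplace scale 2*C / (n * epsilon) mimics the sensitivity bound
    for L2-regularized ERM (lambda = 1/C); a rigorous implementation
    would calibrate noise to the exact sensitivity of the minimizer.
    """
    n = X.shape[0]
    clf = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    clf.coef_ += rng.laplace(0.0, 2.0 * C / (n * epsilon),
                             size=clf.coef_.shape)
    return clf

# Synthetic data standing in for a real task such as diabetes prediction.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature-based partitioning: each base learner sees a disjoint block.
blocks = np.array_split(np.arange(X.shape[1]), 4)
base = [dp_logistic_regression(X_tr[:, b], y_tr, epsilon=1.0)
        for b in blocks]

# Stacking: a meta-learner combines the base models' probabilities.
meta_tr = np.column_stack([m.predict_proba(X_tr[:, b])[:, 1]
                           for m, b in zip(base, blocks)])
meta = LogisticRegression(max_iter=1000).fit(meta_tr, y_tr)

meta_te = np.column_stack([m.predict_proba(X_te[:, b])[:, 1]
                           for m, b in zip(base, blocks)])
print("stacked accuracy:", meta.score(meta_te, y_te))
```

For simplicity the meta-learner is fit on training-set predictions; proper stacking would use out-of-fold (cross-validated) base predictions to avoid leakage.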
URL
http://arxiv.org/abs/1811.09491