
The Connection between DNNs and Classic Classifiers: Generalize, Memorize, or Both?

2019-02-06
Gilad Cohen, Guillermo Sapiro, Raja Giryes

Abstract

This work studies the relationship between the classification performed by deep neural networks (DNNs) and the decisions of several classic classifiers, namely $k$-nearest neighbors ($k$-NN), support vector machines (SVM), and logistic regression (LR). The comparison is carried out at various layers of the network, providing new insights into the ability of DNNs to both memorize the training data and generalize to new data at the same time, with $k$-NN serving as the ideal estimator that perfectly memorizes the data. First, we show that DNNs' generalization improves gradually along their layers and that memorization in non-generalizing networks happens only at the last layers. We also observe that, compared to the linear classifiers SVM and LR, the behavior of DNNs is largely the same on the training and test data regardless of whether the network generalizes. On the other hand, the similarity to $k$-NN holds only in the absence of overfitting. This suggests that $k$-NN-like behavior of the network on new data is a good indicator of generalization. Moreover, it allows us to apply existing $k$-NN theory to DNNs.
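The core measurement in the abstract can be sketched with scikit-learn: extract the activations a DNN produces at some layer, fit $k$-NN, SVM, and LR on those features, and compute how often each classic classifier's decision agrees with the network's own decision. The sketch below is a minimal illustration, not the paper's code; the synthetic arrays (`X_train`, `X_test`) stand in for real layer activations, and `dnn_pred` is a placeholder for the network's test-set predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for activations extracted at one DNN layer:
# 200 training samples and 100 test samples with 20 features each.
X_train = rng.normal(size=(200, 20))
y_train = (X_train[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
X_test = rng.normal(size=(100, 20))

# Placeholder for the DNN's own decisions on the test set; in the paper's
# setting these would come from the network's final softmax output.
dnn_pred = (X_test[:, 0] > 0).astype(int)

# The three classic classifiers studied in the paper, fit on the
# (stand-in) layer features.
classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": LinearSVC(),
    "LR": LogisticRegression(),
}

# Agreement rate: fraction of test samples on which the classic
# classifier's decision matches the DNN's decision.
agreements = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    agreements[name] = float(np.mean(clf.predict(X_test) == dnn_pred))
    print(f"{name}: agreement with DNN test decisions = {agreements[name]:.2f}")
```

Repeating this per layer (and separately on training vs. test data) yields agreement curves like those the paper analyzes: high $k$-NN agreement on new data signals generalization, while divergence from $k$-NN with persisting SVM/LR similarity signals memorization.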

URL

http://arxiv.org/abs/1805.06822

PDF

http://arxiv.org/pdf/1805.06822
