
Paradox in Deep Neural Networks: Similar yet Different while Different yet Similar

2019-03-12
Arash Akbarinia, Karl R. Gegenfurtner

Abstract

Machine learning is advancing towards a data-science approach, implying the necessity of a line of investigation that divulges the knowledge learnt by deep neural networks. Limiting the comparison among networks merely to a predefined intelligent ability, measured against ground truth, does not suffice; it should be complemented by the innate similarity of these artificial entities. Here, we analysed multiple instances of an identical architecture trained to classify objects in static images (CIFAR and ImageNet data sets). We evaluated the performance of the networks under various distortions and compared it to the intrinsic similarity between their constituent kernels. While we expected a close correspondence between these two measures, we observed a puzzling phenomenon. Pairs of networks whose kernels’ weights are over 99.9% correlated can exhibit significantly different performances, yet other pairs with no correlation can reach quite compatible levels of performance. We show implications of this for transfer learning, and argue its importance for our general understanding of what intelligence is, whether natural or artificial.
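The core comparison described in the abstract, correlating the kernel weights of two independently trained instances of the same architecture, can be sketched roughly as below. This is a minimal illustration, not the authors' implementation; the choice of architecture (ResNet-18 via torchvision) and the helper function names are assumptions for demonstration only.

```python
# Sketch: Pearson correlation between the flattened convolutional kernels of
# two networks sharing one architecture. Hypothetical example, not the
# authors' code.
import numpy as np
import torch
import torchvision.models as models


def flatten_conv_kernels(model: torch.nn.Module) -> np.ndarray:
    """Concatenate all convolutional kernel weights into a single flat vector."""
    weights = []
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            weights.append(module.weight.detach().cpu().numpy().ravel())
    return np.concatenate(weights)


def kernel_correlation(model_a: torch.nn.Module, model_b: torch.nn.Module) -> float:
    """Pearson correlation of the flattened kernels of two networks with the
    same architecture (so the weight vectors align element-wise)."""
    wa = flatten_conv_kernels(model_a)
    wb = flatten_conv_kernels(model_b)
    return float(np.corrcoef(wa, wb)[0, 1])


if __name__ == "__main__":
    # Two instances of one architecture; in the paper these would be
    # separately trained copies, not freshly initialised models.
    net_a = models.resnet18(weights=None)
    net_b = models.resnet18(weights=None)
    print(f"Kernel weight correlation: {kernel_correlation(net_a, net_b):.4f}")
```

In the paper's setting, this kernel-level similarity would then be set against task performance under distortions, which is where the reported paradox (near-identical weights, divergent accuracy, and vice versa) emerges.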

URL

http://arxiv.org/abs/1903.04772

PDF

http://arxiv.org/pdf/1903.04772

