
Training neural networks to have brain-like representations improves object recognition performance

2019-05-25
Callie Federer, Haoyan Xu, Alona Fyshe, Joel Zylberberg

Abstract

The current state-of-the-art object recognition algorithms, deep convolutional neural networks (DCNNs), are inspired by the architecture of the mammalian visual system [8] and are capable of human-level performance on many tasks [15]. However, even these algorithms make errors. As DCNNs improve at object recognition tasks, they develop representations in their hidden layers that become more similar to those observed in mammalian brains [24]. This led us to hypothesize that teaching DCNNs to achieve even more brain-like representations could improve their performance. To test this, we trained DCNNs on a composite task, wherein the networks were trained to: a) classify images of objects, while b) having intermediate representations that resemble those observed in neural recordings from monkey visual cortex. Compared with DCNNs trained purely for object categorization, DCNNs trained on the composite task had better object recognition performance. Our results outline a new way to regularize object recognition networks, using transfer learning strategies in which the brain serves as a teacher for training DCNNs.
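The composite objective described in the abstract can be thought of as a weighted sum of a standard classification loss and a term that pulls a hidden layer's activations toward recorded neural responses. Below is a minimal NumPy sketch of such a loss; the mean-squared-error similarity term and the trade-off weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def composite_loss(logits, labels, hidden, neural_target, lam=0.1):
    """Cross-entropy classification loss plus a penalty encouraging the
    hidden-layer representation to match recorded neural responses.

    `lam` is a hypothetical trade-off hyperparameter; the similarity
    measure here (mean squared error) is an illustrative stand-in for
    whatever representational-similarity term the paper uses.
    """
    probs = softmax(logits)
    n = logits.shape[0]
    # Standard object-categorization term.
    task_loss = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    # Brain-similarity term: distance between the network's hidden
    # activations and the neural recordings for the same stimuli.
    neural_loss = np.mean((hidden - neural_target) ** 2)
    return task_loss + lam * neural_loss
```

Setting `lam=0` recovers a network trained purely for object categorization, so the weight directly controls how strongly the brain acts as a "teacher" during training.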

URL

http://arxiv.org/abs/1905.10679

PDF

http://arxiv.org/pdf/1905.10679
