
Application-level Studies of Cellular Neural Network-based Hardware Accelerators

2019-02-28
Qiuwen Lou, Indranil Palit, Tang Li, Andras Horvath, Michael Niemier, X. Sharon Hu

Abstract

As cost and performance benefits associated with Moore's Law scaling slow, researchers are studying alternative architectures (e.g., based on analog and/or spiking circuits) and/or computational models (e.g., convolutional and recurrent neural networks) to perform application-level tasks faster, more energy efficiently, and/or more accurately. We investigate cellular neural network (CeNN)-based co-processors at the application level with respect to these metrics. While it is well known that CeNNs can be well suited for spatio-temporal information processing, few (if any) studies have quantified the energy/delay/accuracy of a CeNN-friendly algorithm and compared the CeNN-based approach to the best von Neumann algorithm at the application level. We present an evaluation framework for such studies. As a case study, a CeNN-friendly target-tracking algorithm was developed and mapped to an array architecture designed in conjunction with the algorithm. We compare the energy, delay, and accuracy of our architecture/algorithm (including all overheads) to the most accurate von Neumann algorithm (Struck). Von Neumann CPU data is measured on an Intel i5 chip. The CeNN approach is capable of matching the accuracy of Struck and can offer approximately 1000x improvement in energy-delay product.
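The headline comparison in the abstract is stated in terms of energy-delay product (EDP), i.e., the energy consumed by a task multiplied by the time it takes, so a ~1000x EDP improvement can come from savings in energy, delay, or both. The sketch below shows how such a ratio is computed; it is not from the paper, and the function name and all numbers are hypothetical placeholders rather than measured results.

```python
# Minimal sketch (not from the paper): computing an energy-delay-product
# (EDP) improvement factor between a von Neumann baseline and a CeNN
# co-processor. All figures are hypothetical placeholders.

def energy_delay_product(energy_j: float, delay_s: float) -> float:
    """EDP = energy consumed (J) * execution time (s); lower is better."""
    return energy_j * delay_s

# Hypothetical per-frame figures for a tracking workload.
von_neumann = {"energy_j": 1.0e-1, "delay_s": 2.0e-2}   # e.g., CPU baseline
cenn        = {"energy_j": 1.0e-3, "delay_s": 2.0e-3}   # e.g., CeNN co-processor

edp_cpu  = energy_delay_product(**von_neumann)
edp_cenn = energy_delay_product(**cenn)

# Improvement factor is the ratio of baseline EDP to CeNN EDP.
print(f"EDP improvement: {edp_cpu / edp_cenn:.0f}x")
```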

URL

http://arxiv.org/abs/1903.06649

PDF

http://arxiv.org/pdf/1903.06649

