papers AI Learner

Energy-Efficient Inference Accelerator for Memory-Augmented Neural Networks on an FPGA

2019-02-11
Seongsik Park, Jaehee Jang, Seijoon Kim, Sungroh Yoon

Abstract

Memory-augmented neural networks (MANNs) are designed for question-answering tasks. It is difficult to run a MANN efficiently on accelerators designed for other neural networks (NNs), particularly on mobile devices, because MANNs require recurrent data paths and various types of operations related to external memory access. We implement an accelerator for MANNs on a field-programmable gate array (FPGA) based on a data flow architecture. Inference times are further reduced by inference thresholding, a data-based maximum inner-product search specialized for natural language tasks. Measurements on the bAbI dataset show that the energy efficiency of the accelerator (FLOPS/kJ) was about 125 times higher than that of an NVIDIA TITAN V GPU, rising to a factor of 140 with inference thresholding.
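The abstract does not detail how inference thresholding works, but as a data-based maximum inner-product search (MIPS) it can be sketched as an early-exit scan over the output vocabulary: stop scoring candidates once the leading answer's score margin exceeds a threshold. The function name, the margin criterion, and the threshold value below are all illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def inference_thresholding(hidden, weights, threshold=5.0):
    """Illustrative early-exit MIPS (not the paper's exact criterion).

    hidden:  (d,) final hidden state of the MANN
    weights: (V, d) output-embedding rows; a data-based ordering would
             place likely answers first so the search can stop early
    Stops once the best score leads the runner-up by `threshold`.
    """
    best_idx, best, second = -1, -np.inf, -np.inf
    for i, w in enumerate(weights):
        s = float(np.dot(w, hidden))
        if s > best:
            second, best, best_idx = best, s, i
        elif s > second:
            second = s
        # Only exit after at least two candidates have been scored
        if i >= 1 and best - second >= threshold:
            break  # confident enough: skip the remaining vocabulary
    return best_idx
```

On hardware, skipping the remaining inner products saves both compute and the external-memory reads of the unscored embedding rows, which is where the energy savings would come from.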

URL

https://arxiv.org/abs/1805.07978

PDF

https://arxiv.org/pdf/1805.07978

