2016 EIE: Efficient Inference Engine on Compressed Deep Neural Network
- (Han et al., 2016) ⇒ Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. (2016). “EIE: Efficient Inference Engine on Compressed Deep Neural Network.” In: Proceedings of the 43rd International Symposium on Computer Architecture (ISCA 2016). IEEE Xplore: 7551397. arXiv: 1602.01528. ISBN: 978-1-4673-8947-1. DOI: 10.1145/3007787.3001163; 10.1109/ISCA.2016.30.
Subject Headings: Compressed Deep Neural Network.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222016%22+EIE%3A+Efficient+Inference+Engine+on+Compressed+Deep+Neural+Network
- http://dl.acm.org/citation.cfm?id=3007787.3001163&preflayout=flat#citedby
- (Han et al., 2016) ⇒ Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark Horowitz, and Bill Dally. (2016, August). "Deep Compression and EIE: Efficient Inference Engine on Compressed Deep Neural Network." In: 2016 IEEE Hot Chips 28 Symposium (HCS), pp. 1-6. IEEE. (Poster version.)
Quotes
Author Keywords
Abstract
State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; exploiting sparsity saves 10×; weight sharing gives 8×; skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×10^4 frames/sec with a power dissipation of only 600 mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.
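The core computation described in the abstract (sparse matrix-vector multiplication in which each nonzero weight is a small index into a shared codebook, and zero activations from ReLU are skipped) can be sketched in software. The sketch below is a minimal illustration of that computation under a CSC-like column layout, not the paper's hardware design; the function and variable names (shared_weight_spmv, col_ptr, row_idx, weight_idx, codebook) are assumptions made for this example.

```python
import numpy as np

def shared_weight_spmv(n_rows, col_ptr, row_idx, weight_idx, codebook, x):
    """Compute y = W @ x for a column-wise (CSC-like) sparse matrix W whose
    nonzero weights are stored as indices into a shared weight codebook."""
    y = np.zeros(n_rows)
    for j, a in enumerate(x):
        if a == 0.0:                       # skip zero activations from ReLU
            continue
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += codebook[weight_idx[k]] * a
    return y

# Tiny usage example with made-up numbers: a 3x4 matrix with 4 nonzeros
# quantized to a 4-entry codebook.
codebook   = np.array([0.0, -0.5, 0.25, 1.0])
col_ptr    = [0, 1, 1, 3, 4]    # start offset of each column's nonzeros
row_idx    = [0, 0, 2, 1]       # row position of each nonzero
weight_idx = [3, 2, 1, 3]       # codebook index of each nonzero
x = np.array([1.0, 5.0, 0.0, 2.0])   # x[2] == 0, so column 2 is skipped

print(shared_weight_spmv(3, col_ptr, row_idx, weight_idx, codebook, x))  # [1. 2. 0.]
```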
Figures
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally | | 2016 | EIE: Efficient Inference Engine on Compressed Deep Neural Network | | | | 10.1145/3007787.3001163 | | 2016