DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for
Compute-in-Memory Accelerators for On-chip Training
by
Xiaochen Peng, Shanshi Huang, Hongwu Jiang, Anni Lu, Shimeng Yu
2020
Abstract
DNN+NeuroSim is an integrated framework for benchmarking compute-in-memory (CIM)
accelerators for deep neural networks, with hierarchical design options spanning
the device, circuit, and algorithm levels. A Python wrapper is developed to
interface NeuroSim with a popular machine learning platform, PyTorch, to support
flexible network structures. The framework provides automatic
algorithm-to-hardware mapping and evaluates chip-level area, energy efficiency,
and throughput for training or inference, as well as training/inference accuracy
under hardware constraints. Our prior work (DNN+NeuroSim V1.1) estimated the
impact of synaptic device reliability and analog-to-digital converter (ADC)
quantization loss on the accuracy and hardware performance of inference engines.
In this work, we further investigate the impact of the non-ideal device
properties of analog emerging non-volatile memory (eNVM) on on-chip training. By
introducing the nonlinearity, asymmetry, and device-to-device and cycle-to-cycle
variation of the weight update into the Python wrapper, and peripheral circuits
for error/weight-gradient computation into the NeuroSim core, we benchmark CIM
accelerators based on state-of-the-art SRAM and eNVM devices for VGG-8 on the
CIFAR-10 dataset, revealing the crucial synaptic device specifications for
on-chip training. The proposed DNN+NeuroSim V2.0 framework is available on
GitHub.
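
The weight-update non-idealities named in the abstract (nonlinearity, asymmetry,
device-to-device and cycle-to-cycle variation) can be captured by a small
behavioral model applied at every gradient step. The sketch below is
illustrative only, assuming a saturating step-size model in the spirit of the
NeuroSim papers; the function and parameter names (nonideal_weight_update,
nl_ltp, nl_ltd, sigma_c2c, d2d_offset) are hypothetical and are not the
framework's actual API.

import numpy as np

def nonideal_weight_update(w, grad, lr=0.1, nl_ltp=1.75, nl_ltd=1.46,
                           sigma_c2c=0.015, d2d_offset=None,
                           w_min=-1.0, w_max=1.0, rng=None):
    """One gradient step through a nonlinear, asymmetric analog synapse.

    w, grad    : weight matrix and its gradient (same shape)
    nl_ltp/ltd : nonlinearity factors for potentiation/depression;
                 unequal values make the update asymmetric (hypothetical)
    sigma_c2c  : cycle-to-cycle variation, fresh Gaussian noise per update
    d2d_offset : fixed per-device perturbation of the nonlinearity factors,
                 sampled once (device-to-device variation)
    """
    if rng is None:
        rng = np.random.default_rng()
    if d2d_offset is None:
        d2d_offset = np.zeros_like(w)

    # Normalized position of each weight inside its conductance window.
    x = (w - w_min) / (w_max - w_min)

    # Saturating step sizes: potentiation steps shrink near w_max,
    # depression steps shrink near w_min (the nonlinearity).
    a_ltp = np.maximum(nl_ltp + d2d_offset, 1e-6)
    a_ltd = np.maximum(nl_ltd + d2d_offset, 1e-6)
    step_ltp = (1.0 - x) / a_ltp
    step_ltd = x / a_ltd

    # Sign of the ideal update decides potentiation (+) vs depression (-).
    ideal = -lr * grad
    step = np.where(ideal >= 0, step_ltp, step_ltd)

    # Cycle-to-cycle variation: independent noise on every programming event.
    noise = sigma_c2c * (w_max - w_min) * rng.standard_normal(w.shape)

    return np.clip(w + ideal * step + noise, w_min, w_max)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.uniform(-1.0, 1.0, size=(4, 4))
    grad = rng.standard_normal((4, 4))
    d2d = 0.05 * rng.standard_normal((4, 4))  # sampled once per "chip"
    w = nonideal_weight_update(w, grad, d2d_offset=d2d, rng=rng)
    print(w)

In the framework itself, a model of this kind is applied inside the PyTorch
wrapper during training, while NeuroSim separately estimates the circuit-level
area, energy, and latency cost of each update.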
Preprint: arXiv:2003.06471v1, available as a PDF (4.5 MB) from arxiv.org and archived at web.archive.org.