Towards Efficient Training for Neural Network Quantization

by Qing Jin, Linjie Yang, Zhenyu Liao

Released as an article.

2019  

Abstract

Quantization reduces the computation cost of neural networks but suffers from performance degradation. Is this accuracy drop due to reduced capacity, or to inefficient training during the quantization procedure? After looking into the gradient propagation process of neural networks by viewing the weights and intermediate activations as random variables, we discover two critical rules for efficient training. Recent quantization approaches violate these two rules, resulting in degraded convergence. To deal with this problem, we propose a simple yet effective technique, named scale-adjusted training (SAT), to comply with the discovered rules and facilitate efficient training. We also analyze the quantization error introduced when calculating the gradient in the popular parameterized clipping activation (PACT) technique. With SAT together with gradient-calibrated PACT, quantized models obtain comparable or even better performance than their full-precision counterparts, achieving state-of-the-art accuracy with consistent improvement over previous quantization methods on a wide spectrum of models including MobileNet-V1/V2 and PreResNet-50.
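The abstract refers to the parameterized clipping activation (PACT) technique, in which activations are clipped to a learnable range [0, alpha] and uniformly quantized, with gradients passed through a straight-through estimator. The following is a minimal, illustrative PyTorch sketch of that general scheme; the class names, the initial value of alpha, and the simple gradient handling are assumptions made for illustration, and it does not reproduce the paper's SAT rules or its gradient-calibrated variant of PACT.

```python
# Minimal sketch of PACT-style activation quantization with a
# straight-through estimator (STE). Illustrative only; names and
# gradient handling are assumptions, not the paper's exact method.
import torch
import torch.nn as nn


class PACTQuantize(torch.autograd.Function):
    """Clip activations to [0, alpha], quantize to k bits, STE backward."""

    @staticmethod
    def forward(ctx, x, alpha, k_bits):
        ctx.save_for_backward(x, alpha)
        levels = 2 ** k_bits - 1
        y = torch.clamp(x, 0.0, alpha.item())
        # Uniform quantization over the clipped range [0, alpha].
        return torch.round(y / alpha * levels) / levels * alpha

    @staticmethod
    def backward(ctx, grad_out):
        x, alpha = ctx.saved_tensors
        # STE: pass the gradient through inside the clipping range.
        grad_x = grad_out * ((x >= 0) & (x <= alpha)).float()
        # Gradient w.r.t. alpha comes from the clipped (saturated) region.
        grad_alpha = (grad_out * (x > alpha).float()).sum().view_as(alpha)
        return grad_x, grad_alpha, None


class PACTReLU(nn.Module):
    """Activation layer with a learnable clipping threshold alpha."""

    def __init__(self, k_bits=4, init_alpha=6.0):
        super().__init__()
        self.k_bits = k_bits
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x):
        return PACTQuantize.apply(x, self.alpha, self.k_bits)


if __name__ == "__main__":
    act = PACTReLU(k_bits=4)
    x = torch.randn(2, 8, requires_grad=True)
    y = act(x)
    y.sum().backward()
    print(y.min().item(), y.max().item(), act.alpha.grad)
```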

Archived Files and Locations

application/pdf  676.2 kB
file_7vajjepe7zdczj2fwnpbhwqllq
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-12-21
Version   v1
Language   en
arXiv  1912.10207v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 47e4b170-1786-438c-86e1-bbd8d1621fae