Thanks for Nothing: Predicting Zero-Valued Activations with Lightweight Convolutional Neural Networks
by
Gil Shomron, Ron Banner, Moran Shkolnik, Uri Weiser
2020
Abstract
Convolutional neural networks (CNNs) achieve state-of-the-art results on
various tasks at the price of high computational demands. Inspired by the
observation that spatial correlation exists in CNN output feature maps (ofms),
we propose a method to dynamically predict whether ofm activations are
zero-valued or not according to their neighboring activation values, thereby
avoiding the computation of zero-valued activations and reducing the number of convolution
operations. We implement the zero activation predictor (ZAP) with a lightweight
CNN, which imposes negligible overheads and is easy to deploy on existing
models. ZAPs are trained by mimicking hidden-layer outputs, thereby enabling
parallel and label-free training. Furthermore, without retraining, each ZAP can
be tuned to a different operating point, trading accuracy for MAC reduction.
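The idea in the abstract — exploit spatial correlation in output feature maps to predict which activations are zero and skip their convolutions — can be illustrated with a minimal sketch. The paper implements the predictor as a small learned CNN; here a hypothetical hand-written rule (a not-yet-computed activation is predicted zero when all of its computed checkerboard neighbors are zero) stands in for that predictor, purely to show the compute-skipping mechanics:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def zap_sketch(ofm_pre):
    """Illustrative stand-in for ZAP (not the paper's implementation).

    Compute activations exactly at checkerboard positions, then predict
    each remaining activation as zero whenever all of its computed
    4-neighbors are zero; predicted-zero positions skip their MACs.
    ofm_pre: 2D array of pre-activation (pre-ReLU) values.
    Returns the reconstructed ofm and the number of skipped activations.
    """
    H, W = ofm_pre.shape
    ofm = np.zeros((H, W))
    computed = np.zeros((H, W), dtype=bool)
    # Phase 1: compute half of the activations exactly (checkerboard).
    for i in range(H):
        for j in range(W):
            if (i + j) % 2 == 0:
                ofm[i, j] = relu(ofm_pre[i, j])
                computed[i, j] = True
    # Phase 2: predict the rest from neighbors; fall back to full compute.
    skipped = 0
    for i in range(H):
        for j in range(W):
            if computed[i, j]:
                continue
            neigh = [ofm[a, b]
                     for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                     if 0 <= a < H and 0 <= b < W and computed[a, b]]
            if neigh and max(neigh) == 0.0:
                skipped += 1          # predicted zero: MACs skipped
            else:
                ofm[i, j] = relu(ofm_pre[i, j])
    return ofm, skipped
```

On a feature map where negative pre-activations cluster spatially (as the paper's correlation observation suggests), the sketch skips the MACs of interior zero-valued activations; the accuracy/MAC-reduction trade-off mentioned in the abstract corresponds to how aggressive the prediction rule (or, in the paper, the ZAP's operating point) is made.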
Archived Files and Locations
application/pdf, 1.2 MB — arXiv:1909.07636v3 (arxiv.org repository; web.archive.org webarchive)