Progressive Gradient Pruning for Classification, Detection and Domain Adaptation
by Le Thanh Nguyen-Meidine, Eric Granger, Madhu Kiran, Louis-Antoine Blais-Morin, Marco Pedersoli
2020
Abstract
Although deep neural networks (NNs) have achieved state-of-the-art accuracy in many visual recognition tasks, the growing computational complexity and energy consumption of networks remain an issue, especially for applications on platforms with limited resources and requiring real-time processing. Filter pruning techniques have recently shown promising results for the compression and acceleration of convolutional NNs (CNNs). However, these techniques involve numerous steps and complex optimisations, because some only prune after training CNNs, while others prune from scratch during training by integrating sparsity constraints or modifying the loss function. In this paper, we propose a new Progressive Gradient Pruning (PGP) technique for iterative filter pruning during training. In contrast to previous progressive pruning techniques, it relies on a novel filter selection criterion that measures the change in filter weights, uses a new hard and soft pruning strategy, and effectively adapts momentum tensors during the backward propagation pass. Experimental results obtained after training various CNNs on image data for classification, object detection and domain adaptation benchmarks indicate that the PGP technique can achieve a better trade-off between classification accuracy and network (time and memory) complexity than PSFP and other state-of-the-art filter pruning techniques.
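
The abstract's core ideas (ranking filters by how much their weights change during training, soft-pruning the lowest-ranked filters so they may later recover, and zeroing the matching momentum buffers) can be illustrated with a short PyTorch-style sketch. Everything below is an illustrative assumption, not the paper's exact algorithm: the function names, the per-filter L2-norm criterion, and the pruning ratio are hypothetical stand-ins for the formulation given in the paper itself.

    import torch
    import torch.nn as nn

    def filter_change_scores(conv: nn.Conv2d,
                             prev_weights: torch.Tensor) -> torch.Tensor:
        # Per-filter L2 norm of the weight change since prev_weights was
        # snapshotted (hypothetical stand-in for the paper's criterion
        # on the change in filter weights).
        delta = conv.weight.detach() - prev_weights
        return delta.flatten(1).norm(p=2, dim=1)  # one score per output filter

    def soft_prune(conv: nn.Conv2d, optimizer: torch.optim.SGD,
                   prev_weights: torch.Tensor, ratio: float) -> None:
        # Zero out the filters whose weights changed least (soft pruning:
        # they stay in the network and can regrow), and clear their
        # momentum so stale updates do not immediately revive them.
        scores = filter_change_scores(conv, prev_weights)
        n_prune = int(ratio * scores.numel())
        idx = scores.argsort()[:n_prune]          # lowest-scoring filters
        with torch.no_grad():
            conv.weight[idx] = 0.0
            if conv.bias is not None:
                conv.bias[idx] = 0.0
            buf = optimizer.state.get(conv.weight, {}).get("momentum_buffer")
            if buf is not None:
                buf[idx] = 0.0                    # adapt the momentum tensor

In a training loop, one would snapshot prev_weights = conv.weight.detach().clone() at the start of each epoch and call soft_prune at epoch end with a ratio that grows progressively toward the target compression rate; a final hard-pruning pass would then physically remove the zeroed filters (and the corresponding input channels of the following layer).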
Archived Files and Locations
application/pdf, 455.5 kB: arxiv.org (repository), web.archive.org (webarchive)
arXiv:1906.08746v4