Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

by Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre

Released as an article.

2019  

Abstract

As the demand to deploy neural network models on embedded systems grows, and given the associated memory footprint and energy consumption constraints, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics. In parallel, adversarial machine learning has recently attracted significant attention, unveiling critical flaws of machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image classification task. We show that quantization does not offer any robust protection, that it results in a severe form of gradient masking, and we advance some hypotheses to explain it. However, we experimentally observe poor transferability capacities, which we explain by a quantization value shift phenomenon and gradient misalignment, and we explore how these results can be exploited with an ensemble-based defense.
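For illustration only (this is not the authors' code), the following minimal Python/PyTorch sketch shows the two ingredients the abstract combines: post-training uniform quantization of weights to a low bitwidth, and an adversarial example crafted with the Fast Gradient Sign Method (FGSM). The model architecture, bit-width, input, and label are placeholder assumptions.

# Minimal sketch: low-bitwidth weight quantization + FGSM adversarial example.
# All concrete choices (2-bit quantization, MLP on 28x28 inputs, eps=0.03) are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_weights(model: nn.Module, bits: int = 2) -> None:
    """Uniformly quantize each weight tensor to 2**bits levels (post-training)."""
    with torch.no_grad():
        for p in model.parameters():
            lo, hi = p.min(), p.max()
            scale = (hi - lo) / (2 ** bits - 1) + 1e-12
            p.copy_(torch.round((p - lo) / scale) * scale + lo)

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: one signed-gradient step of size eps on the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
    quantize_weights(model, bits=2)   # low-bitwidth "embedded" model
    x = torch.rand(1, 1, 28, 28)      # placeholder input image
    y = torch.tensor([3])             # placeholder label
    x_adv = fgsm(model, x, y)         # perturbed input
    print(model(x).argmax(1), model(x_adv).argmax(1))

A white-box attack such as FGSM needs the model's gradients; the paper's discussion of gradient masking and transferability concerns what happens when those gradients are degraded by quantization or when the adversarial example is crafted on a different (substitute) model.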

Archived Files and Locations

application/pdf  2.0 MB
file_q6bsyuw4sfe6tfggxxepe7cmla
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-09-27
Version   v1
Language   en
arXiv  1909.12741v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: abffb829-8c79-423e-89db-9d8d8922666d
API URL: JSON