Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization
by
Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang
2020
Abstract
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks,
in which hidden features (patterns) are trained into an otherwise normal model
and activated only by specific inputs (called triggers), tricking the model
into producing unexpected behavior. In this paper, we create covert and
scattered triggers for backdoor attacks, called invisible backdoors, whose
triggers can deceive both DNN models and human inspection. We apply our
invisible backdoors through two state-of-the-art methods of embedding triggers
for backdoor attacks. The first approach, based on BadNets, embeds the trigger
into DNNs through steganography. The second approach, based on the Trojan
attack, uses two types of additional regularization terms to generate triggers
with irregular shape and size. We use the Attack Success Rate and Functionality
to measure the performance of our attacks. We introduce two novel definitions
of invisibility for human perception: one is conceptualized by the Perceptual
Adversarial Similarity Score (PASS) and the other by the Learned Perceptual
Image Patch Similarity (LPIPS). We show that the proposed invisible backdoors
can be fairly effective across various DNN models as well as four datasets
(MNIST, CIFAR-10, CIFAR-100, and GTSRB), by measuring their attack success
rates for the adversary, functionality for normal users, and invisibility
scores for administrators. We finally argue that the proposed invisible
backdoor attacks can effectively thwart state-of-the-art trojan backdoor
detection approaches, such as Neural Cleanse and TABOR.
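
The abstract mentions embedding the trigger through steganography. The sketch
below is a minimal, hypothetical illustration of least-significant-bit (LSB)
steganography for hiding a trigger bit string in an image; the paper's actual
encoding scheme may differ, and the function names here are illustrative only.

```python
import numpy as np

def embed_trigger_lsb(image, trigger_bits):
    """Hide a bit string in the least-significant bits of an image.

    Flipping only the lowest bit of each pixel changes intensities by at
    most 1, so the poisoned image is visually indistinguishable from the
    clean one (an "invisible" trigger in the steganographic sense).
    """
    flat = image.astype(np.uint8).flatten()
    assert len(trigger_bits) <= flat.size, "trigger too long for image"
    for i, bit in enumerate(trigger_bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite the LSB with a trigger bit
    return flat.reshape(image.shape)

def extract_trigger_lsb(image, n_bits):
    """Recover the first n_bits least-significant bits."""
    flat = image.astype(np.uint8).flatten()
    return [int(flat[i] & 1) for i in range(n_bits)]

# Example: hide the bit pattern of the ASCII string "TRIGGER" in a random image.
trigger = [int(b) for ch in "TRIGGER" for b in format(ord(ch), "08b")]
clean = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
poisoned = embed_trigger_lsb(clean, trigger)
assert extract_trigger_lsb(poisoned, len(trigger)) == trigger
assert np.max(np.abs(poisoned.astype(int) - clean.astype(int))) <= 1
```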
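For the Trojan-attack variant, the abstract refers to two additional
regularization terms that shape the generated trigger. The following is a
generic sketch of trigger optimization with norm regularizers (here L1 and L2,
chosen as assumptions) that keep the perturbation small and scattered; it does
not reproduce the paper's specific regularization terms.

```python
import torch

def generate_trigger(model, x, target, lam1=1e-2, lam2=1e-2, steps=200, lr=0.1):
    """Optimize an additive trigger `delta` so that x + delta is classified
    as `target`, while regularizers penalize large or concentrated triggers.

    model: a classifier mapping (N, C, H, W) images to logits
    x:     clean images in [0, 1], shape (N, C, H, W)
    target: LongTensor of target class indices, shape (N,)
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        logits = model(torch.clamp(x + delta, 0.0, 1.0))
        loss = (ce(logits, target)
                + lam1 * delta.abs().sum()      # L1 term: sparse, scattered trigger
                + lam2 * (delta ** 2).sum())    # L2 term: low-amplitude trigger
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()
```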
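The two attack-performance metrics named in the abstract can be computed
directly from model predictions. A minimal sketch, assuming `model_predict`
returns predicted class labels as a NumPy array:

```python
import numpy as np

def attack_success_rate(model_predict, poisoned_inputs, target_label):
    """Fraction of triggered inputs classified as the adversary's target label."""
    preds = model_predict(poisoned_inputs)
    return float(np.mean(preds == target_label))

def functionality(model_predict, clean_inputs, true_labels):
    """Accuracy of the backdoored model on clean inputs (utility for normal users)."""
    preds = model_predict(clean_inputs)
    return float(np.mean(preds == true_labels))
```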
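One of the two invisibility measures, LPIPS, can be evaluated with the public
`lpips` Python package. This is a sketch of how such a score could be
computed between clean and poisoned images; the exact backbone and
preprocessing used in the paper are assumptions here.

```python
import torch
import lpips  # pip install lpips

# LPIPS compares deep features of two images; lower scores mean the poisoned
# image is perceptually closer to the clean one (i.e., more "invisible").
loss_fn = lpips.LPIPS(net='alex')

def invisibility_lpips(clean, poisoned):
    """clean, poisoned: float tensors of shape (N, 3, H, W), scaled to [-1, 1]."""
    with torch.no_grad():
        return loss_fn(clean, poisoned).mean().item()
```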
Archived Files and Locations
application/pdf, 1.8 MB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1909.02742v2