Can Adversarial Weight Perturbations Inject Neural Backdoors?
release_jxf5od5najhenokkdbaak36siq
by
Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
2020
Abstract
Adversarial machine learning has exposed several security hazards of neural
models and has become an important research topic in recent times. Thus far,
the concept of an "adversarial perturbation" has exclusively been used with
reference to the input space, denoting a small, imperceptible change that
can cause an ML model to err. In this work we extend the idea of "adversarial
perturbations" to the space of model weights, specifically to inject backdoors
in trained DNNs, which exposes a security risk of using publicly available
trained models. Here, injecting a backdoor refers to obtaining a desired
outcome from the model when a trigger pattern is added to the input, while
retaining the original model predictions on a non-triggered input. From the
perspective of an adversary, we characterize these adversarial perturbations to
be constrained within an ℓ_∞ norm ball around the original model
weights. We introduce adversarial perturbations in the model weights via
projected gradient descent on a composite loss over the original model's
predictions and the desired trigger behavior. We empirically show that these adversarial
weight perturbations exist universally across several computer vision and
natural language processing tasks. Our results show that backdoors can be
successfully injected with a very small average relative change in model weight
values for several applications.
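To make the described attack concrete, the following is a minimal sketch (PyTorch) of projected gradient descent on model weights with a composite loss, as outlined in the abstract: one term keeps the perturbed model's predictions close to the original model on clean inputs, another forces a chosen target class on trigger-carrying inputs, and each update is projected back into an ℓ_∞ ball around the original weights. All names here (inject_backdoor, apply_trigger, epsilon, target_class, alpha) are illustrative assumptions, not the authors' implementation; the paper reports relative weight changes, whereas this sketch uses an absolute per-weight bound for simplicity.

import copy
import torch
import torch.nn.functional as F

def inject_backdoor(model, clean_loader, apply_trigger, target_class,
                    epsilon=0.01, steps=100, lr=1e-3, alpha=1.0):
    # Frozen copy of the original model, used only to define the "retain" loss.
    original = copy.deepcopy(model)
    for p in original.parameters():
        p.requires_grad_(False)
    orig_weights = [p.detach().clone() for p in model.parameters()]

    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, _ in clean_loader:
            x_trig = apply_trigger(x)  # add the trigger pattern to the inputs
            y_target = torch.full((x.size(0),), target_class, dtype=torch.long)

            # Composite loss: (i) match the original model's predictions on
            # clean inputs, (ii) predict the target class on triggered inputs.
            retain = F.kl_div(F.log_softmax(model(x), dim=-1),
                              F.softmax(original(x), dim=-1),
                              reduction="batchmean")
            attack = F.cross_entropy(model(x_trig), y_target)
            loss = retain + alpha * attack

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Projection step: keep every weight within an l_inf ball of
            # radius epsilon around its original value.
            with torch.no_grad():
                for p, w0 in zip(model.parameters(), orig_weights):
                    p.copy_(w0 + torch.clamp(p - w0, -epsilon, epsilon))
    return model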
Archived Files and Locations
application/pdf  767.1 kB  (file_klrbtyicgrfyjm3quecpjdagei)
arxiv.org (repository) | web.archive.org (webarchive)
Crossref Metadata (via API)
Worldcat
wikidata.org
CORE.ac.uk
Semantic Scholar
Google Scholar