Can Adversarial Weight Perturbations Inject Neural Backdoors?
release_jxf5od5najhenokkdbaak36siq

by Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang

Released as an article.

2020  

Abstract

Adversarial machine learning has exposed several security hazards of neural models and has become an important research topic in recent times. Thus far, the concept of an "adversarial perturbation" has exclusively been used with reference to the input space, referring to a small, imperceptible change that can cause an ML model to err. In this work we extend the idea of "adversarial perturbations" to the space of model weights, specifically to inject backdoors in trained DNNs, which exposes a security risk of using publicly available trained models. Here, injecting a backdoor refers to obtaining a desired outcome from the model when a trigger pattern is added to the input, while retaining the original model predictions on a non-triggered input. From the perspective of an adversary, we characterize these adversarial perturbations to be constrained within an ℓ_∞ norm around the original model weights. We introduce adversarial perturbations in the model weights using a composite loss on the predictions of the original model and the desired trigger through projected gradient descent. We empirically show that these adversarial weight perturbations exist universally across several computer vision and natural language processing tasks. Our results show that backdoors can be successfully injected with a very small average relative change in model weight values for several applications.
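The procedure outlined in the abstract can be sketched roughly as follows. This is a minimal PyTorch illustration of weight-space projected gradient descent with a composite loss, not the authors' implementation; the helper `apply_trigger`, the choice of a KL term for preserving clean predictions, and all hyperparameter values (`eps`, `step_size`, `lam`, `steps`) are assumptions made for the example.

```python
import copy
import torch
import torch.nn.functional as F

def inject_backdoor(model, loader, apply_trigger, target_class,
                    eps=0.01, step_size=1e-3, steps=10, lam=1.0,
                    device="cpu"):
    """Perturb model weights within an l_inf ball of radius `eps` around the
    original weights so that triggered inputs map to `target_class` while
    clean predictions stay close to the original model's outputs (sketch)."""
    model = model.to(device).eval()
    frozen = copy.deepcopy(model)                      # reference for clean behaviour
    for p in frozen.parameters():
        p.requires_grad_(False)
    original = {n: p.detach().clone() for n, p in model.named_parameters()}

    opt = torch.optim.SGD(model.parameters(), lr=step_size)

    for _ in range(steps):
        for x, _ in loader:
            x = x.to(device)
            x_trig = apply_trigger(x)                  # assumed trigger-stamping helper
            y_trig = torch.full((x.size(0),), target_class,
                                dtype=torch.long, device=device)

            with torch.no_grad():
                clean_ref = frozen(x)                  # original model's predictions

            # Composite loss: stay close to the original predictions on clean
            # inputs, and force the target class on triggered inputs.
            loss_clean = F.kl_div(F.log_softmax(model(x), dim=1),
                                  F.softmax(clean_ref, dim=1),
                                  reduction="batchmean")
            loss_trig = F.cross_entropy(model(x_trig), y_trig)
            loss = loss_clean + lam * loss_trig

            opt.zero_grad()
            loss.backward()
            opt.step()

            # Projection step: clip every weight back into the l_inf ball
            # around its original value.
            with torch.no_grad():
                for n, p in model.named_parameters():
                    ref = original[n]
                    p.copy_(torch.max(torch.min(p, ref + eps), ref - eps))
    return model
```

The projection after each gradient step is what keeps the perturbed weights within the ℓ_∞ constraint; tightening `eps` corresponds to the "small average relative change" regime the abstract refers to.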

Archived Files and Locations

application/pdf  767.1 kB
file_klrbtyicgrfyjm3quecpjdagei
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   accepted
Date   2020-09-21
Version   v2
Language   en
Catalog Record
Revision: 03dec565-1408-4bfd-acf1-08da2f223379