Appending Adversarial Frames for Universal Video Attack

by Zhikai Chen, Lingxi Xie, Shanmin Pang, Yong He, Qi Tian

Released as an article.

2019  

Abstract

There have been many efforts to attack image classification models with adversarial perturbations, but the same topic for video classification has not yet been thoroughly studied. This paper presents a novel idea for video-based attacks, which appends a few dummy frames (e.g., containing the text `thanks for watching') to a video clip and then adds adversarial perturbations only on these new frames. Our approach enjoys three major benefits, namely, a high success rate, low perceptibility, and strong transferability across different networks. These benefits mostly come from the common dummy frames, which push all samples towards the classification boundary. On the other hand, such attacks are easy to conceal, since most people would not notice the abnormality in the perturbed video clips. We perform experiments on two popular datasets with six state-of-the-art video classification models, and demonstrate the effectiveness of our approach in the scenario of universal video attacks.
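
The core mechanic described in the abstract (perturbing only the appended dummy frames while leaving the original frames untouched) can be illustrated with a short sketch. The snippet below is not the authors' implementation: the classifier is a toy stand-in, the attack is per-clip rather than the universal cross-video optimization reported in the paper, and all names and hyper-parameters (ToyVideoClassifier, attack_appended_frames, num_dummy, epsilon) are illustrative assumptions.

# Minimal sketch of the appended-frame attack mechanics (not the authors' code).
# A real setting would use a pretrained video classifier and, for the universal
# attack, optimize one perturbation shared across many clips.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVideoClassifier(nn.Module):
    """Stand-in video classifier: temporal average pooling plus a linear head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Linear(3 * 32 * 32, num_classes)

    def forward(self, clip):                 # clip: (T, C, H, W) in [0, 1]
        pooled = clip.mean(dim=0)            # average over the temporal dimension
        return self.head(pooled.flatten())

def attack_appended_frames(model, clip, true_label, num_dummy=2,
                           epsilon=8 / 255, steps=50, lr=0.01):
    """Append num_dummy plain frames (e.g., a 'thanks for watching' card) and
    optimize an L_inf-bounded perturbation on those appended frames only."""
    dummy = torch.full((num_dummy, *clip.shape[1:]), 0.5)   # gray placeholder frames
    delta = torch.zeros_like(dummy, requires_grad=True)     # perturbation on dummy frames only
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        perturbed = torch.cat([clip, (dummy + delta).clamp(0, 1)], dim=0)
        logits = model(perturbed)
        # Untargeted objective: increase the loss on the true label so the
        # prediction is pushed across the decision boundary.
        loss = -F.cross_entropy(logits.unsqueeze(0), true_label.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)                  # keep the perturbation small

    return torch.cat([clip, (dummy + delta.detach()).clamp(0, 1)], dim=0)

if __name__ == "__main__":
    model = ToyVideoClassifier()
    clip = torch.rand(16, 3, 32, 32)          # toy 16-frame clip
    label = torch.tensor(3)
    adv_clip = attack_appended_frames(model, clip, label)
    print(adv_clip.shape)                      # (18, 3, 32, 32): original frames + 2 dummies

Only delta, which lives entirely on the appended frames, is optimized; the original clip is never modified, matching the low-perceptibility argument in the abstract.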

Archived Files and Locations

application/pdf  1.6 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-12-10
Version   v1
Language   en
arXiv  1912.04538v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: fd9c7a0d-0c3a-47de-a0fa-88168adbb3a2