Can 3D Adversarial Logos Cloak Humans?

by Tianlong Chen, Yi Wang, Jingyang Zhou, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang

Released as an article.

2020  

Abstract

Following the trend of adversarial attacks, researchers have attempted to fool trained object detectors in 2D scenes. Among them, an intriguing new form of attack with potential real-world usage is to append adversarial patches (e.g., logos) to images. Nevertheless, much less is known about adversarial attacks from 3D rendering views, which is essential for the attack to remain persistently strong in the physical world. This paper presents a new 3D adversarial logo attack: we construct an arbitrary-shape logo from a 2D texture image and map this image onto a 3D adversarial logo via a texture mapping called logo transformation. The resulting 3D adversarial logo is then treated as an adversarial texture, enabling easy manipulation of its shape and position. This greatly extends the versatility of adversarial training for computer-graphics-synthesized imagery. Contrary to the traditional adversarial patch, this new form of attack is mapped into the 3D object world and back-propagates to the 2D image domain through differentiable rendering. In addition, and unlike existing adversarial patches, our new 3D adversarial logo is shown to fool state-of-the-art deep object detectors robustly under model rotations, taking one step further toward realistic attacks in the physical world. Our code is available at https://github.com/TAMU-VITA/3D_Adversarial_Logo.
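
A minimal sketch, assuming PyTorch and torchvision, of the optimization loop the abstract describes: the 2D logo texture is the only trainable variable; it is mapped onto a human mesh, rendered from several viewpoints through a differentiable renderer, and optimized to suppress the detector's "person" confidence. The names map_logo_to_mesh, render_scene, and human_mesh are hypothetical placeholders standing in for the paper's logo transformation and renderer, not the released implementation.

import torch
import torchvision

# Frozen, pretrained detector acting as the attack target (assumption: a COCO
# Faster R-CNN stands in for the detectors evaluated in the paper).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
for p in detector.parameters():
    p.requires_grad_(False)

# The 2D logo texture is the only variable being optimized.
logo_texture = torch.rand(3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([logo_texture], lr=0.01)

for step in range(500):
    optimizer.zero_grad()
    loss = torch.zeros(())
    # Sample several camera angles so the logo stays adversarial under rotation.
    for angle in torch.linspace(0.0, 360.0, steps=9)[:-1]:
        # Hypothetical helpers: the "logo transformation" pastes the 2D texture
        # onto the 3D human mesh, and a differentiable renderer produces a
        # 3xHxW image in [0, 1] from the given viewpoint.
        textured_mesh = map_logo_to_mesh(human_mesh, logo_texture)   # assumed helper
        image = render_scene(textured_mesh, camera_angle=angle)      # assumed helper
        detections = detector([image])[0]
        # Disappearance-style objective: push down confident "person" (COCO label 1) scores.
        person_scores = detections["scores"][detections["labels"] == 1]
        if person_scores.numel() > 0:
            loss = loss + person_scores.max()
    if loss.requires_grad:
        loss.backward()                      # gradients flow through the renderer back to the texture
        optimizer.step()
        logo_texture.data.clamp_(0.0, 1.0)   # keep the texture a valid image

Because the rendering step is differentiable, the same loop can in principle vary the logo's shape and placement on the mesh, which is the flexibility the abstract highlights over flat adversarial patches.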

Archived Files and Locations

application/pdf  8.7 MB
web.archive.org (webarchive)
arxiv.org (repository)
Type  article
Stage   submitted
Date   2020-06-25
Version   v1
Language   en
arXiv  2006.14655v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 090cc70a-be89-4fbf-8a88-1a7521f644c3