On the Robustness of Semantic Segmentation Models to Adversarial Attacks
by
Anurag Arnab, Ondrej Miksik, Philip H.S. Torr
2017
Abstract
Deep Neural Networks (DNNs) have demonstrated exceptional performance on most
recognition tasks such as image classification and segmentation. However, they
have also been shown to be vulnerable to adversarial examples. This phenomenon
has recently attracted a lot of attention, but it has not been extensively
studied on multiple, large-scale datasets or on structured prediction tasks
such as semantic segmentation, which often require more specialised networks
with additional components such as CRFs, dilated convolutions, skip-connections
and multiscale processing. In this paper, we present, to our knowledge, the
first rigorous evaluation of adversarial attacks on modern semantic
segmentation models, using two large-scale datasets. We analyse the effect of
different network architectures, model capacity and multiscale processing, and
show that many observations made on the task of classification do not always
transfer to this more complex task. Furthermore, we show how mean-field
inference in deep structured models and multiscale processing (and, more
generally, input transformations) naturally implement recently proposed
adversarial defenses. Our observations will aid future efforts in
understanding and defending against adversarial examples. Moreover, in the
shorter term, we show how to effectively benchmark robustness and identify
which segmentation models should currently be preferred in safety-critical
applications due to their inherent robustness.
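
The attacks studied in the paper include the FGSM family adapted to dense
prediction, where the loss is the per-pixel cross-entropy averaged over the
whole label map. As a rough illustration, the following PyTorch sketch applies
a single untargeted FGSM step to a segmentation network; the model interface,
epsilon value and [0, 1] image range are assumptions for illustration, not the
authors' exact protocol.

import torch
import torch.nn.functional as F

def fgsm_segmentation(model, image, label_map, epsilon=8.0 / 255.0):
    """Single-step untargeted FGSM for semantic segmentation.

    image:     (1, 3, H, W) tensor with values in [0, 1]
    label_map: (1, H, W) tensor of ground-truth class indices
    Assumes `model(image)` returns per-pixel logits of shape (1, C, H, W).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # Cross-entropy averaged over all pixels of the label map.
    loss = F.cross_entropy(logits, label_map)
    loss.backward()
    # One signed-gradient ascent step, clipped back to the valid image range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

Iterative variants of this attack repeat the same signed-gradient step with a
smaller step size, projecting back into the epsilon-ball after each iteration.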
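
The abstract's claim that multiscale processing (and, more generally, input
transformations) naturally implements recently proposed defenses can be made
concrete with a small sketch: averaging per-pixel logits over rescaled copies
of the input. The scales below and the model interface are illustrative
assumptions, not the paper's exact configuration.

import torch.nn.functional as F

def multiscale_logits(model, image, scales=(0.5, 0.75, 1.0, 1.25)):
    """Average per-pixel logits over rescaled copies of `image`.

    Assumes `model` maps a (1, 3, h, w) image to (1, C, h, w) logits.
    """
    _, _, h, w = image.shape
    total = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                               align_corners=False)
        logits = model(scaled)
        # Resize logits back to the original resolution before averaging.
        total = total + F.interpolate(logits, size=(h, w), mode='bilinear',
                                      align_corners=False)
    return total / len(scales)

Because the perturbation must survive several resampling steps, averaging over
scales behaves like the input-transformation defenses the paper relates it to.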
Archived Files and Locations
application/pdf 4.9 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv:1711.09856v1