Revisiting Batch Normalization for Improving Corruption Robustness
by Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon (2021)
Abstract
The performance of DNNs trained on clean images has been shown to decrease
when the test images have common corruptions. In this work, we interpret
corruption robustness as a domain shift and propose to rectify batch
normalization (BN) statistics for improving model robustness. This is motivated
by perceiving the shift from the clean domain to the corruption domain as a
style shift that is represented by the BN statistics. We find that simply
estimating and adapting the BN statistics on a few (32, for instance)
representative samples, without retraining the model, improves the corruption
robustness by a large margin on several benchmark datasets with a wide range of
model architectures. For example, on ImageNet-C, statistics adaptation improves
the top-1 accuracy of ResNet50 from 39.2% to 48.7%. Moreover, we find that this
technique can further improve state-of-the-art robust models from 58.1% to
63.3%.
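
Conceptually, the adaptation amounts to re-estimating each BN layer's running mean and variance on a small batch of corrupted samples while keeping all learned weights frozen. Below is a minimal PyTorch sketch of that idea; the resnet50 model, the batch size of 32, and the random stand-in batch are illustrative assumptions, not the authors' exact code.

    import torch
    import torchvision.models as models

    model = models.resnet50(pretrained=True)  # any BN-based architecture

    def adapt_bn_statistics(model, corrupted_batch):
        # Put the whole model in eval mode, then switch only the BN layers
        # back to train mode so their running statistics are updated on the
        # forward pass while all learned weights stay untouched.
        model.eval()
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                m.reset_running_stats()  # discard clean-domain statistics
                m.momentum = None        # None => cumulative moving average
                m.train()
        # One forward pass over the small corrupted batch re-estimates the
        # BN mean/variance; no_grad ensures no gradients or weight updates.
        with torch.no_grad():
            model(corrupted_batch)
        model.eval()
        return model

    # Hypothetical usage: a single batch of 32 corrupted test images
    # (a random tensor here stands in for real corrupted data).
    corrupted_batch = torch.randn(32, 3, 224, 224)
    model = adapt_bn_statistics(model, corrupted_batch)

Resetting the running statistics before the forward pass means the adapted model relies entirely on the corruption-domain estimate rather than blending it with the clean-domain one; variants that interpolate between the two statistics are also conceivable but not shown here.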
arXiv:2010.03630v4