Group Whitening: Balancing Learning Efficiency and Representational Capacity

by Lei Huang, Yi Zhou, Li Liu, Fan Zhu, Ling Shao

Released as an article.

2020  

Abstract

Batch normalization (BN) is an important technique commonly incorporated into deep learning models to perform standardization within mini-batches. The merits of BN in improving a model's learning efficiency can be further amplified by applying whitening, while its drawbacks in estimating population statistics for inference can be avoided through group normalization (GN). This paper proposes group whitening (GW), which exploits the advantages of the whitening operation and avoids the disadvantages of normalization within mini-batches. In addition, we analyze the constraints imposed on features by normalization, and show how the batch size (group number) affects the performance of batch (group) normalized networks, from the perspective of a model's representational capacity. This analysis provides theoretical guidance for applying GW in practice. Finally, we apply the proposed GW to ResNet and ResNeXt architectures and conduct experiments on the ImageNet and COCO benchmarks. Results show that GW consistently improves the performance of different architectures, with absolute gains of 1.02% ∼ 1.49% in top-1 accuracy on ImageNet and 1.82% ∼ 3.21% in bounding box AP on COCO.
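The core idea described in the abstract — whitening features within per-sample groups rather than across the mini-batch — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes ZCA whitening, a single 1-D feature vector per sample, and the hypothetical helper names `zca_whiten` and `group_whiten`. Because statistics are computed per sample, no population statistics need to be estimated for inference.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (shape g x m): zero mean, ~identity covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)          # center each row
    cov = Xc @ Xc.T / Xc.shape[1]                   # g x g covariance across rows
    eigvals, eigvecs = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T  # ZCA whitening matrix
    return W @ Xc

def group_whiten(x, num_groups):
    """Whiten one sample's C features within groups -- no batch statistics used."""
    C = x.shape[0]
    X = x.reshape(num_groups, C // num_groups)      # g x (C/g), as in GN's grouping
    return zca_whiten(X).reshape(C)

# Example: 64 features, 4 groups of 16 channels each
x = np.random.randn(64)
y = group_whiten(x, num_groups=4)
```

After the call, the reshaped output `y.reshape(4, 16)` has zero-mean rows and an approximately identity covariance across the 4 groups, which is the whitening constraint the paper contrasts with plain per-group standardization.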

Archived Files and Locations

application/pdf, 819.9 kB
  arxiv.org (repository)
  web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-11-24
Version: v3
Language: en
arXiv: 2009.13333v3