Group Whitening: Balancing Learning Efficiency and Representational Capacity
by
Lei Huang, Yi Zhou, Li Liu, Fan Zhu, Ling Shao
2020
Abstract
Batch normalization (BN) is an important technique commonly incorporated into
deep learning models to perform standardization within mini-batches. The merits
of BN in improving a model's learning efficiency can be further amplified by
applying whitening, while its drawbacks in estimating population statistics for
inference can be avoided through group normalization (GN). This paper proposes
group whitening (GW), which exploits the advantages of the whitening operation
and avoids the disadvantages of normalization within mini-batches. In addition,
we analyze the constraints imposed on features by normalization, and show how
the batch size (group number) affects the performance of batch (group)
normalized networks, from the perspective of a model's representational capacity.
This analysis provides theoretical guidance for applying GW in practice.
Finally, we apply the proposed GW to ResNet and ResNeXt architectures and
conduct experiments on the ImageNet and COCO benchmarks. Results show that GW
consistently improves the performance of different architectures, with absolute
gains of 1.02% ∼ 1.49% in top-1 accuracy on ImageNet and 1.82%
∼ 3.21% in bounding box AP on COCO.
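For intuition, below is a minimal sketch of the group-whitening operation the abstract describes: the channels of each sample are split into groups, and each group is ZCA-whitened over its spatial positions (so GW, like GN, needs no mini-batch statistics). This is an illustrative assumption, not the paper's code: the function name group_whitening is hypothetical, and the inverse square root is computed here by eigendecomposition for clarity, whereas an efficient implementation might use Newton's iterations as in prior whitening work.

import torch

def group_whitening(x, num_groups=32, eps=1e-5):
    """ZCA-whiten each channel group of every sample independently.

    x: (N, C, H, W). Channels are split into num_groups groups of
    size c = C // num_groups, and each (c, H*W) slice is whitened so
    that its covariance is (approximately) the identity.
    """
    N, C, H, W = x.shape
    c = C // num_groups
    xg = x.view(N * num_groups, c, H * W)

    # Center over spatial positions, which act as the "samples"
    # of each group.
    mean = xg.mean(dim=-1, keepdim=True)
    xc = xg - mean

    # Per-group covariance of the c channels: shape (c, c) per slice.
    cov = xc @ xc.transpose(1, 2) / xc.shape[-1]
    cov = cov + eps * torch.eye(c, device=x.device, dtype=x.dtype)

    # ZCA whitening matrix cov^{-1/2} via eigendecomposition.
    eigval, eigvec = torch.linalg.eigh(cov)
    inv_sqrt = torch.diag_embed(eigval.clamp_min(eps).rsqrt())
    whiten = eigvec @ inv_sqrt @ eigvec.transpose(1, 2)

    # Learnable per-channel scale/shift (as in GN) is omitted here.
    out = whiten @ xc
    return out.view(N, C, H, W)

As with group normalization, a learnable per-channel scale and shift would typically follow this operation in a network layer.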
Archived Files and Locations
application/pdf 819.9 kB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 2009.13333v3