Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning
by
Zhiqiang Shen and Zechun Liu and Zhuang Liu and Marios Savvides and Trevor Darrell and Eric Xing
2021
Abstract
In supervised learning, smoothing the label or prediction distribution during neural
network training has proven useful in preventing the model from becoming
over-confident, and is crucial for learning more robust visual representations.
This observation motivates us to explore ways to flatten predictions in
unsupervised learning. Since human-annotated labels are not used in
unsupervised learning, we introduce a straightforward approach that perturbs the
input image space in order to soften the output prediction space indirectly,
while assigning new label values in the unsupervised frameworks
accordingly. Despite its conceptual simplicity, we show empirically that with
this simple solution -- Unsupervised image mixtures (Un-Mix) -- we can learn more
robust visual representations from the transformed input. Extensive experiments
are conducted on CIFAR-10, CIFAR-100, STL-10, Tiny ImageNet and standard
ImageNet with the popular unsupervised methods SimCLR, BYOL, MoCo V1&V2, etc. Our
proposed image mixture and label assignment strategy obtains consistent
improvements of 1~3% using exactly the same hyperparameters and training
procedures as the base methods.
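The abstract's core idea follows the general mixup recipe: two images are blended with a coefficient lam drawn from a Beta distribution, and the (label-free) training objective is reweighted by lam accordingly. Below is a minimal NumPy sketch of the image-mixing step; the function name and the reversed-batch pairing are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mix_images(batch, alpha=1.0, seed=0):
    """Mixup-style image mixture: blend each image with its
    reversed-batch counterpart using a Beta-sampled coefficient.
    Generic sketch, not the authors' exact code."""
    rng = np.random.default_rng(seed)
    lam = float(rng.beta(alpha, alpha))   # mixture coefficient in [0, 1]
    mixed = lam * batch + (1.0 - lam) * batch[::-1]
    # In a contrastive framework, the loss on the mixed view would be
    # reweighted analogously:
    #   lam * loss(mixed, batch) + (1 - lam) * loss(mixed, batch[::-1])
    return mixed, lam

# Example: a batch of 4 tiny "images" in NCHW layout (4 x 1 x 2 x 2).
batch = np.arange(16, dtype=np.float64).reshape(4, 1, 2, 2)
mixed, lam = mix_images(batch)
```

Because the inputs are perturbed rather than the labels, this softens the output prediction space indirectly, which is the mechanism the abstract describes.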
Archived Files and Locations
application/pdf, 9.9 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv: 2003.05438v2