Distill on the Go: Online knowledge distillation in self-supervised learning
by
Prashant Bhat, Elahe Arani, Bahram Zonooz
2021
Abstract
Self-supervised learning solves pretext prediction tasks that do not require
annotations to learn feature representations. For vision tasks, pretext tasks
such as predicting rotation and solving jigsaw puzzles are created solely from
the input data. Yet, predicting this known information helps in learning representations
useful for downstream tasks. However, recent works have shown that wider and
deeper models benefit more from self-supervised learning than smaller models.
To address the issue of self-supervised pre-training of smaller models, we
propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using
single-stage online knowledge distillation to improve the representation
quality of the smaller models. We employ a deep mutual learning strategy in which
two models collaboratively learn from each other.
Specifically, each model is trained using a self-supervised learning objective
along with a distillation loss that aligns its softmax probabilities over
similarity scores with those of its peer. We conduct extensive experiments on
multiple benchmark datasets, learning objectives, and architectures to
demonstrate the potential of our proposed method. Our results show significant
performance gains in the presence of noisy and limited labels, as well as
improved generalization to out-of-distribution data.
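
The alignment step described in the abstract can be made concrete with a short
sketch. Below is a minimal PyTorch illustration, assuming a SimCLR-style
contrastive setup; the function names (similarity_logits, peer_alignment_loss),
the temperature value, and the choice of KL divergence as the alignment measure
are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def similarity_logits(z_view1, z_view2, temperature=0.1):
    # Pairwise cosine similarities between a model's projections of two
    # augmented views of the same batch, scaled by a temperature
    # (the value 0.1 is illustrative).
    z_view1 = F.normalize(z_view1, dim=1)
    z_view2 = F.normalize(z_view2, dim=1)
    return z_view1 @ z_view2.t() / temperature

def peer_alignment_loss(logits_student, logits_peer):
    # Align the student's softmax over similarity scores with the peer's,
    # here via KL divergence; the peer's distribution is detached so that
    # this term only updates the student.
    target = F.softmax(logits_peer.detach(), dim=1)
    log_pred = F.log_softmax(logits_student, dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")

# Toy usage: random tensors standing in for each peer's projections of
# two augmented views of the same batch (batch of 8, 128-dim projections).
a1 = torch.randn(8, 128, requires_grad=True)
a2 = torch.randn(8, 128)
b1, b2 = torch.randn(8, 128), torch.randn(8, 128)
loss = peer_alignment_loss(similarity_logits(a1, a2), similarity_logits(b1, b2))
loss.backward()

In this reading, each peer's total loss would be its own contrastive objective
plus a weighted copy of this alignment term, computed symmetrically so that
both models distill into each other in a single training stage.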
arXiv:2104.09866v1