Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks
by Nathaniel Simard and Guillaume Lagrange, 2021
Abstract
Recent work on few-shot learning <cit.> showed that the
quality of learned representations plays an important role in few-shot
classification performance. On the other hand, the goal of self-supervised
learning is to recover useful semantic information of the data without the use
of class labels. In this work, we exploit the complementarity of both paradigms
via a multi-task framework where we leverage recent self-supervised methods as
auxiliary tasks. We found that combining multiple tasks is often beneficial,
and that solving them simultaneously can be done efficiently. Our results
suggest that self-supervised auxiliary tasks are effective data-dependent
regularizers for representation learning. Our code is available at:
<https://github.com/nathanielsimard/improving-fs-ssl>.
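To make the multi-task framing concrete, below is a minimal sketch of how a supervised few-shot classification loss can be summed with a weighted self-supervised auxiliary loss (here, rotation prediction) over a shared encoder. This is an illustrative PyTorch-style example, not the authors' implementation (see the linked repository for that); all module and function names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskModel(nn.Module):
    """Shared encoder with a main classification head and an
    auxiliary self-supervised head (illustrative, not the paper's code)."""
    def __init__(self, feat_dim=64, num_classes=64):
        super().__init__()
        # Shared backbone; a conv net in practice, a toy MLP here.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU()
        )
        self.classifier = nn.Linear(feat_dim, num_classes)  # main task head
        self.rotation_head = nn.Linear(feat_dim, 4)         # auxiliary head: 0/90/180/270 degrees

    def forward(self, x):
        return self.encoder(x)

def multi_task_loss(model, images, labels,
                    rotated_images, rotation_labels, aux_weight=1.0):
    """Sum the main classification loss and an auxiliary rotation-prediction
    loss; the auxiliary task acts as a data-dependent regularizer."""
    main_loss = F.cross_entropy(model.classifier(model(images)), labels)
    aux_loss = F.cross_entropy(model.rotation_head(model(rotated_images)),
                               rotation_labels)
    return main_loss + aux_weight * aux_loss
```

Because both heads share the encoder, a single forward/backward pass per batch optimizes both objectives simultaneously, which is what makes solving multiple tasks at once efficient.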
arXiv: 2101.09825v1