Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks

by Nathaniel Simard, Guillaume Lagrange

Released as an article.

2021  

Abstract

Recent work on few-shot learning <cit.> showed that the quality of learned representations plays an important role in few-shot classification performance. Self-supervised learning, on the other hand, aims to recover useful semantic information from the data without the use of class labels. In this work, we exploit the complementarity of both paradigms via a multi-task framework where we leverage recent self-supervised methods as auxiliary tasks. We found that combining multiple tasks is often beneficial, and that solving them simultaneously can be done efficiently. Our results suggest that self-supervised auxiliary tasks are effective data-dependent regularizers for representation learning. Our code is available at: <https://github.com/nathanielsimard/improving-fs-ssl>.
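
The multi-task setup described in the abstract amounts to optimizing a weighted sum of the main few-shot classification loss and one or more auxiliary self-supervised losses computed on the same backbone. The PyTorch snippet below is a minimal sketch of that idea, assuming a rotation-prediction pretext task as the auxiliary objective and a single scalar weight; the module names, heads, and weighting are illustrative assumptions, not the authors' implementation (see the linked repository for that).

# Minimal sketch (not the authors' code): a shared backbone with a main
# classification head and an auxiliary rotation-prediction head, trained
# with a weighted sum of the two losses. All names and the choice of
# rotation prediction as the pretext task are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                    # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)  # main classification head
        self.rot_head = nn.Linear(feat_dim, 4)      # auxiliary head: 0/90/180/270 degrees

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)


def rotate_batch(x):
    # Build rotated copies of each image and the matching rotation labels.
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rotations, dim=0), labels


def training_step(model, images, labels, aux_weight=1.0):
    # Main few-shot classification loss on the original images.
    logits, _ = model(images)
    main_loss = F.cross_entropy(logits, labels)

    # Auxiliary self-supervised loss: predict the applied rotation.
    rot_images, rot_labels = rotate_batch(images)
    _, rot_logits = model(rot_images)
    aux_loss = F.cross_entropy(rot_logits, rot_labels.to(images.device))

    # The auxiliary term acts as a data-dependent regularizer on the shared backbone.
    return main_loss + aux_weight * aux_loss

Both losses are computed in the same forward/backward pass, which is what makes solving the tasks simultaneously efficient; additional pretext tasks would simply contribute further weighted terms to the returned loss.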

Archived Files and Locations

application/pdf  571.2 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-01-24
Version   v1
Language   en
arXiv  2101.09825v1