Predicting What You Already Know Helps: Provable Self-Supervised Learning release_ulpr5splhzft7bbdkjubwvxwb4

by Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo

Released as an article.

2021  

Abstract

Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data to learn useful semantic representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words in text; yet predicting this known information helps in learning representations effective for downstream prediction tasks. We posit a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantees learning a good representation. Formally, we quantify how the approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task by just training a linear layer on top of the learned representation. We prove the linear layer yields small approximation error even for complex ground truth function classes and drastically reduces labeled sample complexity. Next, we show a simple modification of our method leads to nonlinear CCA, analogous to the popular SimSiam algorithm, and show similar guarantees for nonlinear CCA.
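The sketch below is a minimal, hypothetical illustration (not the authors' code) of the two-stage recipe described in the abstract: first learn a representation by solving a reconstruction-based pretext task that predicts one part of the input (x2) from another part (x1), then train only a linear layer on top of the frozen representation using a small labeled set. All module names and dimensions (PretextNet, D1, D2, H, NUM_CLASSES) are illustrative assumptions.

```python
# Minimal sketch of reconstruction-based self-supervised pretraining followed
# by a linear probe, assuming PyTorch and toy tensor shapes.
import torch
import torch.nn as nn

D1, D2, H, NUM_CLASSES = 128, 64, 256, 10  # assumed dimensions

class PretextNet(nn.Module):
    """Representation psi(x1) trained to reconstruct the held-out part x2."""
    def __init__(self):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(D1, H), nn.ReLU(), nn.Linear(H, H))
        self.head = nn.Linear(H, D2)  # reconstruction head, discarded after pretraining

    def forward(self, x1):
        z = self.psi(x1)
        return self.head(z), z

def train_pretext(x1, x2, epochs=100, lr=1e-3):
    """Stage 1: unlabeled data only; minimize ||head(psi(x1)) - x2||^2."""
    model = PretextNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, _ = model(x1)
        loss = nn.functional.mse_loss(recon, x2)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def train_linear_probe(model, x1_labeled, y_labeled, epochs=100, lr=1e-2):
    """Stage 2: small labeled set; fit only a linear layer on the frozen psi."""
    with torch.no_grad():
        _, feats = model(x1_labeled)  # frozen representation
    probe = nn.Linear(H, NUM_CLASSES)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(probe(feats), y_labeled)
        opt.zero_grad(); loss.backward(); opt.step()
    return probe

# Usage with synthetic stand-in data (shapes only; a real pretext pair would be,
# e.g., an image with a patch masked out (x1) and the missing patch (x2)).
x1_unlab, x2_unlab = torch.randn(1024, D1), torch.randn(1024, D2)
x1_lab, y_lab = torch.randn(64, D1), torch.randint(0, NUM_CLASSES, (64,))
model = train_pretext(x1_unlab, x2_unlab)
probe = train_linear_probe(model, x1_lab, y_lab)
```

The paper's guarantee concerns exactly this split: when x1 and x2 are approximately independent given the label and latent variables, the learned representation makes the downstream task solvable by the linear probe alone, with far fewer labeled examples.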

Archived Files and Locations

application/pdf  1.7 MB
file_yhxmiywqjvcl7opmcoiiowqzsu
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2021-11-14
Version: v2
Language: en
arXiv: 2008.01064v2
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 7cc24c8a-580b-44c4-bc06-6db39f080d21