Online Continual Learning Via Candidates Voting

by Jiangpeng He, Fengqing Zhu

Released as an article.

2021  

Abstract

Continual learning in the online scenario aims to learn a sequence of new tasks from a data stream, using each datum only once for training. This is more realistic than the offline mode, which assumes all data from each new task are available in advance. However, the problem remains under-explored in the challenging class-incremental setting, where the model must classify all classes seen so far during inference. In particular, performance degrades as the number of tasks grows or as more classes must be learned per task. In addition, most existing methods require storing original data as exemplars for knowledge replay, which may not be feasible for applications with a limited memory budget or privacy concerns. In this work, we introduce an effective and memory-efficient method for online continual learning in the class-incremental setting, based on candidates selection from each learned task together with prior incorporation, using stored feature embeddings instead of original data as exemplars. Our proposed method, implemented for image classification, achieves the best results on benchmark datasets for online continual learning including CIFAR-10, CIFAR-100 and CORE-50, while requiring much less memory than existing works.
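The abstract's core idea can be illustrated with a minimal sketch. The names, the use of per-class mean embeddings as exemplars, and the distance-based vote below are all illustrative assumptions, not the paper's actual algorithm: we keep one feature embedding per class (rather than raw images), pick a candidate class from each learned task, and let the candidates vote by confidence.

```python
import numpy as np

def class_means(features, labels):
    """Memory-efficient exemplars: one mean feature embedding per class,
    stored instead of the original data (as the abstract describes)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_with_voting(x, exemplars_per_task):
    """Simplified stand-in for candidates voting (hypothetical, not the
    paper's method): each learned task nominates its nearest-class-mean
    candidate, then the candidate with the smallest distance to x wins."""
    candidates = []
    for exemplars in exemplars_per_task:
        cls, dist = min(
            ((c, np.linalg.norm(x - m)) for c, m in exemplars.items()),
            key=lambda cd: cd[1],
        )
        candidates.append((cls, dist))
    return min(candidates, key=lambda cd: cd[1])[0]

# Toy 2-D embeddings: task 1 holds classes 0 and 1, task 2 holds 2 and 3.
t1_feats = np.array([[1.0, 0.0], [1.1, 0.0], [0.0, 1.0], [0.0, 1.2]])
t1_labels = np.array([0, 0, 1, 1])
t2_feats = np.array([[-1.0, 0.0], [0.0, -1.0]])
t2_labels = np.array([2, 3])

exemplars_per_task = [class_means(t1_feats, t1_labels),
                      class_means(t2_feats, t2_labels)]

pred = predict_with_voting(np.array([0.9, 0.1]), exemplars_per_task)
```

Storing only class-mean embeddings keeps the memory cost constant per class, which is why such exemplars are far cheaper than replaying original images.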

Archived Files and Locations

application/pdf, 19.1 MB (file_icumwokt7ndohhl5xi24tij6uu)
  arxiv.org (repository)
  web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2021-10-17
Version: v1
Language: en
arXiv: 2110.08855v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: e1a2dd86-87e5-4afb-a880-91a35b9bf2ba