ProSelfLC: Progressive Self Label Correction for Target Revising in Label Noise
release_6ncjg7rij5c6rl7gk6fcriqowq
by
Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson
2020
Abstract
In this work, we address robust deep learning under label noise
(semi-supervised learning) from the perspective of target revising. We make
three main contributions. First, we present a comprehensive mathematical study
on existing target modification techniques, including Pseudo-Label [1], label
smoothing [2], bootstrapping [3], knowledge distillation [4], confidence
penalty [5], and joint optimisation [6]. Consequently, we reveal their
relationships and drawbacks. Second, we propose ProSelfLC, a progressive and
adaptive self label correction method, endorsed by learning time and predictive
confidence. It addresses the disadvantages of existing algorithms and embraces
many practical merits: (1) it is end-to-end trainable; (2) given an example,
ProSelfLC can revise a one-hot target both by adding information about its
similarity structure and by correcting its semantic class; (3) no auxiliary
annotations or extra learners are required. Our proposal is designed according
to the well-known observation that deep neural networks learn simple,
meaningful patterns before fitting noisy ones [7-9], and to the entropy
regularisation principle [10, 11]. Third, label smoothing, confidence penalty
and naive label correction perform on par with the state of the art in our
implementation, which suggests they were not benchmarked properly in prior
work. Moreover, our ProSelfLC outperforms them significantly.
Archived Files and Locations
application/pdf, 541.7 kB (file_22qrkrelwnhr3l75pqabj74you)
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 2005.03788v1