On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration

by Kanil Patel, William Beluch, Dan Zhang, Michael Pfeiffer, Bin Yang

Released as an article.

2020  

Abstract

Uncertainty estimates help to identify ambiguous, novel, or anomalous inputs, but the reliable quantification of uncertainty has proven to be challenging for modern deep networks. In order to improve uncertainty estimation, we propose On-Manifold Adversarial Data Augmentation or OMADA, which specifically attempts to generate the most challenging examples by following an on-manifold adversarial attack path in the latent space of an autoencoder-based generative model that closely approximates decision boundaries between two or more classes. On a variety of datasets as well as on multiple diverse network architectures, OMADA consistently yields more accurate and better calibrated classifiers than baseline models, and outperforms competing approaches such as Mixup, as well as achieving similar performance to (at times better than) post-processing calibration methods such as temperature scaling. Variants of OMADA can employ different sampling schemes for ambiguous on-manifold examples based on the entropy of their estimated soft labels, which exhibit specific strengths for generalization, calibration of predicted uncertainty, or detection of out-of-distribution inputs.
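The core idea described above can be illustrated with a toy sketch: starting from a latent code, take gradient steps in the latent space of a generative model toward a different class, so that every decoded point stays on the learned data manifold, and use the classifier's output at the end of the path as a soft label. The linear "decoder" `D`, linear softmax classifier `W`, step size, and step count below are all hypothetical stand-ins for the paper's autoencoder-based generative model and trained network, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): a linear decoder mapping latent codes to
# data space, and a linear softmax classifier on decoded samples.
latent_dim, data_dim, n_classes = 4, 8, 3
D = rng.normal(size=(data_dim, latent_dim))   # decoder weights
W = rng.normal(size=(n_classes, data_dim))    # classifier weights

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def on_manifold_attack(z, target, steps=50, eta=0.05):
    """Walk in latent space toward `target`; every decoded point lies
    on the (toy) manifold spanned by the decoder columns."""
    onehot = np.eye(n_classes)[target]
    for _ in range(steps):
        x = D @ z                            # decode latent code
        p = softmax(W @ x)                   # classifier prediction
        # gradient of cross-entropy toward `target`, chained through
        # the linear decoder back to the latent code z
        grad_z = D.T @ (W.T @ (p - onehot))
        z = z - eta * grad_z                 # step toward target class
    x = D @ z
    return x, softmax(W @ x)                 # sample and its soft label

z0 = rng.normal(size=latent_dim)
p0 = softmax(W @ (D @ z0))
source = int(np.argmax(p0))
target = (source + 1) % n_classes            # attack toward another class
x_adv, soft_label = on_manifold_attack(z0.copy(), target)
```

Points along such a path pass near the decision boundary between the source and target classes; sampling them by the entropy of their soft labels gives the OMADA variants the abstract mentions.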

Archived Files and Locations

application/pdf  2.6 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-03-07
Version   v2
Language   en
arXiv  1912.07458v2