Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks
by
Ismail Alarab, Simant Prakoonwit
2021
Abstract
We propose a novel method to capture data points near the decision boundary of a
neural network, points that are often associated with a specific type of
uncertainty. Our approach performs uncertainty estimation based on the idea of
adversarial attacks. In this paper, uncertainty estimates are derived from
perturbations of the inputs, unlike previous studies that perturb the model's
parameters as in the Bayesian approach. We are able to produce uncertainty
estimates with only a couple of perturbations on the inputs. We apply the
proposed method to datasets derived from the blockchain and compare its
performance with the most recent uncertainty estimation methods. We show that
the proposed method significantly outperforms the other methods and carries
less risk in capturing model uncertainty in machine learning.
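The abstract's core idea, estimating uncertainty from a few adversarial-style input perturbations rather than from parameter perturbations, can be sketched as follows. This is a minimal illustration only, not the paper's actual procedure: the logistic model, weights, epsilon values, and the variance-based score are all assumptions chosen to show how points near the decision boundary react more strongly to small input perturbations.

```python
import numpy as np

# Hypothetical linear classifier; the weights are illustrative only.
w = np.array([2.0, -1.0])
b = 0.0

def predict_proba(x):
    """Sigmoid output of a simple logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def input_perturbation_uncertainty(x, epsilons=(0.05, 0.1)):
    """Score uncertainty as the variance of predictions under a couple of
    FGSM-style (gradient-sign) input perturbations."""
    p = predict_proba(x)
    # Gradient of the sigmoid output w.r.t. the input (closed form here).
    grad = p * (1.0 - p) * w
    preds = [p]
    for eps in epsilons:
        preds.append(predict_proba(x + eps * np.sign(grad)))
        preds.append(predict_proba(x - eps * np.sign(grad)))
    return float(np.var(preds))

# A point near the decision boundary (w @ x ~ 0) should score higher
# uncertainty than a point far from it, since small perturbations can
# push it across the boundary.
near = np.array([0.01, 0.0])
far = np.array([5.0, 0.0])
u_near = input_perturbation_uncertainty(near)
u_far = input_perturbation_uncertainty(far)
```

In this sketch the uncertainty score is large only where the prediction is sensitive to small input shifts, which is exactly the near-boundary region the paper targets.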
Archived Files and Locations
application/pdf (517.4 kB): arxiv.org (repository); web.archive.org (webarchive)
arXiv: 2107.07618v1