Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks

by Ismail Alarab, Simant Prakoonwit

Released as an article.

2021  

Abstract

We propose a novel method to capture data points near the decision boundary of a neural network, points that are often associated with a specific type of uncertainty. In our approach, we perform uncertainty estimation based on the idea of adversarial attacks. In this paper, uncertainty estimates are derived from perturbations of the inputs, unlike previous studies that perturb the model's parameters as in Bayesian approaches. We are able to produce uncertainty estimates with only a couple of perturbations of the inputs. We apply the proposed method to datasets derived from the blockchain and compare its performance with the most recent uncertainty estimation methods. We show that the proposed method significantly outperforms the other methods and carries less risk when capturing model uncertainty in machine learning.
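The core idea of the abstract, deriving uncertainty from a couple of input perturbations rather than from perturbed model parameters, can be illustrated with a short sketch. The following minimal PyTorch example is an assumption-laden illustration, not the paper's exact algorithm: the names model, epsilon, and n_steps are hypothetical. It perturbs the input along the FGSM gradient direction and scores uncertainty by the variance of the resulting predictions, which tends to be large near the decision boundary.

    # Illustrative sketch only (not the authors' exact method): score
    # uncertainty by how much predictions change under a couple of
    # FGSM-style perturbations of the input.
    import torch
    import torch.nn.functional as F

    def input_perturbation_uncertainty(model, x, epsilon=0.01, n_steps=2):
        """Return a per-example uncertainty score for inputs x."""
        model.eval()
        x = x.clone().detach().requires_grad_(True)
        logits = model(x)
        pred = logits.argmax(dim=1)
        # Gradient of the loss w.r.t. the input gives the attack direction.
        loss = F.cross_entropy(logits, pred)
        loss.backward()
        direction = x.grad.sign()

        probs = [F.softmax(logits, dim=1).detach()]
        with torch.no_grad():
            for k in range(1, n_steps + 1):
                x_adv = x + k * epsilon * direction
                probs.append(F.softmax(model(x_adv), dim=1))
        # High variance across perturbed predictions suggests the input
        # lies near the decision boundary.
        return torch.stack(probs).var(dim=0).sum(dim=1)

Unlike Bayesian methods such as MC dropout, which require many stochastic forward passes over perturbed parameters, this style of estimate needs only a few forward passes over perturbed inputs, matching the "couple of perturbations" framing in the abstract.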

Archived Files and Locations

application/pdf  517.4 kB
file_osfdhfz2xfchvahkrnfgms25du
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-07-15
Version   v1
Language   en
arXiv  2107.07618v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: fe9b7ff6-66a5-4e84-b2f0-27b176a36265