Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering
by
Mohammed Oualid Attaoui, Hazem Fahmy, Fabrizio Pastore, Lionel Briand
2022
Abstract
Deep neural networks (DNNs) have demonstrated superior performance over
classical machine learning to support many features in safety-critical systems.
Although DNNs are now widely used in such systems (e.g., self-driving cars),
there is limited progress regarding automated support for functional safety
analysis in DNN-based systems. For example, the identification of root causes
of errors, to enable both risk analysis and DNN retraining, remains an open
problem. In this paper, we propose SAFE, a black-box approach to automatically
characterize the root causes of DNN errors. SAFE relies on a transfer learning
model pre-trained on ImageNet to extract features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrarily
shaped clusters of images, each modeling a plausible cause of error. Last,
clusters are used to effectively retrain and improve the DNN. The black-box
nature of SAFE is motivated by our objective not to require changes or even
access to the DNN internals, thus facilitating adoption. Experimental results
on case studies in the automotive domain show SAFE's superior ability to
identify distinct root causes of DNN errors. SAFE also yields significant
improvements in DNN accuracy after retraining, while requiring significantly
less execution time and memory than alternatives.
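The pipeline described above (extract features from error-inducing images, then group them with density-based clustering) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses scikit-learn's DBSCAN on synthetic placeholder feature vectors, which stand in for the features an ImageNet-pretrained model would produce.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder "features": synthetic stand-ins for vectors that an
# ImageNet-pretrained extractor would produce from error-inducing images.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=0.0, scale=0.05, size=(30, 8))  # one plausible root cause
blob_b = rng.normal(loc=1.0, scale=0.05, size=(30, 8))  # a second root cause
noise = rng.uniform(-3.0, 4.0, size=(5, 8))             # sparse outliers
features = np.vstack([blob_a, blob_b, noise])

# Density-based clustering detects arbitrarily shaped, dense groups
# and marks sparse points as noise (label -1), so each remaining
# cluster can be inspected as a candidate root cause of DNN errors.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)
```

In a real setting, the feature vectors would come from the penultimate layer of a pre-trained network, and each resulting cluster would be reviewed to characterize its common error cause before being used to select retraining data.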
Archived Files and Locations
application/pdf, 4.5 MB (arXiv:2201.05077v3), available via arxiv.org (repository) and web.archive.org (webarchive)