Explainable Deep Learning Methods in Medical Imaging Diagnosis: A Survey

by Cristiano Patrício, João C. Neves, Luís F. Teixeira

Released as an article.

2022  

Abstract

The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based, and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report-generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are also discussed.

Archived Files and Locations

application/pdf (4.4 MB)
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2022-06-13
Version: v2
Language: en
arXiv: 2205.04766v2