Interpretation of Deep Temporal Representations by Selective Visualization of Internally Activated Units

by Sohee Cho, Ginkyeng Lee, Jaesik Choi

Released as an article.

2020  

Abstract

Recently, deep neural networks have demonstrated competitive performance in classification and regression tasks on many kinds of temporal or sequential data. However, it is still hard to understand the classification mechanisms of temporal deep neural networks. In this paper, we propose two new frameworks to visualize temporal representations learned by deep neural networks. Given input data and output, our algorithm interprets the decision of a temporal neural network by extracting highly activated periods and visualizes the sub-sequences of the input that contribute to activating the units. Furthermore, we characterize such sub-sequences by clustering and calculate the uncertainty between the suggested type and the actual data. We also suggest Layer-wise Relevance Propagation from the output of a unit, rather than from the final output, combined with backward Monte-Carlo dropout, to show the relevance score of each input point for activating units while providing a visual representation of the uncertainty of this impact.
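The abstract's two ideas, extracting highly activated periods of a unit and estimating uncertainty with Monte-Carlo dropout, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the quantile threshold, minimum period length, and the `stochastic_forward` callable (a forward pass with dropout left active) are all illustrative assumptions.

```python
import numpy as np

def highly_activated_periods(activations, threshold_quantile=0.9, min_length=3):
    """Return (start, end) index pairs where a unit's activation trace
    stays at or above a quantile threshold for at least min_length steps."""
    thr = np.quantile(activations, threshold_quantile)
    above = activations >= thr
    periods, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                      # period begins
        elif not flag and start is not None:
            if i - start >= min_length:
                periods.append((start, i)) # period ends; keep if long enough
            start = None
    if start is not None and len(above) - start >= min_length:
        periods.append((start, len(above)))
    return periods

def mc_dropout_uncertainty(stochastic_forward, x, n_samples=30):
    """Mean and standard deviation of a unit's response over repeated
    stochastic forward passes (dropout kept active at test time)."""
    samples = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

For example, a trace that is flat except for a burst of high values yields a single period covering that burst, and a deterministic `stochastic_forward` yields zero standard deviation, so any spread reported comes entirely from the dropout noise.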

Archived Files and Locations

application/pdf  2.3 MB
file_a45yecsohrdk3nc3ejy6xzhi2y
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-04-27
Version   v1
Language   en
arXiv  2004.12538v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: addb5bfc-718d-4493-a4a5-e2c1eea69d44