Distilling Knowledge from Deep Networks with Applications to Healthcare Domain

by Zhengping Che, Sanjay Purushotham, Robinder Khemani, Yan Liu

Released as an article.

2015  

Abstract

Exponential growth in Electronic Healthcare Records (EHR) has resulted in new opportunities and urgent needs for discovery of meaningful data-driven representations and patterns of diseases in Computational Phenotyping research. Deep Learning models have shown superior performance for robust prediction in computational phenotyping tasks, but suffer from the issue of model interpretability which is crucial for clinicians involved in decision-making. In this paper, we introduce a novel knowledge-distillation approach called Interpretable Mimic Learning, to learn interpretable phenotype features for making robust prediction while mimicking the performance of deep learning models. Our framework uses Gradient Boosting Trees to learn interpretable features from deep learning models such as Stacked Denoising Autoencoder and Long Short-Term Memory. Exhaustive experiments on a real-world clinical time-series dataset show that our method obtains similar or better performance than the deep learning models, and it provides interpretable phenotypes for clinical decision making.
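The pipeline the abstract describes (a deep "teacher" model produces soft predictions, and an interpretable tree-based "student" is trained to mimic them) can be sketched minimally. This is an illustrative stand-in only, not the paper's implementation: the teacher here is a fixed logistic scorer in place of a deep network, and the student is a single regression stump in place of Gradient Boosting Trees.

```python
import math

def teacher_predict(x):
    # Stand-in for a deep model (e.g. SDA or LSTM): returns a soft
    # probability rather than a hard 0/1 label.
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

def fit_stump(xs, targets):
    # Fit a one-split regression stump to the teacher's soft labels by
    # minimizing squared error; a stand-in for a boosted tree ensemble.
    best = None
    for split in xs:
        left = [t for x, t in zip(xs, targets) if x <= split]
        right = [t for x, t in zip(xs, targets) if x > split]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((t - lmean) ** 2 for t in left)
               + sum((t - rmean) ** 2 for t in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

# Step 1: the teacher scores the training inputs with soft labels.
xs = [i / 10.0 for i in range(11)]
soft_labels = [teacher_predict(x) for x in xs]

# Step 2: the interpretable student is fit to the soft labels,
# mimicking the teacher's decision function.
student = fit_stump(xs, soft_labels)
```

The key point of mimic learning is in step 2: the student regresses on the teacher's soft outputs rather than the original hard labels, so it inherits the teacher's learned decision boundary while remaining inspectable (here, a single threshold; in the paper, tree structures and feature importances).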

Archived Files and Locations

application/pdf  742.1 kB
file_kxrmx4pydjc35gl6uqpucoweb4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2015-12-11
Version   v1
Language   en
arXiv  1512.03542v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 2140b6c5-19cf-43ba-89ee-95991b1a264b