DeepTag: inferring all-cause diagnoses from clinical notes in
under-resourced medical domain
by
Allen Nie, Ashley Zehnder, Rodney L. Page, Arturo L. Pineda, Manuel A.
Rivas, Carlos D. Bustamante, James Zou
2018
Abstract
Large-scale veterinary clinical records can become a powerful resource for
patient care and research. However, clinicians lack the time and resources to
annotate patient records with standard medical diagnostic codes, and most
veterinary visits are captured in free-text notes. The lack of standard coding
makes it challenging to use the clinical data to improve patient care. It is
also a major impediment to cross-species translational research, which relies
on the ability to accurately identify patient cohorts with specific diagnostic
criteria in humans and animals. In order to reduce the coding burden for
veterinary clinical practice and aid translational research, we have developed
a deep learning algorithm, DeepTag, which automatically infers diagnostic codes
from veterinary free text notes. DeepTag is trained on a newly curated dataset
of 112,558 veterinary notes manually annotated by experts. DeepTag extends a
multi-task LSTM with an improved hierarchical objective that captures the
semantic structure among diseases. To foster human-machine collaboration,
DeepTag also learns to abstain on examples where it is uncertain and defer them
to human experts, resulting in improved performance. DeepTag accurately infers
disease codes from free text even in the challenging cross-hospital setting,
where the notes come from clinical environments different from those used for
training. It enables automated disease annotation across a broad range of
clinical diagnoses with minimal pre-processing. The technical framework in this
work can be applied in other medical domains that currently lack medical coding
resources.
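
As a rough illustration of the kind of model the abstract describes, the
following sketch shows a multi-task LSTM tagger whose loss combines a
per-code binary objective with a hierarchical penalty tying each disease
code to its parent category. All names, dimensions, and the specific
penalty term are assumptions made for illustration, not the authors'
implementation (see arXiv:1806.10722 for the actual objective).

import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared BiLSTM encoder with one binary logit per diagnostic code.

    Hypothetical sketch; the real DeepTag architecture is described in
    the paper, not reproduced here.
    """

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_codes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # One logit per diagnostic code (multi-label prediction).
        self.heads = nn.Linear(2 * hidden_dim, num_codes)

    def forward(self, tokens):
        emb = self.embed(tokens)           # (batch, seq, embed_dim)
        states, _ = self.encoder(emb)      # (batch, seq, 2*hidden_dim)
        doc = states.mean(dim=1)           # mean-pool over tokens
        return self.heads(doc)             # (batch, num_codes)

def hierarchical_loss(logits, labels, parent_of, alpha=0.5):
    """Binary cross-entropy per code (labels are a float multi-hot
    matrix), plus a term discouraging any code from being predicted
    more strongly than its parent category. `parent_of` maps a code
    index to its parent-category index; this margin-style consistency
    penalty is one plausible way to encode a disease hierarchy,
    assumed here for illustration.
    """
    bce = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    probs = torch.sigmoid(logits)
    child_idx = torch.tensor(list(parent_of.keys()))
    parent_idx = torch.tensor(list(parent_of.values()))
    # Penalize children whose probability exceeds their parent's.
    violation = (probs[:, child_idx] - probs[:, parent_idx]).clamp(min=0)
    return bce + alpha * violation.mean()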
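
The abstention behavior described in the abstract could, under simple
assumptions, be approximated by thresholding predicted probabilities:
notes whose least confident code sits near the 0.5 decision boundary are
deferred to a human expert. The fixed margin below is a hypothetical
stand-in for DeepTag's learned abstention mechanism.

import torch

def predict_or_defer(model, tokens, margin=0.15):
    """Return per-code predictions plus a per-note deferral flag.

    A note is deferred when any code's probability falls within
    `margin` of the 0.5 decision boundary; deferred notes would be
    routed to a human expert queue by the caller.
    """
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(tokens))   # (batch, num_codes)
    defer = (probs - 0.5).abs().min(dim=1).values < margin
    predictions = (probs >= 0.5).long()
    return predictions, defer   # defer[i] == True -> send note i to a human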
Archived Files and Locations
application/pdf, 388.8 kB (file_kqarmwrj55aqxhfyfuvcuzx764)
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1806.10722v1