Learning from Past Mistakes: Improving Automatic Speech Recognition
Output via Noisy-Clean Phrase Context Modeling
by
Prashanth Gurunath Shivakumar, Haoqi Li, Kevin Knight, Panayiotis
Georgiou
2019
Abstract
Automatic speech recognition (ASR) systems often make unrecoverable errors
due to subsystem pruning (acoustic, language, and pronunciation models); for
example, words may be pruned on acoustic evidence using short-term context,
prior to rescoring with long-term linguistic context. In this work we model
ASR as a phrase-based noisy transformation channel and propose an error
correction system that learns from the aggregate errors of all the
independent modules constituting the ASR and attempts to invert them. The
proposed system can exploit long-term context using a neural network language
model, and can both choose better among existing ASR output possibilities and
re-introduce previously pruned or unseen (out-of-vocabulary) phrases. It
provides corrections under poorly performing ASR conditions without degrading
accurate transcriptions, and the gains are larger for out-of-domain and
mismatched-data ASR. Our system consistently improves over the baseline ASR,
even when the baseline is further optimized through recurrent neural network
language model rescoring. This demonstrates that any ASR improvements can be
exploited independently and that our proposed system can still provide
benefits on highly optimized ASR. Finally, we present an extensive analysis
of the types of errors corrected by our system.
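To make the noisy-channel framing concrete, here is a minimal toy sketch (not the paper's implementation): a phrase table learned from aligned (ASR hypothesis, reference) pairs supplies channel scores, a language model supplies fluency scores, and the correction is the candidate clean phrase maximizing their combined log-probability. The phrase table, LM probabilities, and `lm_weight` parameter below are all illustrative assumptions.

```python
import math

# Hypothetical phrase table: maps a noisy ASR phrase to candidate clean
# phrases with channel log-probabilities log P(noisy | clean).
PHRASE_TABLE = {
    "wreck a nice beach": {"recognize speech": math.log(0.6),
                           "wreck a nice beach": math.log(0.4)},
}

# Toy language model giving log P(clean); a stand-in for the neural LM
# that would supply long-term context in the real system.
LM = {"recognize speech": math.log(0.05),
      "wreck a nice beach": math.log(0.001)}

def correct(asr_phrase, lm_weight=1.0):
    """Pick the clean phrase maximizing channel score + weighted LM score.

    Unknown phrases fall back to the identity correction, so accurate
    transcriptions are left untouched.
    """
    candidates = PHRASE_TABLE.get(asr_phrase, {asr_phrase: 0.0})
    return max(candidates,
               key=lambda c: candidates[c] + lm_weight * LM.get(c, math.log(1e-6)))

print(correct("wreck a nice beach"))  # -> recognize speech
print(correct("hello world"))         # -> hello world (unseen: left as-is)
```

Note that because unseen phrases map to themselves with channel log-probability 0, the sketch mirrors the paper's claim of correcting errors without degrading already-accurate output.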
Archived Files and Locations
application/pdf, 424.3 kB — arxiv.org (repository), web.archive.org (webarchive)
arXiv:1802.02607v2