Towards Zero-Shot Learning for Automatic Phonemic Transcription
by
Xinjian Li, Siddharth Dalmia, David Mortensen, Juncheng Li, Alan Black, Florian Metze
2020, Proceedings of the AAAI Conference on Artificial Intelligence, Volume 34, Issue 05, pp. 8261-8268
Abstract
Automatic phonemic transcription tools are useful for low-resource language documentation. However, due to the lack of training sets, only a tiny fraction of languages have phonemic transcription tools. Fortunately, multilingual acoustic modeling provides a solution given limited audio training data. A more challenging problem is to build phonemic transcribers for languages with zero training data. The difficulty of this task is that phoneme inventories often differ between the training languages and the target language, making it infeasible to recognize unseen phonemes. In this work, we address this problem by adopting the idea of zero-shot learning. Our model is able to recognize unseen phonemes in the target language without any training data. In our model, we decompose phonemes into corresponding articulatory attributes such as vowel and consonant. Instead of predicting phonemes directly, we first predict distributions over articulatory attributes, and then compute phoneme distributions with a customized acoustic model. We evaluate our model by training it using 13 languages and testing it using 7 unseen languages. We find that it achieves 7.7% better phoneme error rate on average over a standard multilingual model.
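The abstract describes predicting distributions over articulatory attributes and then composing them into phoneme distributions. Below is a minimal sketch of one way such an attribute-to-phoneme mapping can work, assuming per-frame attribute probabilities and a binary attribute signature for each phoneme; the names here (ATTRIBUTES, phoneme_signatures, phoneme_distribution) are illustrative, not taken from the paper, and the paper's actual model may combine the distributions differently.

import numpy as np

# Hypothetical articulatory attributes the acoustic model scores per frame.
ATTRIBUTES = ["vowel", "consonant", "voiced", "nasal", "bilabial"]

# Each phoneme in the target language's inventory is described by a binary
# signature over those attributes (derivable from a phonological database),
# so an unseen phoneme needs no audio training data, only its signature.
phoneme_signatures = {
    "a": np.array([1, 0, 1, 0, 0]),  # voiced vowel
    "m": np.array([0, 1, 1, 1, 1]),  # voiced bilabial nasal
    "p": np.array([0, 1, 0, 0, 1]),  # voiceless bilabial stop
}

def phoneme_distribution(attribute_probs: np.ndarray) -> dict:
    """Map one frame's attribute probabilities to a phoneme distribution.

    Scores each phoneme by the likelihood of its signature under
    independent attribute predictions, then normalizes the scores.
    """
    scores = {}
    for phoneme, sig in phoneme_signatures.items():
        # Take p(attr) where the signature has the attribute, 1 - p(attr) otherwise.
        p = np.where(sig == 1, attribute_probs, 1.0 - attribute_probs)
        scores[phoneme] = float(np.prod(p))
    total = sum(scores.values())
    return {ph: s / total for ph, s in scores.items()}

# Example frame: the model is fairly confident it heard a voiced vowel,
# so "a" should dominate the resulting phoneme distribution.
frame = np.array([0.9, 0.1, 0.8, 0.05, 0.1])
print(phoneme_distribution(frame))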
Archived Files and Locations
application/pdf 1.3 MB
aaai.org (web), web.archive.org (webarchive)
Type article-journal
Stage published
Date 2020-04-03
Access all versions, variants, and formats of this work (e.g., pre-prints):
Crossref Metadata (via API)
Worldcat
SHERPA/RoMEO (journal policies)
wikidata.org
CORE.ac.uk
Semantic Scholar
Google Scholar