TTS-by-TTS: TTS-driven Data Augmentation for Fast and High-Quality Speech Synthesis

by Min-Jae Hwang, Ryuichi Yamamoto, Eunwoo Song, Jae-Min Kim

Released as an article.

2020  

Abstract

In this paper, we propose a text-to-speech (TTS)-driven data augmentation method for improving the quality of a non-autoregressive (non-AR) TTS system. Recently proposed non-AR models, such as FastSpeech 2, have successfully achieved fast speech synthesis. However, their quality is not satisfactory, especially when the amount of training data is insufficient. To address this problem, we propose an effective data augmentation method using a well-designed autoregressive (AR) TTS system. In this method, large-scale synthetic corpora comprising text-waveform pairs with phoneme durations are generated by the AR TTS system and then used to train the target non-AR model. Perceptual listening test results showed that the proposed method significantly improved the quality of the non-AR TTS system. In particular, we augmented five hours of a training database to 179 hours of a synthetic one. Using these databases, our TTS system, consisting of a FastSpeech 2 acoustic model with a Parallel WaveGAN vocoder, achieved a mean opinion score of 3.74, which is 40% higher than that achieved by the conventional method.
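As a rough illustration of the pipeline the abstract describes (not the authors' code), the augmentation step could be sketched in Python as follows. The Utterance, ARTeacher, and build_augmented_corpus names are assumptions introduced here for illustration: an AR teacher synthesizes large amounts of speech with per-phoneme durations, and the result is pooled with the real recordings to train the non-AR student.

# Hypothetical sketch of the TTS-driven data augmentation loop: an
# autoregressive (AR) teacher TTS synthesizes (text, waveform, duration)
# triples from unlabeled texts, and the synthetic corpus is mixed with the
# real recordings to train a non-AR student such as FastSpeech 2.
from dataclasses import dataclass
from typing import List, Protocol, Sequence

@dataclass
class Utterance:
    phonemes: List[str]    # phoneme sequence of the input text
    waveform: List[float]  # recorded or synthesized audio samples
    durations: List[int]   # per-phoneme durations in acoustic frames

class ARTeacher(Protocol):
    """Well-trained AR TTS system used as the teacher (assumed interface)."""
    def synthesize(self, phonemes: Sequence[str]) -> Utterance:
        """Return audio plus per-phoneme durations (e.g., from alignments)."""
        ...

def build_augmented_corpus(
    teacher: ARTeacher,
    real_corpus: List[Utterance],
    unlabeled_texts: List[List[str]],
) -> List[Utterance]:
    """Grow a small recorded corpus with large-scale synthetic speech."""
    synthetic = [teacher.synthesize(phonemes) for phonemes in unlabeled_texts]
    # The paper reports growing 5 hours of recordings to 179 hours of
    # synthetic data; the student (FastSpeech 2 acoustic model plus a
    # Parallel WaveGAN vocoder) is then trained on the combined set.
    return real_corpus + synthetic

Because the teacher also emits phoneme durations, the student's duration predictor can be trained directly on the synthetic corpus without a separate forced aligner; this is the property that makes an AR teacher convenient for non-AR targets like FastSpeech 2.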

Archived Files and Locations

application/pdf  339.7 kB
file_db2gd3c7rjbuxghq6cvlj55sre
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-10-26
Version   v1
Language   en
arXiv  2010.13421v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 3419a2b9-ff6a-44b5-83fe-9ef26630f0d2
API URL: JSON