Stacked Acoustic-and-Textual Encoding: Integrating the Pre-trained Models into Speech Translation Encoders
by
Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, Jingbo Zhu
2021
Abstract
Encoder pre-training is promising for end-to-end Speech Translation (ST),
given that speech-to-translation data is scarce. But ST encoders are not
simple instances of Automatic Speech Recognition (ASR) or Machine
Translation (MT) encoders. For example, we find that ASR encoders lack a global
context representation, which is necessary for translation, whereas MT encoders
are not designed to handle long but locally attentive acoustic sequences. In
this work, we propose a Stacked Acoustic-and-Textual Encoding (SATE) method for
speech translation. Our encoder begins by processing the acoustic sequence as
usual, but later behaves more like an MT encoder, building a global representation of
usual, but later behaves more like an MT encoder for a global representation of
the input sequence. In this way, it is straightforward to incorporate the
pre-trained models into the system. Also, we develop an adaptor module to
alleviate the representation inconsistency between the pre-trained ASR encoder
and MT encoder, and a multi-teacher knowledge distillation method to preserve
the pre-training knowledge. Experimental results on the LibriSpeech En-Fr and
MuST-C En-De tasks show that our method achieves state-of-the-art performance of
18.3 and 25.2 BLEU points, respectively. To our knowledge, we are the first to develop an
end-to-end ST system that achieves comparable or even better BLEU performance
than the cascaded ST counterpart when large-scale ASR and MT data is available.
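
To make the stacked design concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the class names SATEEncoder and Adaptor, the layer-norm-plus-projection choice inside the adaptor, and the encoder arguments are assumptions for illustration.

import torch
import torch.nn as nn

class Adaptor(nn.Module):
    # Hypothetical adaptor: maps acoustic-encoder states into the
    # representation space the MT encoder expects. Layer norm plus a
    # learned projection is one plausible choice, not necessarily the
    # paper's exact design.
    def __init__(self, acoustic_dim: int, text_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(acoustic_dim)
        self.proj = nn.Linear(acoustic_dim, text_dim)

    def forward(self, h_acoustic: torch.Tensor) -> torch.Tensor:
        return self.proj(self.norm(h_acoustic))

class SATEEncoder(nn.Module):
    # Stacked Acoustic-and-Textual Encoding: a pre-trained ASR encoder
    # runs first over the speech features, the adaptor bridges the two
    # representation spaces, and a pre-trained MT encoder then builds
    # the global context representation needed for translation.
    def __init__(self, acoustic_encoder: nn.Module, adaptor: Adaptor,
                 textual_encoder: nn.Module):
        super().__init__()
        self.acoustic_encoder = acoustic_encoder  # initialized from a pre-trained ASR encoder
        self.adaptor = adaptor
        self.textual_encoder = textual_encoder    # initialized from a pre-trained MT encoder

    def forward(self, speech_features: torch.Tensor) -> torch.Tensor:
        h_acoustic = self.acoustic_encoder(speech_features)  # local, acoustic view
        h_bridged = self.adaptor(h_acoustic)                 # reconcile representation spaces
        return self.textual_encoder(h_bridged)               # global, textual view

Because each stage keeps the role it was pre-trained for, the ASR and MT checkpoints can be dropped into the two sub-encoders directly, with only the adaptor trained from scratch.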
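The multi-teacher knowledge distillation can be sketched in the same spirit. This is an assumption-laden illustration rather than the paper's exact objective: the student's auxiliary transcription output is pulled toward a frozen ASR teacher and its translation output toward a frozen MT teacher, mixed with the usual translation cross-entropy; the weights alpha and beta and all function names are hypothetical.

import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
            temperature: float = 1.0) -> torch.Tensor:
    # KL divergence between teacher and student output distributions,
    # the standard knowledge-distillation term.
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

def multi_teacher_loss(st_translation_logits, st_transcription_logits,
                       mt_teacher_logits, asr_teacher_logits,
                       gold_translation, alpha=0.5, beta=0.5):
    # Hypothetical combination: translation cross-entropy plus two
    # distillation terms that preserve the pre-training knowledge of
    # the ASR and MT teachers. The exact targets and weighting in the
    # paper may differ.
    vocab = st_translation_logits.size(-1)
    ce = F.cross_entropy(st_translation_logits.reshape(-1, vocab),
                         gold_translation.reshape(-1))
    kd_asr = kd_loss(st_transcription_logits, asr_teacher_logits)
    kd_mt = kd_loss(st_translation_logits, mt_teacher_logits)
    return ce + alpha * kd_asr + beta * kd_mt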
arXiv: 2105.05752v1