Self-Attentive Residual Decoder for Neural Machine Translation
by
Lesly Miculicich Werlen, Nikolaos Pappas, Dhananjay Ram, Andrei
Popescu-Belis
2018
Abstract
Neural sequence-to-sequence networks with attention have achieved remarkable
performance for machine translation. One of the reasons for their effectiveness
is their ability to capture relevant source-side contextual information at each
time-step prediction through an attention mechanism. However, the target-side
context is based solely on the sequence model, which in practice is prone to a
recency bias and lacks the ability to effectively capture non-sequential
dependencies among words. To address this limitation, we propose a
target-side-attentive residual recurrent network for decoding, where attention
over previous words contributes directly to the prediction of the next word.
The residual learning facilitates the flow of information from the distant past
and can emphasize any of the previously translated words, thus gaining
access to a wider context. The proposed model outperforms a neural MT baseline
as well as a memory and self-attention network on three language pairs. The
analysis of the attention learned by the decoder confirms that it emphasizes a
wider context, and that it captures syntactic-like structures.
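To make the idea concrete, below is a minimal sketch of a single decoding step with target-side self-attention and a residual connection, in the spirit of the abstract. It is an illustration only: it omits the source-side attention, and all module names, sizes, and the GRU-based cell are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentiveResidualDecoderStep(nn.Module):
    """Illustrative decoder step: attend over previously generated
    target-side states and add the summary as a residual term before
    predicting the next word (names and sizes are hypothetical)."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        # Assumes the input embedding has the same size as the hidden state.
        self.rnn_cell = nn.GRUCell(hidden_size, hidden_size)
        self.attn_score = nn.Linear(hidden_size, hidden_size, bias=False)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, prev_embedding, hidden, past_states):
        # prev_embedding: (batch, hidden) embedding of the last output word
        # hidden:         (batch, hidden) current recurrent state
        # past_states:    (t, batch, hidden) states of previously translated words
        hidden = self.rnn_cell(prev_embedding, hidden)
        if past_states is not None and past_states.size(0) > 0:
            # Target-side attention over all previous decoder states.
            scores = torch.einsum('bh,tbh->tb', self.attn_score(hidden), past_states)
            weights = F.softmax(scores, dim=0)                      # (t, batch)
            context = (weights.unsqueeze(-1) * past_states).sum(dim=0)
            # Residual connection: the attention summary is added to the
            # current state, so any distant word can influence the prediction.
            hidden_for_output = hidden + context
        else:
            hidden_for_output = hidden
        logits = self.out(hidden_for_output)
        return logits, hidden
```

In this sketch the softmax weights let the decoder re-emphasize any previously translated word, while the additive (residual) combination keeps the usual recurrent path intact, which is the intuition behind the wider target-side context described above.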
Archived Files and Locations
application/pdf 796.0 kB
1709.04849v4: arxiv.org (repository), web.archive.org (webarchive)