Chunk-based Nearest Neighbor Machine Translation
by
Pedro Henrique Martins, Zita Marinho, and André F. T. Martins
2022
Abstract
Semi-parametric models, which augment generation with retrieval, have led to
impressive results in language modeling and machine translation, due to their
ability to leverage information retrieved from a datastore of examples. One of
the most prominent approaches, kNN-MT, achieves outstanding performance on
domain adaptation by retrieving tokens from a domain-specific datastore
<cit.>. However, kNN-MT requires retrieval for every
single generated token, leading to a very low decoding speed (around 8 times
slower than a parametric model). In this paper, we introduce a
chunk-based kNN-MT model which retrieves chunks of tokens from the
datastore, instead of individual tokens. We propose several strategies for
incorporating the retrieved chunks into the generation process, and for
selecting the steps at which the model needs to search for neighbors in the
datastore. Experiments on machine translation in two settings, static domain
adaptation and “on-the-fly” adaptation, show that the chunk-based kNN-MT
model leads to a significant speed-up (up to 4 times) with only a small drop in
translation quality.
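The core idea of retrieving chunks rather than single tokens can be illustrated with a toy sketch. Everything below (the `keys`/`chunks` datastore, `retrieve_chunks`, the vector sizes) is an invented, minimal stand-in and not the paper's implementation; real kNN-MT uses high-dimensional decoder states and an approximate-search index such as FAISS.

```python
import numpy as np

# Toy datastore: each decoder-state "key" maps to a *chunk* of target
# tokens, instead of a single token as in vanilla kNN-MT.
# (Shapes and values are illustrative only.)
keys = np.array([[0.0, 1.0],
                 [1.0, 0.0],
                 [0.9, 0.2]])
chunks = [["the", "cat"], ["sat", "on"], ["sat", "down"]]

def retrieve_chunks(query, k=2):
    """Brute-force k-nearest-neighbor search over the datastore keys."""
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return [chunks[i] for i in nearest]

# A single retrieval now yields several future tokens, so the decoder
# can skip the datastore search for the next len(chunk) - 1 steps,
# which is the source of the reported decoding speed-up.
query = np.array([0.95, 0.05])  # pretend decoder state at this step
retrieved = retrieve_chunks(query)
```

In the full model, the retrieved chunks are merged with the parametric model's distribution at each step, and a separate policy decides at which steps a fresh datastore search is actually needed.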
Archived Files and Locations
application/pdf, 596.1 kB — arxiv.org (repository), web.archive.org (webarchive)
arXiv: 2205.12230v1