From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables

by Krtin Kumar, Peyman Passban, Mehdi Rezagholizadeh, Yiu Sing Lau, Qun Liu

Released as an article.

2022  

Abstract

Embedding matrices are key components in neural natural language processing (NLP) models that are responsible for providing numerical representations of input tokens.[In this paper, words and subwords are referred to as tokens, and the term embedding refers only to embeddings of inputs.] In this paper, we analyze the impact and utility of such matrices in the context of neural machine translation (NMT). We show that stripping syntactic and semantic information from word embeddings and running NMT systems with random embeddings is not as damaging as it initially sounds. We also show how incorporating only a limited amount of task-specific knowledge from fully-trained embeddings can boost the performance of NMT systems. Our findings demonstrate that, in exchange for a negligible deterioration in performance, any NMT model can be run with partially random embeddings. Working with such structures means a minimal memory requirement, as there is no longer a need to store large embedding tables, which is a significant gain in industrial and on-device settings. We evaluated our embeddings in translating English into German and French and achieved a 5.3x compression rate. Despite having a considerably smaller architecture, our models in some cases are even able to outperform state-of-the-art baselines.
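The memory argument behind random embeddings can be illustrated with a small sketch. This is a hypothetical toy, not the paper's actual method: it assumes the full V x d embedding table is regenerated on the fly from a fixed seed, so only the seed and a small trained d x d projection (carrying the "limited task-specific knowledge") need to be stored.

```python
import numpy as np

def random_embedding(token_ids, vocab_size, dim, seed=0):
    """Look up rows of a deterministic random embedding table.

    The table is regenerated from a fixed seed rather than stored,
    so it contributes no persistent memory cost (toy illustration).
    """
    rng = np.random.default_rng(seed)
    table = rng.standard_normal((vocab_size, dim)).astype(np.float32)
    return table[token_ids]

vocab_size, dim = 32000, 512

# Small trained transform injecting task-specific knowledge
# (identity placeholder here; it would be learned in practice).
W = np.eye(dim, dtype=np.float32)

ids = np.array([1, 5, 42])
emb = random_embedding(ids, vocab_size, dim) @ W

# Stored parameters: d*d for W instead of V*d for a full table.
compression = vocab_size / dim  # 32000 / 512 = 62.5
```

The actual compression reported in the paper (5.3x) differs because its models keep more trained structure than this minimal sketch assumes.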

Archived Files and Locations

application/pdf  527.7 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2022-04-18
Version: v2
Language: en
arXiv: 2104.08677v2