Memformer: A Memory-Augmented Transformer for Sequence Modeling
by Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou Yu
2022
Abstract
Transformers have achieved remarkable success in sequence modeling. However,
these models have efficiency issues, as they need to store all historical
token-level representations as memory. We present Memformer, an efficient
neural network for sequence modeling that utilizes an external dynamic memory
to encode and retrieve past information. Our model achieves linear time
complexity and constant memory space complexity when processing long sequences.
We also propose a new optimization scheme, memory replay back-propagation
(MRBP), which promotes long-range back-propagation through time with a
significantly reduced memory requirement. Experimental results show that
Memformer achieves performance comparable to the baselines while using 8.1x
less memory space and running 3.2x faster at inference. Analysis of the
attention patterns shows that our external memory slots can encode and retain
important information across timesteps.
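
To make the mechanism concrete, here is a minimal sketch of the idea the
abstract describes: a transformer layer that reads from and writes to a fixed
set of external memory slots, so a long sequence can be processed segment by
segment in linear time while the carried state stays constant in size. The
class name MemformerSketch, the slot count, and the use of standard PyTorch
cross-attention layers are illustrative assumptions, not the paper's actual
implementation.

    import torch
    import torch.nn as nn

    class MemformerSketch(nn.Module):
        """Illustrative sketch (not the authors' code): a fixed number of
        external memory slots is read via cross-attention before encoding a
        segment, then updated by attending over the segment's outputs."""

        def __init__(self, d_model=256, n_heads=4, n_slots=8):
            super().__init__()
            self.n_slots = n_slots
            self.d_model = d_model
            # Read: segment tokens attend over memory slots to retrieve past info.
            self.read_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            # Write: memory slots attend over the encoded segment to store new info.
            self.write_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

        def init_memory(self, batch_size):
            # Constant space: n_slots vectors, regardless of sequence length.
            return torch.zeros(batch_size, self.n_slots, self.d_model)

        def forward(self, segment, memory):
            # segment: (batch, seg_len, d_model); memory: (batch, n_slots, d_model)
            read, _ = self.read_attn(segment, memory, memory)
            hidden = self.encoder(segment + read)
            new_memory, _ = self.write_attn(memory, hidden, hidden)
            return hidden, new_memory

Processing T segments of length L then costs attention within each segment
plus O(L * n_slots) memory access per segment, which is linear in total
sequence length, while the state passed between segments is only n_slots
vectors:

    model = MemformerSketch()
    memory = model.init_memory(batch_size=2)
    for segment in torch.randn(10, 2, 16, 256):   # 10 segments of 16 tokens each
        hidden, memory = model(segment, memory)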
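The abstract's description of MRBP suggests something akin to gradient
checkpointing along the time dimension: cache only the small memory states
during a graph-free forward pass, then replay each timestep in reverse to
rebuild its local graph and pass gradients through the memory. The sketch
below is a guess at such a scheme under those assumptions (mrbp_sketch,
loss_fn, and the replay order are hypothetical, not the paper's exact
algorithm), reusing MemformerSketch from above.

    def mrbp_sketch(model, segments, memory, loss_fn):
        # Forward pass without autograd graphs: cache each step's *input*
        # memory, which is cheap because memory is a constant-size tensor.
        cached_inputs = []
        with torch.no_grad():
            for seg in segments:          # segments: list of (B, L, d_model)
                cached_inputs.append(memory)
                _, memory = model(seg, memory)

        # Backward replay: rebuild one local graph at a time, so peak graph
        # memory is constant in the number of timesteps while gradients still
        # flow across steps through the memory tensors.
        grad_mem = torch.zeros_like(memory)
        total_loss = 0.0
        for seg, mem_in in zip(reversed(segments), reversed(cached_inputs)):
            mem_in = mem_in.detach().requires_grad_(True)
            hidden, mem_out = model(seg, mem_in)
            loss = loss_fn(hidden)
            total_loss += loss.item()
            # Backpropagate the task loss together with the gradient arriving
            # from later timesteps via the memory.
            torch.autograd.backward([loss, mem_out], [None, grad_mem])
            grad_mem = mem_in.grad
        return total_loss

As with gradient checkpointing, the replay trades a second forward computation
for the memory savings, which is what enables long-range back-propagation
through time at a significantly reduced memory cost.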
Archived Files and Locations
application/pdf 1.4 MB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 2010.06891v2