Coherence boosting: When your pretrained language model is not paying enough attention
by
Nikolay Malkin, Zhen Wang, Nebojsa Jojic
2022
Abstract
Long-range semantic coherence remains a challenge in automatic language
generation and understanding. We demonstrate that large language models have
insufficiently learned the effect of distant words on next-token prediction. We
present coherence boosting, an inference procedure that increases an LM's
focus on a long context. We show the benefits of coherence boosting with
pretrained models by distributional analyses of generated ordinary text and
dialog responses. We also find that coherence boosting applied to
state-of-the-art models yields performance gains on a range of zero-shot NLP
tasks with no additional training.
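The abstract does not spell out the procedure, so the sketch below is a minimal illustration of one natural reading of it: contrasting next-token logits conditioned on the full context with logits conditioned on only a short suffix of it, up-weighting the full-context prediction so that distant tokens influence the output more. The model choice ("gpt2"), the boost weight alpha, and the truncation length short_len are illustrative assumptions, not values taken from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def boosted_next_token_logits(model, input_ids, alpha=0.5, short_len=10):
    # Log-linear contrast (assumed form): (1 + alpha) * full-context logits
    # minus alpha * short-context logits. This up-weights evidence from the
    # distant context, which the short-context pass cannot see.
    # alpha and short_len are hypothetical hyperparameters for illustration.
    with torch.no_grad():
        full_logits = model(input_ids).logits[:, -1, :]
        short_logits = model(input_ids[:, -short_len:]).logits[:, -1, :]
    return (1 + alpha) * full_logits - alpha * short_logits

# Usage: pick the next token greedily under the boosted distribution.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
ids = tok("A long prompt whose early tokens matter ...", return_tensors="pt").input_ids
next_id = boosted_next_token_logits(model, ids).argmax(dim=-1)
print(tok.decode(next_id))

Because both passes share the same vocabulary, taking a softmax of the combined logits is equivalent (up to normalization) to raising the full-context distribution to the power 1 + alpha and dividing by the short-context distribution raised to alpha, which is what makes the combination a log-linear one.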
arXiv:2110.08294v2