Coherence boosting: When your pretrained language model is not paying enough attention

by Nikolay Malkin, Zhen Wang, Nebojsa Jojic

Released as an article.

2022  

Abstract

Long-range semantic coherence remains a challenge in automatic language generation and understanding. We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction. We present coherence boosting, an inference procedure that increases an LM's focus on a long context. We show the benefits of coherence boosting with pretrained models by distributional analyses of generated ordinary text and dialog responses. We also find that coherence boosting with state-of-the-art models yields performance gains on various zero-shot NLP tasks with no additional training.
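The procedure described in the abstract contrasts the model's prediction given the full context with its prediction given only a short, recent portion of that context. The Python sketch below illustrates this log-linear contrast of next-token logits under that reading; the function names, the boosting strength alpha, and the toy logit values are illustrative assumptions, not the authors' code.

import numpy as np

def coherence_boosted_logits(logits_full, logits_short, alpha=0.5):
    """Log-linearly contrast two next-token predictions.

    logits_full  -- logits from the LM conditioned on the full context
    logits_short -- logits from the same LM conditioned on only the most
                    recent few tokens (a "premature" prediction)
    alpha        -- boosting strength; alpha = 0 recovers the plain model
    """
    return (1.0 + alpha) * logits_full - alpha * logits_short

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy 4-token vocabulary: the full context favours token 2, while the
# short context spuriously favours token 0; boosting sharpens the
# preference dictated by the long-range context.
logits_full = np.array([1.0, 0.2, 1.4, -0.5])
logits_short = np.array([2.0, 0.1, 0.3, -0.2])

print("plain:  ", softmax(logits_full).round(3))
print("boosted:", softmax(coherence_boosted_logits(logits_full, logits_short)).round(3))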

Archived Content

There are no accessible files associated with this release. You can check other releases of this work for an accessible version.

"Dark" Preservation Only

Type  article
Stage   submitted
Date   2022-03-16
Version   v2
Language   en
arXiv  2110.08294v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 40f6bf82-0464-4c4e-b5a6-8f38637728a5