Investigating Memorization of Conspiracy Theories in Text Generation
by
Sharon Levy, Michael Saxon, William Yang Wang
2021
Abstract
The adoption of natural language generation (NLG) models can leave
individuals vulnerable to the generation of harmful information memorized by
the models, such as conspiracy theories. While previous studies examine
conspiracy theories in the context of social media, they have not evaluated
their presence in the new space of generative language models. In this work, we
investigate the capability of language models to generate conspiracy theory
text. Specifically, we aim to answer: can we test pretrained generative
language models for the memorization and elicitation of conspiracy theories
without access to the model's training data? We highlight the difficulties of
this task and discuss it in the context of memorization, generalization, and
hallucination. Utilizing a new dataset consisting of conspiracy theory topics
and machine-generated conspiracy theories helps us discover that many
conspiracy theories are deeply rooted in the pretrained language models. Our
experiments demonstrate a relationship between model parameters such as size
and temperature and their propensity to generate conspiracy theory text. These
results indicate the need for a more thorough review of NLG applications before
release and an in-depth discussion of the drawbacks of memorization in
generative language models.
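The abstract describes probing pretrained generative models by prompting them with conspiracy theory topics and varying settings such as model size and sampling temperature. As a rough illustration only (not the authors' code or dataset), the sketch below prompts an off-the-shelf GPT-2 model via the Hugging Face transformers library with a hypothetical topic prompt and samples continuations at several temperatures.

    # Minimal sketch: elicit continuations from a pretrained LM at several
    # temperatures. The prompt is a hypothetical example, not from the paper's dataset.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model_name = "gpt2"  # model size can be varied, e.g. "gpt2-medium", "gpt2-large"
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)

    prompt = "The truth about the moon landing is"  # hypothetical topic prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    for temperature in (0.4, 0.7, 1.0):
        # Sample a continuation; higher temperature yields more varied text.
        output = model.generate(
            **inputs,
            do_sample=True,
            temperature=temperature,
            max_new_tokens=50,
            pad_token_id=tokenizer.eos_token_id,
        )
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f"temperature={temperature}: {text}\n")

Inspecting such samples across model sizes and temperatures is one simple way to see whether memorized conspiracy-theory content surfaces without access to the training data.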
arXiv:2101.00379v3