Semantic-Enhanced Explainable Finetuning for Open-Domain Dialogues
by
Chen Henry Wu, Yinhe Zheng, Yida Wang, Zhenyu Yang, Minlie Huang
2021
Abstract
In this paper, we propose to combine pretrained language models with the
modular dialogue paradigm for open-domain dialogue modeling. Our method,
semantic-enhanced finetuning, instantiates conversation understanding,
planning, and response generation as a language model finetuning task. At
inference, we disentangle semantic and token variations by specifying sampling
methods and constraints for each module separately. For training and
evaluation, we present X-Weibo, a Chinese multi-turn open-domain dialogue
dataset with automatic annotations for emotions, dialogue acts (DAs), and
topical words. Experiments show that semantic-enhanced finetuning outperforms
strong baselines on both non-semantic and semantic metrics, improves
human-evaluated relevance, coherence, and informativeness, and exhibits
considerable controllability over semantic variables.
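The inference scheme described above decodes each module in sequence, with a separate sampling method per module. A minimal sketch of that idea follows; all function and module names here are illustrative assumptions, not taken from the paper's released code, and a toy probability callback stands in for the pretrained language model.

```python
import random

def greedy(probs, rng=random):
    # Deterministic choice, suitable for semantic modules
    # (understanding/planning) where we want low variation.
    return max(probs, key=probs.get)

def top_p(probs, p=0.9, rng=random):
    # Nucleus sampling for the response module: keep the smallest set of
    # tokens whose cumulative probability reaches p, then sample from it.
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for tok, pr in items:
        kept.append((tok, pr))
        total += pr
        if total >= p:
            break
    toks, weights = zip(*kept)
    return rng.choices(toks, weights=weights, k=1)[0]

# Per-module sampler assignment (hypothetical: exact choices per module
# would follow the paper's inference configuration).
SAMPLERS = {"understanding": greedy, "planning": greedy, "response": top_p}

def decode(next_token_probs, modules=("understanding", "planning", "response")):
    """Decode each module's segment with its own sampling method.

    next_token_probs(module, prefix) -> {token: prob} is a stand-in for
    the finetuned language model's next-token distribution.
    """
    out = {}
    for module in modules:
        tokens = []
        while True:
            probs = next_token_probs(module, tokens)
            tok = SAMPLERS[module](probs)
            if tok == "<eos>":
                break
            tokens.append(tok)
        out[module] = tokens
    return out
```

Separating samplers this way is what lets token-level diversity (in the response) be tuned independently of semantic-level decisions (emotion, dialogue act, topical words).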
arXiv:2106.03065v1