Semantic-Enhanced Explainable Finetuning for Open-Domain Dialogues

by Chen Henry Wu, Yinhe Zheng, Yida Wang, Zhenyu Yang, Minlie Huang

Released as an article.

2021  

Abstract

In this paper, we propose to combine pretrained language models with the modular dialogue paradigm for open-domain dialogue modeling. Our method, semantic-enhanced finetuning, instantiates conversation understanding, planning, and response generation as a language model finetuning task. At inference, we disentangle semantic and token variations by specifying sampling methods and constraints for each module separately. For training and evaluation, we present X-Weibo, a Chinese multi-turn open-domain dialogue dataset with automatic annotations for emotions, dialogue acts (DAs), and topical words. Experiments show that semantic-enhanced finetuning outperforms strong baselines on non-semantic and semantic metrics, improves human-evaluated relevance, coherence, and informativeness, and exhibits considerable controllability over semantic variables.
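The per-module decoding idea can be illustrated with a toy sketch: semantic variables (e.g., emotion and dialogue act) are decoded deterministically, while surface tokens are drawn with nucleus sampling. This is not the paper's implementation; the distributions, module names, and sampling choices below are illustrative assumptions.

```python
import random

random.seed(0)

def greedy(dist):
    # Deterministic choice: pick the highest-probability item.
    return max(dist, key=dist.get)

def top_p(dist, p=0.9):
    # Nucleus sampling: sample from the smallest set of items
    # whose cumulative probability reaches p.
    items = sorted(dist.items(), key=lambda kv: -kv[1])
    kept, total = [], 0.0
    for token, prob in items:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights)[0]

# Toy distributions standing in for the model's per-module outputs
# (hypothetical values, not from the paper).
emotion_dist = {"joy": 0.6, "neutral": 0.3, "anger": 0.1}      # understanding
da_dist      = {"inform": 0.5, "question": 0.4, "greet": 0.1}  # planning
token_dist   = {"sure": 0.4, "great": 0.35, "maybe": 0.25}     # generation

# Semantic variables decoded greedily; response tokens sampled.
plan = {"emotion": greedy(emotion_dist), "dialogue_act": greedy(da_dist)}
first_token = top_p(token_dist, p=0.9)
print(plan, first_token)
```

Keeping the semantic plan deterministic while sampling only the surface tokens is one way to disentangle semantic variation from token-level variation, as the abstract describes.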

Archived Content

There are no accessible files associated with this release. You can check other releases of this work for an accessible version.

"Dark" Preservation Only

Type: article
Stage: submitted
Date: 2021-06-06
Version: v1
Language: en
arXiv: 2106.03065v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 56daf757-c2d4-4ff0-9cd5-3bfedd22770e