Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models

by Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, Joelle Pineau

Released as an article.

2016  

Abstract

We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.
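The hierarchical recurrent encoder-decoder (HRED) mentioned in the abstract can be pictured as three stacked recurrent networks: an utterance-level encoder for each turn, a context-level encoder that tracks the dialogue state across turns, and a decoder that generates the response word by word conditioned on that state. Below is a minimal illustrative sketch in PyTorch; it is not the authors' released code, and the class name HRED, the choice of GRUs, and all dimensions are assumptions made here for clarity.

# Minimal HRED sketch (an assumption for illustration, not the paper's implementation).
import torch
import torch.nn as nn

class HRED(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, utt_dim=512, ctx_dim=512):
        super().__init__()
        # Embedding table; could be initialized from pretrained word embeddings,
        # as the abstract suggests for bootstrapping.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.utterance_enc = nn.GRU(emb_dim, utt_dim, batch_first=True)   # encodes each turn
        self.context_enc = nn.GRU(utt_dim, ctx_dim, batch_first=True)     # tracks dialogue state over turns
        self.decoder = nn.GRU(emb_dim, ctx_dim, batch_first=True)         # generates the response
        self.out = nn.Linear(ctx_dim, vocab_size)

    def forward(self, dialogue, response):
        # dialogue: (batch, n_turns, turn_len) token ids; response: (batch, resp_len) token ids
        b, n_turns, turn_len = dialogue.shape
        turns = self.embedding(dialogue.view(b * n_turns, turn_len))
        _, utt_h = self.utterance_enc(turns)            # final hidden state per turn
        utt_h = utt_h.view(b, n_turns, -1)
        _, ctx_h = self.context_enc(utt_h)              # dialogue-level state, shape (1, b, ctx_dim)
        dec_in = self.embedding(response)
        dec_out, _ = self.decoder(dec_in, ctx_h)        # decoder initialized with the dialogue state
        return self.out(dec_out)                        # next-token logits, trained with cross-entropy

Under this reading, the bootstrapping described in the abstract would correspond to initializing the embedding table from pretrained word vectors and pretraining the whole model on a larger question-answer pair corpus before fine-tuning on the dialogue corpus.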

Archived Files and Locations

application/pdf  436.0 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2016-04-06
Version   v3
Language   en
arXiv  1507.04808v3