Dual Supervised Learning
by
Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, Tie-Yan Liu
2017
Abstract
Many supervised learning tasks emerge in dual forms, e.g.,
English-to-French translation vs. French-to-English translation, speech
recognition vs. text-to-speech, and image classification vs. image generation.
Two dual tasks have intrinsic connections with each other due to the
probabilistic correlation between their models. This connection is, however,
not effectively utilized today, since people usually train the models of two
dual tasks separately and independently. In this work, we propose training the
models of two dual tasks simultaneously, and explicitly exploiting the
probabilistic correlation between them to regularize the training process. For
ease of reference, we call the proposed approach dual supervised
learning. We demonstrate that dual supervised learning improves the
practical performance of both tasks across various applications, including
machine translation, image processing, and sentiment analysis.
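The probabilistic correlation the abstract refers to is the identity P(x)P(y|x) = P(y)P(x|y), which any pair of dual models should satisfy. A minimal sketch of how such a constraint could be turned into a training regularizer is shown below; the squared log-space gap and the weight `lam` are illustrative assumptions, not the paper's exact formulation, and the marginal log-probabilities would in practice come from separately estimated language or data models.

```python
def duality_regularizer(log_p_x, log_p_y_given_x, log_p_y, log_p_x_given_y):
    """Squared violation of the probabilistic duality
    P(x) * P(y|x) = P(y) * P(x|y), measured in log space.
    Zero when the two dual models agree exactly."""
    gap = (log_p_x + log_p_y_given_x) - (log_p_y + log_p_x_given_y)
    return gap ** 2

def dsl_loss(loss_xy, loss_yx, log_p_x, log_p_y_given_x,
             log_p_y, log_p_x_given_y, lam=0.01):
    """Joint objective: both supervised losses plus a duality penalty.
    lam is a hypothetical trade-off weight chosen for illustration."""
    reg = duality_regularizer(log_p_x, log_p_y_given_x,
                              log_p_y, log_p_x_given_y)
    return loss_xy + loss_yx + lam * reg
```

When the duality holds exactly (e.g. log P(x) + log P(y|x) = log P(y) + log P(x|y)), the penalty vanishes and the objective reduces to the sum of the two ordinary supervised losses, which is the separate-training baseline the abstract contrasts with.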
arXiv:1707.00415v1