Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling
by
Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin
2017
Abstract
Recurrent neural networks (RNNs) have shown promising performance for
language modeling. However, traditional training of RNNs using back-propagation
through time often suffers from overfitting. One reason for this is that
stochastic optimization (used for large training sets) does not provide good
estimates of model uncertainty. This paper leverages recent advances in
stochastic gradient Markov Chain Monte Carlo (also appropriate for large
training sets) to learn weight uncertainty in RNNs. It yields a principled
Bayesian learning algorithm, adding gradient noise during training (enhancing
exploration of the model-parameter space) and model averaging when testing.
Extensive experiments on various RNN models and across a broad range of
applications demonstrate the superiority of the proposed approach over
stochastic optimization.
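
The training recipe described in the abstract (inject gradient noise during training, then average model predictions over posterior weight samples at test time) corresponds in its simplest instance to stochastic gradient Langevin dynamics (SGLD). Below is a minimal Python/PyTorch sketch of that idea; the RNNLM model, hyper-parameters, and helper names are illustrative assumptions, not the paper's exact algorithm or code (the paper evaluates several SG-MCMC samplers).

# A minimal SGLD-style sketch, assuming PyTorch. Everything below is an
# illustrative assumption, not the authors' implementation.
import copy
import math
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    """A small LSTM language model, included only to make the sketch runnable."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.out(hidden)

def sgld_step(model, loss, lr, dataset_size, prior_var=1.0):
    """One SGLD update: a gradient step on a mini-batch estimate of the
    negative log-posterior plus Gaussian noise whose variance matches the
    step size (the injected gradient noise mentioned in the abstract)."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            # Scale the mean mini-batch gradient up to the full data set and
            # add the gradient of a Gaussian prior (an L2 term).
            grad = dataset_size * p.grad + p / prior_var
            noise = math.sqrt(lr) * torch.randn_like(p)
            p.add_(-0.5 * lr * grad + noise)

def train(model, batches, dataset_size, lr=1e-5, burn_in=1000, thin=100):
    """Run SGLD and keep thinned posterior weight samples after burn-in."""
    criterion = nn.CrossEntropyLoss()
    samples = []
    for step, (inputs, targets) in enumerate(batches):
        logits = model(inputs)
        loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        sgld_step(model, loss, lr, dataset_size)
        if step >= burn_in and step % thin == 0:
            samples.append(copy.deepcopy(model.state_dict()))
    return samples

def predict(model, samples, inputs):
    """Model averaging at test time: average the predictive distributions of
    the collected weight samples (a Monte Carlo estimate of the posterior
    predictive), rather than averaging the weights themselves."""
    probs = 0.0
    for state in samples:
        model.load_state_dict(state)
        with torch.no_grad():
            probs = probs + torch.softmax(model(inputs), dim=-1)
    return probs / len(samples)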
Archived Files and Locations
application/pdf 984.7 kB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 1611.08034v2