Radial and Directional Posteriors for Bayesian Neural Networks release_vpaguogbujdpljbve3g2tlrphy

by Changyong Oh, Kamil Adamczewski, Mijung Park

Released as an article.

2019  

Abstract

We propose a new variational family for Bayesian neural networks. We decompose the variational posterior into two components: a radial component that captures the strength of each neuron in terms of its magnitude, and a directional component that captures the statistical dependencies among the weight parameters. The dependencies learned via the directional density provide better modeling performance compared to the widely used Gaussian mean-field-type variational family. In addition, the strength of input and output neurons learned via the radial density provides a structured way to compress neural networks. Indeed, experiments show that our variational family improves predictive performance and yields compressed networks simultaneously.
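The radial-directional decomposition described in the abstract can be sketched as sampling a weight vector as the product of a scalar magnitude and a unit direction. The minimal sketch below uses a log-normal radial density and a uniform directional draw (Gaussian normalized to the sphere); both are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_radial_directional(dim, n_samples, mu_r=0.0, sigma_r=0.1):
    """Draw weight vectors w = r * d (radial-directional decomposition)."""
    # Radial component: per-sample magnitude r > 0, here log-normal
    # (illustrative choice; any positive radial density could be learned).
    r = rng.lognormal(mean=mu_r, sigma=sigma_r, size=(n_samples, 1))
    # Directional component: unit vector d on the sphere, obtained by
    # normalizing a Gaussian draw. A learned directional density would
    # replace this uniform draw to capture dependencies among weights.
    z = rng.standard_normal((n_samples, dim))
    d = z / np.linalg.norm(z, axis=1, keepdims=True)
    return r * d

w = sample_radial_directional(dim=8, n_samples=1000)
```

Because the magnitude and direction are sampled independently, the norm of each sampled weight vector equals its radial draw, which is what makes per-neuron strength directly readable for compression.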

Archived Files and Locations

application/pdf  1.7 MB
file_mxyxjfe7qrbl7ljmv2s3hilrya
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-02-07
Version   v1
Language   en
arXiv  1902.02603v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: b9dd059a-8e4b-497b-98ae-c5c0ccc6f845
API URL: JSON