Inherent Weight Normalization in Stochastic Neural Networks
by
Georgios Detorakis, Sourav Dutta, Abhishek Khanna, Matthew Jerry,
Suman Datta, Emre Neftci
2019
Abstract
Multiplicative stochasticity such as Dropout improves the robustness and
generalizability of deep neural networks. Here, we further demonstrate that
always-on multiplicative stochasticity combined with simple threshold neurons
constitutes a sufficient set of operations for deep neural networks. We call such models Neural
Sampling Machines (NSM). We find that the probability of activation of the NSM
exhibits a self-normalizing property that mirrors Weight Normalization, a
previously studied mechanism that fulfills many of the features of Batch
Normalization in an online fashion. The normalization of activities during
training speeds up convergence by preventing internal covariate shift caused by
changes in the input distribution. The always-on stochasticity of the NSM
confers the following advantages: the network is identical in the inference and
learning phases, making the NSM suitable for online learning; it can exploit
stochasticity inherent to a physical substrate, such as analog non-volatile
memories for in-memory computing; and it is suitable for Monte Carlo sampling,
while requiring almost exclusively addition and comparison operations. We
demonstrate NSMs on standard classification benchmarks (MNIST and CIFAR) and
event-based classification benchmarks (N-MNIST and DVS Gestures). Our results
show that NSMs perform comparably to, or better than, conventional artificial
neural networks with the same architecture.
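
To make the mechanism concrete, the sketch below simulates a single NSM-style neuron in Python: each weight is gated by independent, always-on Bernoulli noise, and the unit fires when the noisy pre-activation crosses a threshold. The Bernoulli noise model, the zero threshold, and all names are illustrative assumptions based only on the abstract, not the paper's exact formulation; the Gaussian approximation shows where the Weight-Normalization-like scale invariance can arise.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def nsm_neuron(x, w, p=0.5, n_samples=10000):
    # Always-on multiplicative stochasticity: every weight is gated by an
    # independent Bernoulli(p) mask on each forward pass, and the simple
    # threshold neuron fires when the noisy pre-activation exceeds zero.
    xi = rng.binomial(1, p, size=(n_samples, w.size))
    pre = (xi * w * x).sum(axis=1)
    return (pre > 0).mean()  # empirical probability of activation

def firing_prob_gaussian(x, w, p=0.5):
    # Central-limit approximation of the same probability. The mean/std
    # ratio divides w.x by the norm of the element-wise product w*x, so
    # rescaling w by any positive constant leaves the probability
    # unchanged: a self-normalizing, Weight-Normalization-like effect.
    mu = p * np.dot(w, x)
    sigma = np.sqrt(p * (1.0 - p) * np.sum((w * x) ** 2))
    return norm.cdf(mu / sigma)

x = rng.normal(size=200)
w = rng.normal(size=200)
print(nsm_neuron(x, w))                 # Monte Carlo estimate
print(firing_prob_gaussian(x, w))       # closed-form approximation
print(firing_prob_gaussian(x, 10 * w))  # scale-invariant in w

The last line illustrates the self-normalizing property described in the abstract: multiplying the weight vector by a positive constant does not change the approximate activation probability, which is the hallmark of a Weight-Normalization-like divisive rescaling.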
Archived Files and Locations
application/pdf, 609.6 kB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1910.12316v1