Surrogate Gradient Learning in Spiking Neural Networks

by Emre O. Neftci, Hesham Mostafa, Friedemann Zenke

Released as an article.

2019  

Abstract

Spiking neural networks are nature's versatile solution to fault-tolerant and energy-efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking neural network processors attempt to emulate biological neural networks. These developments have created an imminent need for methods and tools that enable such systems to solve real-world signal processing problems. Like conventional neural networks, spiking neural networks can be trained on real, domain-specific data. However, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates step-by-step the problems typically encountered when training spiking neural networks, and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. To that end, it gives an overview of existing approaches and provides an introduction to surrogate gradient methods in particular, as a flexible and efficient way to overcome the aforementioned challenges.
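The core difficulty the abstract alludes to is that a spiking neuron's output is a binary threshold function, whose derivative is zero almost everywhere, so gradients cannot propagate through it directly. A minimal sketch of the surrogate gradient idea is shown below, assuming a fast-sigmoid surrogate derivative (in the style of SuperSpike); the function names, the threshold `theta`, and the steepness `beta` are illustrative choices, not notation from the paper.

```python
import numpy as np

def spike_forward(v, theta=1.0):
    # Forward pass: non-differentiable Heaviside step
    # (the neuron emits a spike when the membrane potential v crosses theta).
    return (v >= theta).astype(float)

def spike_surrogate_grad(v, theta=1.0, beta=10.0):
    # Backward pass: a smooth surrogate replaces the Heaviside's
    # ill-defined derivative. Here: derivative of a fast sigmoid,
    # peaked at v == theta and decaying away from the threshold.
    return 1.0 / (beta * np.abs(v - theta) + 1.0) ** 2

v = np.array([-0.5, 0.9, 1.0, 1.5])
spikes = spike_forward(v)        # binary spike train: [0., 0., 1., 1.]
grads = spike_surrogate_grad(v)  # nonzero everywhere, maximal at v == theta
```

In a full training setup, the forward pass keeps the exact binary spikes while backpropagation substitutes the surrogate derivative, which is what makes end-to-end gradient descent applicable to spiking networks.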

Archived Files and Locations

application/pdf  1.1 MB
file_qll4ao4s4nhkrkmjybygovuhny
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-01-28
Version   v1
Language   en
arXiv  1901.09948v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 0d7bd541-efee-4411-b510-ccd85be73daf