Scalable Gradients for Stochastic Differential Equations release_tflksxw3svhcboj2rungareeom

by Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David Duvenaud

Released as an article.

2020  

Abstract

The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
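The abstract describes fitting neural-network SDE dynamics with adjoint-based, constant-memory gradients. Below is a minimal sketch of that workflow, assuming the torchsde package released alongside this work and its sdeint_adjoint interface; the model architecture, loss, and hyperparameters are illustrative, not taken from the paper.

```python
# Minimal sketch (illustrative, not the authors' exact setup): training a
# neural SDE with adjoint-based gradients via torchsde.sdeint_adjoint.
import torch
import torchsde


class NeuralSDE(torch.nn.Module):
    noise_type = "diagonal"   # diagonal Brownian motion
    sde_type = "ito"          # Ito interpretation

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.drift_net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))
        self.diff_net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def f(self, t, y):   # drift term
        return self.drift_net(y)

    def g(self, t, y):   # diagonal diffusion term (kept positive)
        return torch.nn.functional.softplus(self.diff_net(y))


sde = NeuralSDE(dim=3)
y0 = torch.zeros(16, 3)                 # batch of initial states
ts = torch.linspace(0.0, 1.0, 20)       # observation times
optimizer = torch.optim.Adam(sde.parameters(), lr=1e-3)

# Gradients are obtained by solving a backward (adjoint) SDE, so memory
# cost stays constant in the number of solver steps.
optimizer.zero_grad()
ys = torchsde.sdeint_adjoint(sde, y0, ts, method="milstein")
loss = ys[-1].pow(2).sum()              # toy terminal-state loss
loss.backward()
optimizer.step()
```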

Archived Files and Locations

application/pdf  2.5 MB
file_qiimwoffnje4rmpxsjinv7l4bq
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-02-24
Version   v4
Language   en
arXiv  2001.01328v4
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 90c34897-ee02-4f71-9e86-f7eefc0648ff
API URL: JSON