Tutorial on amortized optimization for learning to optimize over continuous domains

by Brandon Amos

Released as an article.

2022  

Abstract

Optimization is a ubiquitous modeling tool and is often deployed in settings that repeatedly solve similar instances of the same problem. Amortized optimization methods use learning to predict the solutions to problems in these settings, leveraging the shared structure between similar problem instances. In this tutorial, we will discuss the key design choices behind amortized optimization, roughly categorizing 1) models into fully-amortized and semi-amortized approaches, and 2) learning methods into regression-based and objective-based. We then view existing applications through these foundations to draw connections between them, including for manifold optimization, variational inference, sparse coding, meta-learning, control, reinforcement learning, convex optimization, and deep equilibrium networks. This framing enables us to see, for example, that the amortized inference in variational autoencoders is conceptually identical to value gradients in control and reinforcement learning, as they both use fully-amortized models with an objective-based loss. The source code for this tutorial is available at https://www.github.com/facebookresearch/amortized-optimization-tutorial

Archived Files and Locations

application/pdf  4.4 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage  submitted
Date  2022-02-21
Version  v2
Language  en
arXiv  2202.00665v2