Truncated Back-propagation for Bilevel Optimization
release_wwjtv6kdqffcbed27exojrlvva

by Amirreza Shaban, Ching-An Cheng, Nathan Hatch, Byron Boots

Released as an article.

2018  

Abstract

Bilevel optimization has been recently revisited for designing and analyzing algorithms in hyperparameter tuning and meta learning tasks. However, due to its nested structure, evaluating exact gradients for high-dimensional problems is computationally challenging. One heuristic to circumvent this difficulty is to use the approximate gradient given by performing truncated back-propagation through the iterative optimization procedure that solves the lower-level problem. Although promising empirical performance has been reported, its theoretical properties are still unclear. In this paper, we analyze the properties of this family of approximate gradients and establish sufficient conditions for convergence. We validate this on several hyperparameter tuning and meta learning tasks. We find that optimization with the approximate gradient computed using few-step back-propagation often performs comparably to optimization with the exact gradient, while requiring far less memory and half the computation time.
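As an illustration of the approach described above, the sketch below computes a K-step truncated hypergradient for a toy ridge-regression hyperparameter problem: the inner loop is run to completion, but only the last K unrolled gradient steps are back-propagated through to the hyperparameter. This is a minimal sketch, not the authors' implementation; the toy data, the plain-gradient inner optimizer, and the names (lam, T, K, inner_loss, outer_loss) are assumptions made for this example.

```python
# Minimal sketch of K-step truncated back-propagation for a hypergradient.
# Assumed setup: ridge penalty `lam` tuned on a validation split.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
x_tr, y_tr = jax.random.normal(k1, (50, 5)), jax.random.normal(k2, (50,))
x_val, y_val = jax.random.normal(k3, (20, 5)), jax.random.normal(k4, (20,))

def inner_loss(w, lam):
    # lower-level objective: regularized training loss
    return jnp.mean((x_tr @ w - y_tr) ** 2) + lam * jnp.sum(w ** 2)

def outer_loss(w):
    # upper-level objective: validation loss at the inner solution
    return jnp.mean((x_val @ w - y_val) ** 2)

def inner_step(w, lam, lr=0.05):
    # one step of the iterative lower-level optimizer (plain gradient descent)
    return w - lr * jax.grad(inner_loss)(w, lam)

def truncated_hypergrad(lam, T=100, K=10):
    # First T - K inner steps are run outside the differentiated function,
    # so their contribution to the hypergradient is discarded (truncation).
    w = jnp.zeros(5)
    for _ in range(T - K):
        w = inner_step(w, lam)
    w = jax.lax.stop_gradient(w)

    def last_k(lam, w):
        # Only the last K unrolled steps are back-propagated through.
        for _ in range(K):
            w = inner_step(w, lam)
        return outer_loss(w)

    return jax.grad(last_k)(lam, w)

print("K-step truncated hypergradient:", truncated_hypergrad(jnp.asarray(0.1)))
```

In this toy setting, memory grows only with K rather than with the full horizon T, which mirrors the trade-off discussed in the abstract.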

Archived Files and Locations

application/pdf  933.0 kB
file_yp6hbexejrd37n7zyhl65wbd2u
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-10-25
Version   v1
Language   en
arXiv  1810.10667v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 54f79cd2-97c2-4f5a-a147-4de9cddbcaa1