Dynamic Tensor Rematerialization
by
Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, Zachary Tatlock
2020
Abstract
Checkpointing enables training larger models by freeing intermediate
activations and recomputing them on demand. Previous checkpointing techniques
are difficult to generalize to dynamic models because they statically plan
recomputations offline. We present Dynamic Tensor Rematerialization (DTR), a
greedy online algorithm for heuristically checkpointing arbitrary models. DTR
is extensible and general: it is parameterized by an eviction policy and only
collects lightweight metadata on tensors and operators. Though DTR has no
advance knowledge of the model or training task, we prove it can train an
N-layer feedforward network on an Ω(√(N)) memory budget with only
O(N) tensor operations. Moreover, we identify a general eviction
heuristic and show how it allows DTR to automatically provide favorable
checkpointing performance across a variety of models and memory budgets.
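The abstract describes DTR as a greedy online algorithm parameterized by an eviction policy over lightweight per-tensor metadata. The following is a minimal sketch of such a policy, not the paper's exact heuristic: the record fields (compute cost, memory footprint, staleness) follow the kinds of metadata the abstract mentions, while the specific score formula and all names here are illustrative assumptions.

```python
class TensorRecord:
    """Lightweight per-tensor metadata a DTR-style runtime might track (sketch)."""
    def __init__(self, name, compute_cost, memory, last_access):
        self.name = name
        self.compute_cost = compute_cost  # estimated cost to recompute the tensor
        self.memory = memory              # bytes of memory the tensor occupies
        self.last_access = last_access    # timestamp of the tensor's last use

def eviction_score(t, now):
    # Assumed heuristic: prefer evicting tensors that are cheap to
    # recompute, large, and stale (lower score = better eviction candidate).
    staleness = max(now - t.last_access, 1e-9)
    return t.compute_cost / (t.memory * staleness)

def choose_victim(resident, now):
    """Greedy choice: evict the resident tensor with the lowest score."""
    return min(resident, key=lambda t: eviction_score(t, now))

# Usage: a large, stale, cheap-to-recompute tensor is evicted first.
pool = [
    TensorRecord("a", compute_cost=5.0, memory=1024, last_access=90.0),
    TensorRecord("b", compute_cost=1.0, memory=4096, last_access=10.0),
]
victim = choose_victim(pool, now=100.0)  # -> record "b"
```

Because the policy inspects only runtime metadata, it needs no static graph or offline plan, which is what lets an approach like this handle dynamic models.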
arXiv: 2006.09616v1