CRUM: Checkpoint-Restart Support for CUDA's Unified Memory
by
Rohan Garg, Apoorve Mohan, Michael Sullivan, Gene Cooperman
2018
Abstract
Unified Virtual Memory (UVM) was recently introduced on NVIDIA GPUs.
Through software and hardware support, UVM provides a coherent shared memory
across the entire heterogeneous node, migrating data as appropriate. The older
CUDA programming style is akin to that of older large-memory UNIX applications, which directly loaded and unloaded memory segments. Newer CUDA programs have started taking advantage of UVM for the same reason of superior programmability for which UNIX applications long ago switched to assuming the presence of virtual memory. Therefore, checkpointing of UVM will become
increasingly important, especially as NVIDIA CUDA continues to gain wider
popularity: 87 of the top 500 supercomputers in the latest listings are
GPU-accelerated, with a current trend of ten additional GPU-based
supercomputers each year.
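The programmability contrast can be sketched with the standard CUDA runtime API: the older style requires a separate device allocation and explicit copies, while UVM gives one pointer that both CPU and GPU dereference (a sketch, not code from the paper; the function names `old_style` and `uvm_style` are illustrative):

```c
/* Sketch of the two CUDA memory-management styles; assumes a CUDA-capable
 * toolchain and GPU.  Error checking omitted for brevity. */
#include <cuda_runtime.h>

void old_style(float *host, size_t n) {
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));     /* device-only buffer */
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    /* ... launch kernels on dev ... */
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

void uvm_style(size_t n) {
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));  /* one coherent pointer */
    for (size_t i = 0; i < n; i++)                /* CPU touches the pages */
        data[i] = (float)i;
    /* ... launch kernels on data directly; the driver migrates pages
     * between host and device on demand, no explicit cudaMemcpy ... */
    cudaDeviceSynchronize();
    cudaFree(data);
}
```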
A new scalable checkpointing mechanism, CRUM (Checkpoint-Restart for Unified Memory), is demonstrated for hybrid CUDA/MPI computations across multiple compute nodes. CRUM supports fast, forked checkpointing, which largely overlaps the CUDA computation with the writing of the checkpoint image to stable storage. The runtime overhead of using CRUM is 6% on average, and the time for forked checkpointing is up to 40 times less than that of traditional, synchronous checkpointing.
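The host-side idea behind forked checkpointing can be sketched in a few lines of POSIX C, assuming a single process whose state lives in ordinary memory (the function names `checkpoint_forked` and `write_image` are illustrative, not CRUM's actual API):

```c
/* Minimal sketch of fork-based ("forked") checkpointing.  fork() gives the
 * child a copy-on-write snapshot of the parent's address space; the child
 * writes that snapshot to stable storage while the parent immediately
 * resumes computing, overlapping computation with checkpoint I/O. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative: serialize a region of application state to a file. */
static int write_image(const char *path, const void *state, size_t len) {
    FILE *f = fopen(path, "wb");
    if (f == NULL) return -1;
    size_t n = fwrite(state, 1, len, f);
    fclose(f);
    return (n == len) ? 0 : -1;
}

/* Fork a child to persist the snapshot; the parent returns at once. */
int checkpoint_forked(const char *path, const void *state, size_t len) {
    pid_t pid = fork();
    if (pid < 0) return -1;             /* fork failed */
    if (pid == 0)                       /* child: sees the COW snapshot */
        _exit(write_image(path, state, len) == 0 ? 0 : 1);
    return 0;                           /* parent: keep computing */
}
```

This only sketches the host-side overlap; capturing GPU and UVM state across a checkpoint requires the additional machinery that CRUM itself provides.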
Archived Files and Locations
application/pdf, 817.4 kB (arXiv:1808.00117v1), available from arxiv.org (repository) and web.archive.org (webarchive)