Optimization-based Block Coordinate Gradient Coding
by
Qi Wang, Ying Cui, Chenglin Li, Junni Zou, Hongkai Xiong
2021
Abstract
Existing gradient coding schemes introduce identical redundancy across the
coordinates of gradients and hence cannot fully utilize the computation results
from partial stragglers. This motivates the introduction of diverse
redundancies across the coordinates of gradients. This paper considers a
distributed computation system consisting of one master and N workers
characterized by a general partial straggler model and focuses on solving a
general large-scale machine learning problem with L model parameters. We show that it is sufficient to provide at most N levels of redundancy for tolerating 0, 1, …, N-1 stragglers, respectively.
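For intuition about these redundancy levels, here is a minimal Python sketch (an illustration assuming the cyclic placement of classic gradient coding, not code from the paper): replicating each of the N data parts on s + 1 of the N workers lets the master tolerate any s stragglers.

    import itertools

    def cyclic_assignment(n_workers, s):
        """Cyclic placement tolerating s stragglers: each of the n_workers
        data parts is replicated on s + 1 workers, so any n_workers - s
        finishing workers jointly cover every part."""
        return [{(i + j) % n_workers for j in range(s + 1)}
                for i in range(n_workers)]

    # Check the tolerance claim for N = 5 and s = 2 (illustrative sizes).
    N, s = 5, 2
    parts_per_worker = cyclic_assignment(N, s)
    for survivors in itertools.combinations(range(N), N - s):
        covered = set().union(*(parts_per_worker[w] for w in survivors))
        assert covered == set(range(N))  # every data part is still computed

A block coordinate scheme applies one such placement per coordinate block, so a block at tolerance level s costs each worker roughly (s + 1) times the work of an uncoded block of the same size.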
Consequently, we propose an optimal block coordinate gradient coding scheme based on a stochastic optimization problem that optimizes the partition of the L coordinates into N blocks, each with identical redundancy, to minimize the expected overall runtime for collaboratively computing the gradient.
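The paper's exact runtime model is not reproduced here, but the following toy Monte Carlo estimator sketches what such a stochastic objective can look like. The assumptions are mine: shifted-exponential per-unit compute times, workers processing blocks in the order s = 0, 1, …, N-1, per-worker work (s + 1)·l_s for a block of l_s coordinates at tolerance s, and recovery of that block once the N - s fastest workers have reached it.

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_runtime(block_sizes, n_trials=10_000, shift=1.0, rate=1.0):
        """Toy Monte Carlo estimate of the expected overall runtime for a
        partition, where block_sizes[s] is the number of coordinates in the
        block that tolerates s stragglers (replication factor s + 1)."""
        n = len(block_sizes)
        # Cumulative per-worker work after finishing blocks 0..s.
        work = np.cumsum([(s + 1) * l for s, l in enumerate(block_sizes)])
        # Per-unit compute times: shifted-exponential, one row per trial.
        t = shift + rng.exponential(1.0 / rate, size=(n_trials, n))
        t.sort(axis=1)  # ascending order statistics of worker slowness
        runtimes = np.zeros(n_trials)
        for s, l in enumerate(block_sizes):
            if l == 0:
                continue
            # Block s is recovered when the (N - s)-th fastest worker reaches it.
            runtimes = np.maximum(runtimes, t[:, n - s - 1] * work[s])
        return runtimes.mean()

    # Uniform partition of L = 1000 coordinates over N = 4 tolerance levels.
    print(expected_runtime([250, 250, 250, 250]))

Processing the low-tolerance blocks first is what exploits partial stragglers in this toy model: every worker contributes to the block that needs all N results, while the later, more redundant blocks only wait for the faster workers.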
We obtain an optimal solution using a stochastic projected subgradient method and propose two low-complexity approximate solutions with closed-form expressions for the stochastic optimization problem.
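As a rough illustration of the method class (a generic stochastic projected subgradient iteration, not the paper's algorithm), one can relax the block sizes to x ∈ R_+^N with Σ_s x_s = L, sample one runtime realization under the same toy model as above, take a subgradient of that sample's max-of-linear-functions objective, and project the update back onto the scaled simplex. All constants and names below are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def project_scaled_simplex(v, z):
        """Euclidean projection of v onto {x >= 0, sum(x) = z} (sort-based)."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - z
        rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    def sample_subgradient(x, shift=1.0, rate=1.0):
        """Subgradient of one sampled overall-runtime realization at x,
        under the same toy model as in the estimator above."""
        n = len(x)
        t = np.sort(shift + rng.exponential(1.0 / rate, size=n))
        work = np.cumsum((np.arange(n) + 1) * x)  # cumulative work per worker
        vals = t[::-1] * work                     # recovery time of each block s
        s_star = int(np.argmax(vals))             # block attaining the max
        g = np.zeros(n)
        # Gradient of the active linear piece t_{(N-s*)} * sum_{u<=s*} (u+1) x_u.
        g[: s_star + 1] = t[n - s_star - 1] * (np.arange(s_star + 1) + 1)
        return g

    L, N, n_steps = 1000.0, 4, 2000
    x = np.full(N, L / N)  # start from the uniform partition
    for k in range(1, n_steps + 1):
        step = 10.0 / np.sqrt(k)  # diminishing step size (arbitrary constant)
        x = project_scaled_simplex(x - step * sample_subgradient(x), L)
    print(np.round(x, 1))

Because each sampled runtime is a pointwise maximum of linear functions of x, the gradient of the piece attaining the maximum is a valid subgradient, which is all the iteration needs.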
We also show that, under a shifted-exponential distribution, for any L, the expected overall runtimes of the two approximate solutions and the minimum overall runtime have sub-linear multiplicative gaps in N. To the best of our knowledge, this is the first work to optimize the redundancy that gradient coding introduces across the coordinates of gradients.
arXiv: 2109.08933v1 (available at arxiv.org)