Faster Matchings via Learned Duals
by
Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii
2021
Abstract
A recent line of research investigates how algorithms can be augmented with
machine-learned predictions to overcome worst-case lower bounds. This area has
revealed interesting algorithmic insights into problems, with particular
success in the design of competitive online algorithms. However, the question
of improving algorithm running times with predictions has remained largely
unexplored.
We take a first step in this direction by combining the idea of
machine-learned predictions with the idea of "warm-starting" primal-dual
algorithms. We consider one of the most important primitives in combinatorial
optimization: weighted bipartite matching and its generalization to
b-matching. We identify three key challenges when using learned dual
variables in a primal-dual algorithm. First, predicted duals may be infeasible,
so we give an algorithm that efficiently maps predicted infeasible duals to
nearby feasible solutions. Second, once the duals are feasible, they may not be
optimal, so we show that they can be used to quickly find an optimal solution.
Finally, such predictions are useful only if they can be learned, so we show
that the problem of learning duals for matching has low sample complexity. We
validate our theoretical findings through experiments on both real and
synthetic data. As a result, we give a rigorous, practical, and empirically
effective method to compute bipartite matchings.
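The first challenge above, mapping predicted but infeasible duals to nearby feasible ones, can be illustrated concretely. The sketch below is a simplified, hypothetical repair step (not the paper's exact algorithm, which minimizes the distance to feasibility): for maximum-weight bipartite matching, the dual constraints require alpha[i] + beta[j] >= w[i][j] on every edge, and one simple way to restore feasibility is to raise each beta[j] just enough to satisfy its violated constraints.

```python
# Hedged sketch of dual repair for maximum-weight bipartite matching.
# Feasibility for the LP dual requires alpha[i] + beta[j] >= w[i][j]
# for every edge (i, j). This is NOT the paper's exact procedure; it is
# a minimal illustration of mapping infeasible duals to feasible ones.

def repair_duals(w, alpha, beta):
    """Raise each beta[j] by the minimum amount needed for feasibility."""
    n = len(w)
    beta = list(beta)  # do not mutate the caller's predictions
    for j in range(n):
        needed = max(w[i][j] - alpha[i] for i in range(n))
        if beta[j] < needed:
            beta[j] = needed
    return beta

def is_feasible(w, alpha, beta):
    """Check every dual constraint alpha[i] + beta[j] >= w[i][j]."""
    n = len(w)
    return all(alpha[i] + beta[j] >= w[i][j]
               for i in range(n) for j in range(n))

# Example: predicted duals that violate the constraint on edge (0, 0).
w = [[3, 1], [2, 4]]
alpha, beta = [1, 1], [0, 0]
beta_fixed = repair_duals(w, alpha, beta)
```

After repair, the duals are feasible and can seed a primal-dual solver (e.g., a Hungarian-style algorithm restricted to tight edges), which is the "warm-starting" idea the abstract describes. Raising only the beta side keeps the repair simple but need not produce the closest feasible point; the paper's contribution is an efficient algorithm with such a proximity guarantee.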
Archived Files and Locations
application/pdf, 1.4 MB — arxiv.org (repository); web.archive.org (webarchive)
arXiv: 2107.09770v1