GIANT: Globally Improved Approximate Newton Method for Distributed Optimization

by Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney

Released as an article.

2018  

Abstract

For distributed computing environments, we consider the empirical risk minimization problem and propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, which is sent to the main driver. The main driver then averages all the ANT directions received from the workers to form a Globally Improved ANT (GIANT) direction. GIANT is highly communication-efficient and naturally exploits the trade-off between local computation and global communication, in that more local computation results in fewer overall rounds of communication. Theoretically, we show that GIANT enjoys an improved convergence rate compared with first-order methods and existing distributed Newton-type methods. Further, and in sharp contrast with many existing distributed Newton-type methods as well as popular first-order methods, a highly advantageous practical feature of GIANT is that it involves only one tuning parameter. We conduct large-scale experiments on a computer cluster and empirically demonstrate the superior performance of GIANT.
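
The abstract outlines one GIANT iteration: a communication round to form the global gradient, a purely local solve of each worker's own Newton system, and a second communication round in which the driver averages the resulting ANT directions and takes a step. The following is a minimal single-process sketch of that loop for ridge-regularized least squares; the loss, the data split, the fixed step size, and all names (giant_step, X_parts, and so on) are illustrative assumptions, and the authors' actual method may differ in details such as the local solver and step-size rule.

    import numpy as np

    def giant_step(X_parts, y_parts, w, lam=0.1, step=1.0):
        """One GIANT iteration over m simulated workers (sketch)."""
        m = len(X_parts)
        # Round 1 (communication): average the local gradients to
        # obtain the exact global gradient of the regularized loss.
        grads = [Xi.T @ (Xi @ w - yi) / len(yi) + lam * w
                 for Xi, yi in zip(X_parts, y_parts)]
        g = sum(grads) / m
        # Local computation: each worker solves its own Newton system
        # H_i d_i = g using only its local Hessian H_i.
        dirs = []
        for Xi, yi in zip(X_parts, y_parts):
            Hi = Xi.T @ Xi / len(yi) + lam * np.eye(len(w))
            dirs.append(np.linalg.solve(Hi, g))
        # Round 2 (communication): the driver averages the ANT
        # directions into the GIANT direction and updates the iterate.
        d = sum(dirs) / m
        return w - step * d

    # Toy usage: 4 workers, 100 samples each, 10 features.
    rng = np.random.default_rng(0)
    X_parts = [rng.standard_normal((100, 10)) for _ in range(4)]
    w_true = rng.standard_normal(10)
    y_parts = [Xi @ w_true + 0.01 * rng.standard_normal(100) for Xi in X_parts]
    w = np.zeros(10)
    for _ in range(5):
        w = giant_step(X_parts, y_parts, w)

Note how the per-iteration communication consists of only two rounds of vectors of the same dimension as w, which is the sense in which more local computation (the local Newton solves) buys fewer overall rounds of communication.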

Archived Files and Locations

application/pdf  2.1 MB
file_22pri2o34zdwxnsjo6n2ftvboi
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-05-09
Version   v3
Language   en
arXiv  1709.03528v3
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 44e45995-8761-4a8d-bfcd-36dfeab5d360