Byzantine-Robust Learning on Heterogeneous Datasets via Resampling

by Lie He, Sai Praneeth Karimireddy, Martin Jaggi

Released as an article.

2020  

Abstract

In Byzantine-robust distributed optimization, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages to the server. While this problem has received significant attention recently, most current defenses assume that the workers hold identical data. For the realistic case where the data across workers is heterogeneous (non-iid), we design new attacks that circumvent these defenses, leading to a significant loss of performance. We then propose a simple resampling scheme that adapts existing robust algorithms to heterogeneous datasets at a negligible computational cost. We theoretically and experimentally validate our approach, showing that combining resampling with existing robust algorithms is effective against challenging attacks.
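To make the idea of the abstract concrete, here is a minimal sketch of how a resampling step could wrap an existing robust aggregator: worker gradients are shuffled, averaged in small groups of size s, and the group averages are then fed to a robust aggregation rule. This is an illustration only, assuming NumPy arrays of per-worker gradients, a hypothetical group-size parameter s, and coordinate-wise median as a stand-in robust aggregator; it is not the paper's exact procedure.

```python
import numpy as np

def resample_then_aggregate(grads, s=2, seed=None):
    """Average random groups of s worker gradients, then apply a
    robust aggregator (coordinate-wise median here as a stand-in).

    grads: array of shape (n_workers, dim), one gradient per worker.
    s:     group size for resampling; s=1 recovers the plain aggregator.
    """
    grads = np.asarray(grads)
    rng = np.random.default_rng(seed)
    n = len(grads)
    perm = rng.permutation(n)                        # shuffle worker indices
    groups = np.array_split(perm, max(n // s, 1))    # groups of ~s workers
    # Averaging s gradients per group reduces the inter-worker (non-iid)
    # variance seen by the robust aggregator, at the cost of letting a
    # Byzantine gradient contaminate the group it lands in.
    averaged = np.stack([grads[g].mean(axis=0) for g in groups])
    return np.median(averaged, axis=0)               # stand-in robust rule

# Hypothetical usage: 8 honest workers with heterogeneous gradients
# around 1.0, plus 2 Byzantine workers sending large arbitrary values.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=1.0, size=(8, 5))
byzantine = np.full((2, 5), 100.0)
grads = np.vstack([honest, byzantine])
print(resample_then_aggregate(grads, s=2, seed=0))
```

As long as a majority of the group averages remain uncontaminated, the coordinate-wise median stays close to the honest mean, which is the intuition behind combining resampling with existing robust rules.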

Archived Files and Locations

application/pdf  636.1 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-06-23
Version   v2
Language   en
arXiv  2006.09365v2
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)