Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters

by Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

Released as an article.

2021  

Abstract

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters, and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression using L_2 regularization, empirically show the limitations of previous strategies, and demonstrate the benefits of using L_2 regularization to dampen the effect of poisoning attacks.
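The minimax bilevel structure described in the abstract can be sketched as follows; the notation here is illustrative rather than taken from the paper itself. The attacker chooses poisoning points \mathcal{D}_p to maximize the loss on clean validation data, while the regularization hyperparameter \lambda is re-selected (rather than held constant) and the parameters \theta^{\star} are trained on the poisoned set:

\[
\max_{\mathcal{D}_p}\ \min_{\lambda}\ \mathcal{L}(\mathcal{D}_{\text{val}};\ \theta^{\star})
\quad \text{s.t.} \quad
\theta^{\star} \in \arg\min_{\theta}\ \mathcal{L}(\mathcal{D}_{\text{tr}} \cup \mathcal{D}_p;\ \theta) + \lambda \lVert \theta \rVert_2^2
\]

To make the practical difference concrete, the sketch below contrasts a hyperparameter tuned on clean data and then held fixed with one re-selected after poisoning, using plain label-flip poisoning and scikit-learn's L_2-regularized logistic regression (where C = 1/lambda). The data, poisoning rule, and hyperparameter grid are illustrative assumptions, not the paper's optimal attack or experimental setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Crude label-flip poisoning of 20% of the training labels
# (a simple stand-in for the paper's optimal poisoning points).
y_pois = y_tr.copy()
n_pois = int(0.2 * len(y_pois))
y_pois[:n_pois] = 1 - y_pois[:n_pois]

grid = [0.01, 0.1, 1.0, 10.0, 100.0]  # candidate values of C = 1/lambda

def val_acc(C, y_train):
    # Validation accuracy of an L2-regularized model with inverse strength C.
    return LogisticRegression(C=C).fit(X_tr, y_train).score(X_val, y_val)

# Hyperparameter tuned on clean data and then held constant under attack...
fixed_C = max(grid, key=lambda C: val_acc(C, y_tr))
# ...versus re-selected after the training set has been poisoned.
retuned_C = max(grid, key=lambda C: val_acc(C, y_pois))

print(f"fixed   C={fixed_C}: poisoned val acc = {val_acc(fixed_C, y_pois):.3f}")
print(f"retuned C={retuned_C}: poisoned val acc = {val_acc(retuned_C, y_pois):.3f}")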

Archived Files and Locations

application/pdf  1.1 MB
file_xp64nfgpanchtf47th6xd2d5z4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-05-23
Version   v1
Language   en
arXiv  2105.10948v1
Catalog Record
Revision: 9c6a9151-0423-48a8-acd3-3c5f78fd723f