Limitations of adversarial robustness: strong No Free Lunch Theorem

by Elvis Dohmatob

Released as an article.

2018  

Abstract

This manuscript presents new impossibility results on adversarial robustness in machine learning, an important yet largely open problem. We show that if, conditioned on a class label, the data distribution satisfies the W_2 Talagrand transportation-cost inequality (for example, this condition is satisfied if the conditional distribution has a log-concave density, or is the uniform measure on a compact Riemannian manifold with positive Ricci curvature), then any classifier can be adversarially fooled with high probability once the perturbations are slightly greater than the natural noise level in the problem. We call this result the Strong "No Free Lunch" Theorem, since some recent results on the subject (Tsipras et al. 2018, Fawzi et al. 2018, etc.) are immediately recovered as special cases. Our theoretical bounds are demonstrated on both simulated and real data (MNIST). We conclude the manuscript with some speculation on possible future research directions.
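The flavor of such concentration-based bounds can be illustrated in the simplest Gaussian special case. Assuming class-conditional data distributed as N(mu, sigma^2 I), the Gaussian isoperimetric inequality implies that if a classifier errs on a fraction eps of clean inputs, then an l2 perturbation of radius r fools it on at least Phi(Phi^-1(eps) + r/sigma) of inputs, where Phi is the standard normal CDF. The sketch below is only an illustrative special case of this kind of "blow-up" bound, not the paper's general theorem, and the function name is ours:

```python
from statistics import NormalDist

_PHI = NormalDist()  # standard normal distribution

def gaussian_blowup_bound(clean_error: float, radius: float, sigma: float = 1.0) -> float:
    """Lower bound on adversarial error for N(mu, sigma^2 I) class-conditional data.

    By the Gaussian isoperimetric inequality, the radius-r l2 enlargement of any
    set of Gaussian measure `clean_error` has measure at least
    Phi(Phi^{-1}(clean_error) + radius / sigma).
    Illustrative special case only; the paper's result covers distributions
    satisfying the W_2 Talagrand inequality.
    """
    return _PHI.cdf(_PHI.inv_cdf(clean_error) + radius / sigma)

# Example: a classifier with 1% clean error on unit-variance Gaussian data
# is fooled on a strictly larger fraction as the perturbation budget grows.
print(gaussian_blowup_bound(0.01, 0.5))
print(gaussian_blowup_bound(0.01, 1.0))
```

Note how the bound degrades quickly once the perturbation radius approaches the noise scale sigma, which is the qualitative content of the theorem.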

Archived Files and Locations

application/pdf  614.3 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2018-10-11
Version: v2
Language: en
arXiv: 1810.04065v2