Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems

by Stefan Tiegel

Released as an article.

2022  

Abstract

We show hardness of improperly learning halfspaces in the agnostic model based on worst-case lattice problems, e.g., approximating shortest vectors within polynomial factors. In particular, we show that under this assumption there is no efficient algorithm that outputs any binary hypothesis, not necessarily a halfspace, achieving misclassification error better than 1/2 - ϵ, even if the optimal misclassification error is as small as δ. Here, ϵ can be smaller than the inverse of any polynomial in the dimension, and δ as small as exp(-Ω(log^{1-c}(d))), where 0 < c < 1 is an arbitrary constant and d is the dimension. Previous hardness results [Daniely16] for this problem were based on average-case complexity assumptions, specifically, variants of Feige's random 3SAT hypothesis. Our work gives the first hardness result for this problem based on a worst-case complexity assumption. It is inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.
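To make the learning model in the abstract concrete, the sketch below (illustrative only; not code from the paper) sets up a halfspace hypothesis h_w(x) = sign(⟨w, x⟩), injects a small δ fraction of label noise as in the agnostic setting, and measures misclassification error. All names (w_true, delta, etc.) are assumptions for the demo.

```python
import numpy as np

# Agnostic model sketch: we see labeled examples (x, y) with y in {-1, +1};
# a halfspace hypothesis is h_w(x) = sign(<w, x>).
rng = np.random.default_rng(0)
d, n = 10, 1000

w_true = rng.standard_normal(d)      # target halfspace (assumed for the demo)
X = rng.standard_normal((n, d))      # feature vectors
y = np.sign(X @ w_true)              # clean labels consistent with the halfspace

# Agnostic noise: flip roughly a delta fraction of labels, so even the
# optimal halfspace errs on about a delta fraction of the sample.
delta = 0.05
flip = rng.random(n) < delta
y_noisy = np.where(flip, -y, y)

def misclassification_error(w, X, y):
    """Fraction of points where sign(<w, x>) disagrees with the label y."""
    return float(np.mean(np.sign(X @ w) != y))

# The optimal halfspace achieves error ~ delta here; the hardness result says
# that in the worst case no efficient (even improper) learner can guarantee
# error better than 1/2 - eps, despite delta being this small.
err_opt = misclassification_error(w_true, X, y_noisy)
```

This only illustrates the error measure; the paper's point is a computational lower bound, not an algorithm.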

Archived Files and Locations

application/pdf  229.5 kB
file_qx2kekfpofbgznql462gjl4yty
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-07-28
Version   v1
Language   en
arXiv  2207.14030v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 5093eaf8-cd43-4f67-b741-043338564919