In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors

by Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy

Released as an article.

2020  

Abstract

We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) predictor that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case where ĥ interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We also show that replacing ĥ by its conditional distribution with respect to an arbitrary σ-field is a convenient way to derandomize. We study two examples, inspired by the work of Nagarajan and Kolter (2019) and Bartlett et al. (2019), where the learned classifier ĥ interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of ĥ in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.
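To make the surrogate strategy described in the abstract concrete, the following display sketches the risk decomposition it implies; the notation (R for risk, R̂_n for empirical risk on n training samples, h̃ for the coupled surrogate) is introduced here for illustration and is not taken verbatim from the paper.

R(\hat h)
  \;=\; \hat R_n(\hat h)
  \;+\; \big[\hat R_n(\tilde h) - \hat R_n(\hat h)\big]
  \;+\; \big[R(\tilde h) - \hat R_n(\tilde h)\big]
  \;+\; \big[R(\hat h) - R(\tilde h)\big].

The identity is purely algebraic. When ĥ interpolates the training data, the first term vanishes; the middle bracket is the surrogate's generalization error, which uniform convergence over a nonrandom class containing h̃ can control; the remaining brackets measure how closely the surrogate is coupled to ĥ in empirical risk and in risk. Taking h̃ to be the conditional distribution of ĥ with respect to a σ-field, as the abstract suggests, is one way to obtain such a surrogate.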

Archived Files and Locations

application/pdf  682.7 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-02-27
Version: v2
Language: en
arXiv: 1912.04265v2