A unifying representer theorem for inverse problems and machine learning

by Michael Unser

Released as an article.

2020  

Abstract

The standard approach for dealing with the ill-posedness of the training problem in machine learning and/or the reconstruction of a signal from a limited number of measurements is regularization. The method is applicable whenever the problem is formulated as an optimization task. The standard strategy consists in augmenting the original cost functional by an energy that penalizes solutions with undesirable behavior. The effect of regularization is very well understood when the penalty involves a Hilbertian norm. Another popular configuration is the use of an ℓ_1-norm (or some variant thereof) that favors sparse solutions. In this paper, we propose a higher-level formulation of regularization within the context of Banach spaces. We present a general representer theorem that characterizes the solutions of a remarkably broad class of optimization problems. We then use our theorem to retrieve a number of known results in the literature—e.g., the celebrated representer theorem of machine learning for RKHS, Tikhonov regularization, representer theorems for sparsity promoting functionals, the recovery of spikes—as well as a few new ones.
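To make the setting concrete, the generic regularized problem alluded to in the abstract can be sketched as follows; the notation below is illustrative and is not taken verbatim from the paper:

    \min_{f \in \mathcal{X}} \; \sum_{m=1}^{M} E\bigl(y_m, \nu_m(f)\bigr) \;+\; \lambda\, R(f),

where the \nu_m are the measurement (or sampling) functionals, E is a data-fidelity term, \lambda > 0 is the regularization weight, and R is the regularization functional. In the classical Hilbertian case where \mathcal{X} is a reproducing-kernel Hilbert space with kernel k, the measurements are point samples \nu_m(f) = f(x_m), and R(f) = \|f\|_{\mathcal{H}}^2, the standard representer theorem of machine learning states that every minimizer admits the finite kernel expansion f(x) = \sum_{m=1}^{M} a_m\, k(x, x_m). Replacing the Hilbertian penalty with an \ell_1-type (or total-variation-type) norm instead favors solutions built from few active atoms, which is the sparsity-promoting regime the paper subsumes under its Banach-space formulation.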

Archived Files and Locations

application/pdf  284.9 kB
file_y36ltafn7bdatiop3arpafsbue
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-07-10
Version   v3
Language   en
arXiv  1903.00687v3
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: d6e2b22a-da1b-4e07-8956-76c5bea70806
API URL: JSON