A unifying representer theorem for inverse problems and machine learning
by Michael Unser (2020)
Abstract
The standard approach for dealing with the ill-posedness of the training
problem in machine learning or the reconstruction of a signal from a
limited number of measurements is regularization. The method is applicable
whenever the problem is formulated as an optimization task. The standard
strategy consists in augmenting the original cost functional with an energy
term that penalizes solutions with undesirable behavior. The effect of
regularization is very well understood when the penalty involves a Hilbertian
norm. Another popular configuration is the use of an ℓ_1-norm (or some
variant thereof) that favors sparse solutions. In this paper, we propose a
higher-level formulation of regularization within the context of Banach
spaces. We present a general representer theorem that characterizes the
solutions of a remarkably broad class of optimization problems. We then use
our theorem to retrieve a number of known results in the literature (e.g.,
the celebrated representer theorem of machine learning for RKHS, Tikhonov
regularization, representer theorems for sparsity-promoting functionals, and
the recovery of spikes) as well as a few new ones.
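
For concreteness, the generic regularized problem that the abstract alludes to
can be sketched as follows; the notation (data-fidelity term E, measurement
functionals ν_m, regularization weight λ, and increasing function ψ) is
illustrative and not taken verbatim from the paper:

    % Minimal sketch of a regularized reconstruction/training problem.
    % f lives in a Banach space X; y_1,...,y_M are the observed data.
    \min_{f \in \mathcal{X}} \;
        \sum_{m=1}^{M} E\bigl( y_m, \langle \nu_m, f \rangle \bigr)
        \;+\; \lambda \, \psi\bigl( \| f \|_{\mathcal{X}} \bigr)

Under this reading, Tikhonov regularization corresponds to taking X to be a
Hilbert space with ψ(t) = t², whereas choosing an ℓ_1-type norm for the
penalty yields the sparsity-promoting functionals mentioned above.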
Archived Files and Locations
application/pdf (284.9 kB): arxiv.org (repository), web.archive.org (webarchive); arXiv version 1903.00687v3