Sparse Multiple Kernel Learning: Support Identification via Mirror Stratifiability release_hqylib6razf3jemekmwlxlp25i

by Guillaume Garrigos, Lorenzo Rosasco, Silvia Villa

Abstract

In statistical machine learning, kernel methods allow one to consider infinite-dimensional feature spaces with a computational cost that depends only on the number of observations. This is usually done by solving an optimization problem combining a data-fit term and a suitable regularizer. In this paper we consider feature maps which are the concatenation of a fixed, possibly large, set of simpler feature maps. The penalty is a sparsity-inducing one, promoting solutions that depend only on a small subset of the features. The group lasso problem is a special case of this more general setting. We show that one of the most popular optimization algorithms for the regularized objective, the forward-backward splitting method, performs feature selection in a stable manner. In particular, we prove that the set of relevant features is identified by the algorithm after a finite number of iterations if a suitable qualification condition holds. The main tools used in the proofs are the notions of stratification and mirror stratifiability.
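The forward-backward method discussed in the abstract alternates a gradient step on the data-fit term with a proximal step on the sparsity-inducing penalty. The sketch below is not the paper's kernel-space setup; it is a minimal finite-dimensional group-lasso instance (least-squares fit plus a sum of group norms), illustrating how the block soft-thresholding prox produces exact zeros on irrelevant groups, so the active set is identified after finitely many iterations. All names and the synthetic data are illustrative.

```python
import numpy as np

def prox_group_l1(w, groups, thresh):
    """Block soft-thresholding: proximal operator of sum_g ||w_g||_2.

    Groups whose norm falls below the threshold are set exactly to zero,
    which is what makes finite-time support identification possible.
    """
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= thresh else (1.0 - thresh / norm) * w[g]
    return out

def forward_backward(X, y, groups, lam, n_iter=2000):
    """Forward-backward splitting for 0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1/L, L = Lipschitz const of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                # forward (gradient) step on the data fit
        w = prox_group_l1(w - step * grad, groups, step * lam)  # backward (prox) step
    return w
```

Running this on noiseless data supported on a single group, the iterates typically zero out the inactive groups exactly after a moderate number of iterations, in line with the identification result the paper proves under a qualification condition.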

Released as an article
Version v1
Release Date 2018-03-02
Primary Language en

Known Files and URLs

application/pdf  380.0 kB
sha1:958e70ed5b797626e476...
web.archive.org (webarchive)
iris.unige.it (web)
Stage   submitted
Fatcat Bits

State is "active". Revision: 3e63a1d5-fdcd-42b0-906a-6ad5fbcf3fa9