Kernel Robust Bias-Aware Prediction under Covariate Shift
release_g5xhvjrh4bajfbxzneqg7id5rm

by Anqi Liu, Rizal Fathony, Brian D. Ziebart

Released as an article.

2017  

Abstract

Under covariate shift, training (source) data and testing (target) data differ in input-space distribution but share the same conditional label distribution. This poses a challenging machine learning task. Robust Bias-Aware (RBA) prediction provides the conditional label distribution that is robust to the worst-case logarithmic loss for the target distribution while matching feature expectation constraints from the source distribution. However, employing RBA with insufficient feature constraints may result in high-certainty predictions for much of the source data while leaving too much uncertainty for target data predictions. To overcome this issue, we extend the representer theorem to the RBA setting, enabling minimization of regularized expected target risk via a reweighted kernel expectation under the source distribution. By applying kernel methods, we establish consistency guarantees and demonstrate that the RBA classifier outperforms competing methods on synthetically biased UCI datasets as well as datasets with natural covariate shift.
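To make the behavior described in the abstract concrete, below is a minimal Python sketch (not the authors' code; all names are hypothetical) of the parametric RBA predictor from the earlier Liu and Ziebart work that this paper kernelizes. The source-to-target density ratio scales the label potentials, so predictions become near-uniform (maximally uncertain) where source support is thin relative to the target distribution, which is exactly the uncertainty pattern the abstract discusses.

    # Sketch of the RBA conditional label distribution (hypothetical names;
    # illustrative only, assuming the parametric RBA form this paper builds on).
    import numpy as np

    def rba_conditional(theta, phi_xy, density_ratio):
        """RBA conditional label distribution P(y|x) for a single input x.

        theta         : (d,) learned parameter vector
        phi_xy        : (num_labels, d) feature vector phi(x, y) for each label y
        density_ratio : scalar P_src(x) / P_tgt(x)

        P(y|x) is proportional to exp(density_ratio * theta . phi(x, y)): where
        the source density is low relative to the target (small ratio), the
        potentials shrink and the prediction approaches uniform.
        """
        potentials = density_ratio * (phi_xy @ theta)
        potentials -= potentials.max()          # stabilize the softmax
        p = np.exp(potentials)
        return p / p.sum()

    # Toy illustration: same features, two different density ratios.
    theta = np.array([1.5, -0.5])
    phi_xy = np.array([[1.0, 0.0],   # features for label 0
                       [0.0, 1.0]])  # features for label 1

    print(rba_conditional(theta, phi_xy, density_ratio=2.0))   # confident
    print(rba_conditional(theta, phi_xy, density_ratio=0.05))  # near-uniform

The paper's kernel extension replaces the explicit feature map phi with kernel evaluations via a representer-theorem argument, so the same reweighting idea applies without hand-specified feature constraints.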

Archived Files and Locations

application/pdf  1.2 MB
file_gv2i273xjza2fk4nrebdc2f5pa
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2017-12-28
Version   v1
Language   en
arXiv  1712.10050v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: dc5a3e42-9b41-4518-b597-4560e5cbe20d
API URL: JSON