Kernel Robust Bias-Aware Prediction under Covariate Shift
by Anqi Liu, Rizal Fathony, Brian D. Ziebart (2017)
Abstract
Under covariate shift, training (source) data and testing (target) data
differ in input space distribution, but share the same conditional label
distribution. This poses a challenging machine learning task. Robust Bias-Aware
(RBA) prediction provides the conditional label distribution that is robust to
the worst-case logarithmic loss for the target distribution while matching
feature expectation constraints from the source distribution. However,
employing RBA with insufficient feature constraints may result in high
certainty predictions for much of the source data, while leaving too much
uncertainty for target data predictions. To overcome this issue, we extend the
representer theorem to the RBA setting, enabling minimization of regularized
expected target risk by a reweighted kernel expectation under the source
distribution. By applying kernel methods, we establish consistency guarantees
and demonstrate better performance of the RBA classifier than competing methods
on synthetically biased UCI datasets as well as datasets that have natural
covariate shift.
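The abstract describes the RBA predictor's key property: its conditional label distribution scales the learned linear potentials by the source-to-target density ratio, so predictions become near-uniform (maximally uncertain) where source data is scarce relative to target data. A minimal sketch of that prediction rule, assuming parameters θ have already been learned and the density ratio at the input is known (both the function name and the array shapes here are illustrative, not from the paper):

```python
import numpy as np

def rba_predict(theta, features, density_ratio):
    """Sketch of a Robust Bias-Aware conditional label distribution at one input x.

    theta:         (d,) learned parameter vector (assumed given)
    features:      (num_labels, d) feature vectors f(x, y), one row per label y
    density_ratio: scalar P_src(x) / P_trg(x) at this x (assumed given)

    The RBA form scales each linear potential by the density ratio, so as the
    ratio tends to zero (target-dense, source-sparse regions) the prediction
    approaches the uniform distribution over labels.
    """
    logits = density_ratio * (features @ theta)
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

For instance, with a density ratio of 0 the sketch returns the uniform distribution regardless of θ, matching the intended behavior of reserving uncertainty for regions unsupported by source data; large ratios sharpen the prediction toward the label with the highest potential.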
Archived Files and Locations
application/pdf, 1.2 MB — arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1712.10050v1