Bandits with concave rewards and convex knapsacks
by
Shipra Agrawal, Nikhil R. Devanur
2014
Abstract
In this paper, we consider a very general model for exploration-exploitation
tradeoff which allows arbitrary concave rewards and convex constraints on the
decisions across time, in addition to the customary limitation on the time
horizon. This model subsumes the classic multi-armed bandit (MAB) model, and
the Bandits with Knapsacks (BwK) model of Badanidiyuru et al. [2013]. We also
consider an extension of this model to allow linear contexts, similar to the
linear contextual extension of the MAB model. We demonstrate that a natural and
simple extension of the UCB family of algorithms for MAB provides a polynomial
time algorithm that has near-optimal regret guarantees for this substantially
more general model, and matches the bounds provided by Badanidiyuru et
al. [2013] for the special case of BwK, which is quite surprising. We also
provide computationally more efficient algorithms by establishing interesting
connections between this problem and other well studied problems/algorithms
such as the Blackwell approachability problem, online convex optimization, and
the Frank-Wolfe technique for convex optimization. We give examples of several
concrete applications, where this more general model of bandits allows for
richer and/or more efficient formulations of the problem.
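The "UCB on a reward-to-cost ratio" idea behind optimism-based algorithms for budgeted bandits can be illustrated with a toy sketch. This is not the paper's algorithm (which handles general concave rewards and convex constraints); it is a minimal, assumption-laden example for a single knapsack constraint with Bernoulli rewards and costs, where the policy plays the arm maximizing an upper confidence bound on reward divided by a lower confidence bound on cost:

```python
import math
import random

def ucb_bwk(arms, budget, horizon, seed=0):
    """Illustrative UCB-style policy for a bandit with one knapsack
    constraint: play the arm maximizing an optimistic reward-to-cost
    ratio until the budget or the time horizon runs out.

    `arms` is a list of (reward_prob, cost_prob) pairs for Bernoulli
    rewards and unit costs -- a toy setup, not the paper's model.
    """
    rng = random.Random(seed)
    n = [0] * len(arms)          # pull counts
    r_sum = [0.0] * len(arms)    # cumulative observed rewards
    c_sum = [0.0] * len(arms)    # cumulative observed costs
    total_reward, spent = 0.0, 0.0
    for t in range(1, horizon + 1):
        if spent >= budget:
            break
        if t <= len(arms):
            # Initialization: play each arm once.
            i = t - 1
        else:
            def score(i):
                bonus = math.sqrt(2 * math.log(t) / n[i])
                ucb_r = min(1.0, r_sum[i] / n[i] + bonus)   # optimistic reward
                lcb_c = max(1e-6, c_sum[i] / n[i] - bonus)  # optimistic (low) cost
                return ucb_r / lcb_c
            i = max(range(len(arms)), key=score)
        r = 1.0 if rng.random() < arms[i][0] else 0.0
        c = 1.0 if rng.random() < arms[i][1] else 0.0
        n[i] += 1
        r_sum[i] += r
        c_sum[i] += c
        total_reward += r
        spent += c
    return total_reward, spent
```

Since the budget check happens before each pull and each cost is at most 1, total spend can exceed the budget by at most one unit; the confidence-bound construction and ratio rule are the standard optimism ingredients the abstract refers to, specialized to this simplest case.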
arXiv: 1402.5758v1