Asymptotically Optimal Algorithms for Budgeted Multiple Play Bandits

by Alexander Luedtke, Emilie Kaufmann, Antoine Chambaz (MAP5 - UMR 8145)

Released as an article.

2019  

Abstract

We study a generalization of the multi-armed bandit problem with multiple plays in which pulling each arm incurs a cost and the agent has a per-round budget constraining how much she can expect to spend. We derive an asymptotic regret lower bound for any uniformly efficient algorithm in this setting. We then study a variant of Thompson sampling for Bernoulli rewards and a variant of KL-UCB for both single-parameter exponential families and bounded, finitely supported rewards. We show that these algorithms are asymptotically optimal, both in rate and in the leading problem-dependent constants, including in the thick-margin setting where multiple arms fall on the decision boundary.
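As a rough illustration of the setting described in the abstract, the following is a minimal Thompson-sampling sketch for the Bernoulli case. The problem instance (`means`, `costs`, `budget`) and the greedy knapsack-style selection by sampled mean-to-cost ratio are illustrative assumptions, not the paper's exact algorithm; in particular, the paper's budget constrains expected spend, which may involve probabilistic play of the arm on the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance (not from the paper): Bernoulli means,
# per-pull costs, and a per-round budget on spend.
means = np.array([0.9, 0.8, 0.7, 0.5, 0.3])
costs = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
budget = 3.0
K = len(means)

successes = np.zeros(K)
failures = np.zeros(K)

T = 10_000
for t in range(T):
    # Thompson sampling: one draw per arm from its Beta posterior
    # (uniform Beta(1, 1) priors assumed here).
    theta = rng.beta(successes + 1, failures + 1)

    # Greedy knapsack-style selection by sampled mean-to-cost ratio;
    # an assumption for illustration, not the paper's selection rule.
    order = np.argsort(-theta / costs)
    spend, plays = 0.0, []
    for k in order:
        if spend + costs[k] <= budget:
            plays.append(k)
            spend += costs[k]

    # Observe Bernoulli rewards for the played arms; update posteriors.
    for k in plays:
        reward = rng.random() < means[k]
        successes[k] += reward
        failures[k] += 1 - reward
```

A KL-UCB variant of the same loop would replace the posterior draw `theta` with an upper confidence bound on each arm's mean before applying the same budget-constrained selection.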

Archived Files and Locations

application/pdf  800.0 kB
file_hpxfzsiovfg2pcf5rgdrioqct4
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2019-09-12
Version: v3
Language: en
arXiv: 1606.09388v3
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 7858689d-605f-48e5-847e-f5e993786072