Pith · machine review for the scientific record

arXiv: 1110.6084 · v3 · submitted 2011-10-27 · 🧮 math.ST · cs.LG · stat.ML · stat.TH

Recognition: unknown

The multi-armed bandit problem with covariates

Authors on Pith: no claims yet
classification: 🧮 math.ST · cs.LG · stat.ML · stat.TH
keywords: problem · bandit · policy · multi-armed · static · abse · adaptively · bounds
Original abstract

We consider a multi-armed bandit problem in a setting where each arm produces a noisy reward realization which depends on an observable random covariate. As opposed to the traditional static multi-armed bandit problem, this setting allows for dynamically changing rewards that better describe applications where side information is available. We adopt a nonparametric model where the expected rewards are smooth functions of the covariate and where the hardness of the problem is captured by a margin parameter. To maximize the expected cumulative reward, we introduce a policy called Adaptively Binned Successive Elimination (abse) that adaptively decomposes the global problem into suitably "localized" static bandit problems. This policy constructs an adaptive partition using a variant of the Successive Elimination (se) policy. Our results include sharper regret bounds for the se policy in a static bandit problem and minimax optimal regret bounds for the abse policy in the dynamic problem.
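The abstract's core building block, the Successive Elimination (SE) policy, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the confidence width and constants below are simplified illustrative choices, and `pull` is a hypothetical reward oracle returning noisy rewards in [0, 1].

```python
import math
import random

def successive_elimination(pull, n_arms, horizon, seed=0):
    """Illustrative Successive Elimination (SE) sketch.

    `pull(arm)` returns a noisy reward in [0, 1]. The confidence
    width sqrt(2 log(T) / n) is a simplified choice, not the
    paper's exact constant.
    """
    random.seed(seed)
    active = list(range(n_arms))          # arms not yet eliminated
    means = [0.0] * n_arms                # running empirical means
    counts = [0] * n_arms                 # number of pulls per arm
    t = 0
    while t < horizon and len(active) > 1:
        # Sample every surviving arm once per round.
        for arm in list(active):
            r = pull(arm)
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]
            t += 1
            if t >= horizon:
                break
        width = lambda n: math.sqrt(2 * math.log(horizon) / n)
        # Eliminate any arm whose upper confidence bound falls
        # below the best lower confidence bound among active arms.
        best_lcb = max(means[a] - width(counts[a]) for a in active)
        active = [a for a in active
                  if means[a] + width(counts[a]) >= best_lcb]
    return active
```

ABSE, as described in the abstract, would run such localized eliminations on an adaptively refined partition of the covariate space rather than on a single global problem.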

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith and the papers it reviews without signing in.