pith. machine review for the scientific record.

arxiv: 1205.4217 · v2 · submitted 2012-05-18 · 📊 stat.ML · cs.LG

Recognition: unknown

Thompson Sampling: An Asymptotically Optimal Finite Time Analysis

Authors on Pith: no claims yet
classification: stat.ML, cs.LG
keywords: analysis, been, bernoulli, case, optimal, sampling, thompson, accompanied
Original abstract

The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem had been open since 1933. In this paper we answer it positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound for the cumulative regret. The proof is accompanied by a numerical comparison with other optimal policies, experiments that have been lacking in the literature until now for the Bernoulli case.
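The abstract describes a finite-time regret analysis of Thompson Sampling for Bernoulli bandits. For readers unfamiliar with the algorithm, the following is a minimal sketch of Bernoulli Thompson Sampling with uniform Beta(1, 1) priors; the arm means, horizon, and seed are illustrative choices, not values from the paper.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Bernoulli Thompson Sampling with Beta(1, 1) priors on each arm.

    Returns the cumulative (pseudo-)regret against the best arm.
    `true_means`, `horizon`, and `seed` are illustrative parameters.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [0] * k  # observed 1-rewards per arm (posterior alpha - 1)
    failures = [0] * k   # observed 0-rewards per arm (posterior beta - 1)
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        # Draw one sample from each arm's Beta posterior and play the argmax.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        arm = samples.index(max(samples))
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        regret += best - true_means[arm]
    return regret
```

The paper's result is that, for Bernoulli rewards, the expected regret of this procedure matches the Lai and Robbins lower bound asymptotically, i.e. it grows logarithmically in the horizon with the optimal problem-dependent constant.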

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Spectral bandits

    stat.ML 2026-04 unverdicted novelty 7.0

    Spectral bandits achieve scalable regret in graph-structured recommendation by using an effective dimension to learn good policies from few node evaluations.

  2. Budgeted Online Influence Maximization

    cs.LG 2026-04 unverdicted novelty 7.0

    A new algorithm for online influence maximization under a total budget constraint using the independent cascade model and edge-level semi-bandit feedback, with improved regret bounds for both budgeted and cardinality ...

  3. Covariance-adapting algorithm for semi-bandits with application to sparse rewards

    stat.ML 2026-04 unverdicted novelty 7.0

    A covariance-adapting algorithm for semi-bandits achieves asymptotically tight regret bounds under a new sub-exponential distribution family, with direct application to sparse rewards.