Pith · machine review for the scientific record

arxiv: 1102.2670 · v1 · submitted 2011-02-14 · 💻 cs.AI

Recognition: unknown

Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems

Authors on Pith: no claims yet
classification 💻 cs.AI
keywords: bound, problems, confidence, bandits, least, linear, online, sets
read the original abstract

The analysis of online least squares estimation is at the heart of many stochastic sequential decision-making problems. We employ tools from the theory of self-normalized processes to provide a simple and self-contained proof of a tail bound for a vector-valued martingale. We use the bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems, such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that, with high probability, its regret is bounded by a problem-dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem-dependent bound in its dependence on the dimension and the number of time steps. Furthermore, as opposed to the previous result, we prove that our bound holds for small sample sizes; at the same time, the worst-case bound is improved by a logarithmic factor and the constant is improved.
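To make the abstract's central object concrete, here is a minimal numerical sketch of a self-normalized confidence set for regularized (ridge) least squares, in the style described above: after observing feature/reward pairs, the estimate θ̂ lies within a data-dependent ellipsoid around the true parameter with high probability. The specific constants, the noise level `sigma`, the norm bound `S`, and the radius formula used here are illustrative assumptions in the spirit of the paper, not a verbatim reproduction of its theorem statement.

```python
import numpy as np

rng = np.random.default_rng(0)

d, T = 3, 500          # dimension, number of observations
lam = 1.0              # ridge regularization lambda
sigma = 0.1            # assumed sub-Gaussian noise level
delta = 0.05           # confidence parameter
S = 1.0                # assumed bound on ||theta*||

theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)   # ensure ||theta*|| <= S

V = lam * np.eye(d)    # regularized Gram matrix V_t = lam*I + sum_s x_s x_s^T
b = np.zeros(d)        # sum_s x_s y_s
for _ in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                 # features on the unit sphere
    y = x @ theta_star + sigma * rng.normal()
    V += np.outer(x, x)
    b += x * y

theta_hat = np.linalg.solve(V, b)          # regularized least squares estimate

# Self-normalized confidence radius (illustrative form):
# beta = sigma * sqrt(2 * log(det(V)^{1/2} * det(lam*I)^{-1/2} / delta)) + sqrt(lam) * S
log_det_ratio = 0.5 * (np.linalg.slogdet(V)[1] - d * np.log(lam))
beta = sigma * np.sqrt(2.0 * (log_det_ratio + np.log(1.0 / delta))) + np.sqrt(lam) * S

# The weighted estimation error ||theta_hat - theta*||_V should fall below beta
# with probability at least 1 - delta.
err = theta_hat - theta_star
weighted_err = float(np.sqrt(err @ V @ err))
```

In a linear-bandit application, the same ellipsoid drives optimistic action selection: an arm x is scored by x·θ̂ plus beta times the self-normalized width sqrt(x·V⁻¹x).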

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. CLT-Optimal Parameter Error Bounds for Linear System Identification

    stat.ML 2026-04 unverdicted novelty 8.0

    Sharper finite-sample OLS parameter error bounds for linear dynamical systems are obtained via a novel second-order martingale decomposition that achieves instance-optimal CLT rates up to constants in Frobenius norm.

  2. Data-dependent Exploration for Online Reinforcement Learning from Human Feedback

    cs.LG 2026-05 unverdicted novelty 6.0

    DEPO uses historical data to build a data-dependent uncertainty bonus for exploration in online RLHF, yielding an adaptive regret bound and stronger empirical performance than baselines.