Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems
Abstract
The analysis of online least squares estimation is at the heart of many stochastic sequential decision-making problems. We employ tools from the theory of self-normalized processes to provide a simple and self-contained proof of a tail bound for a vector-valued martingale. We use this bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that, with high probability, its regret is bounded by a problem-dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem-dependent bound in its dependence on the dimension and the number of time steps. Furthermore, as opposed to the previous result, we prove that our bound holds for small sample sizes; at the same time, the worst-case bound is improved by a logarithmic factor and the constant is improved.
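The confidence-set construction the abstract describes can be illustrated with a minimal numerical sketch. The radius formula below follows the usual self-normalized form (an ellipsoid around the regularized least squares estimate, measured in the design-matrix norm); the function name `confidence_radius` and all constants are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def confidence_radius(V, lam, d, delta, sigma=1.0, S=1.0):
    """Illustrative self-normalized confidence radius beta_t.

    V     : regularized Gram matrix lam*I + sum_s x_s x_s^T
    lam   : ridge regularizer, sigma: noise scale, S: bound on ||theta*||.
    """
    # log(det(V) / det(lam*I)), computed stably via slogdet
    log_det_ratio = np.linalg.slogdet(V)[1] - d * np.log(lam)
    return sigma * np.sqrt(2 * np.log(1.0 / delta) + log_det_ratio) + np.sqrt(lam) * S

rng = np.random.default_rng(0)
d, lam, delta = 3, 1.0, 0.05
theta_star = np.array([0.6, -0.3, 0.5])   # unknown parameter (for simulation)

V = lam * np.eye(d)   # regularized Gram matrix
b = np.zeros(d)       # running sum of y_s * x_s

for t in range(500):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                # bounded features, ||x|| <= 1
    y = x @ theta_star + 0.1 * rng.normal()
    V += np.outer(x, x)
    b += y * x

# Regularized least squares estimate and ellipsoid radius
theta_hat = np.linalg.solve(V, b)
beta = confidence_radius(V, lam, d, delta, sigma=0.1)

# The guarantee: ||theta_hat - theta*||_V <= beta, uniformly over t,
# with probability at least 1 - delta.
diff = theta_hat - theta_star
weighted_err = float(np.sqrt(diff @ V @ diff))
print(weighted_err <= beta)
```

In a linear bandit, this ellipsoid drives optimistic action selection: each arm is scored by its estimated reward plus `beta` times its uncertainty in the `V`-inverse norm, which is the mechanism behind the UCB-style algorithms the abstract analyzes.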
Forward citations
Cited by 2 Pith papers
- CLT-Optimal Parameter Error Bounds for Linear System Identification: Sharper finite-sample OLS parameter error bounds for linear dynamical systems are obtained via a novel second-order martingale decomposition that achieves instance-optimal CLT rates up to constants in Frobenius norm.
- Data-dependent Exploration for Online Reinforcement Learning from Human Feedback: DEPO uses historical data to build a data-dependent uncertainty bonus for exploration in online RLHF, yielding an adaptive regret bound and stronger empirical performance than baselines.