Recognition: 2 theorem links · Lean Theorem
The Sample Complexity of Multiple Change Point Identification under Bandit Feedback
Pith reviewed 2026-05-14 17:44 UTC · model grok-4.3
The pith
Sample complexity for locating multiple change points under bandit feedback is governed jointly by jump sizes and their relative positions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
For general δ and η, the sample complexity is jointly governed by the jumps and the relative positions of the change points.
What carries the argument
Two-phase adaptive algorithm that first detects intervals containing the change points and then refines each location to precision η, with non-asymptotic upper and lower bounds that explicitly incorporate both jump magnitudes and inter-change-point distances.
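The detect-then-refine structure can be sketched in code. This is a minimal illustration of the two-phase idea, not the paper's algorithm: the grid size, the per-point budget n0, the noise level, and the decision threshold are all our own assumptions.

```python
import random

def noisy_query(f, x, sigma=0.1):
    """One bandit query: a noisy evaluation of f at x."""
    return f(x) + random.gauss(0.0, sigma)

def mean_query(f, x, n, sigma=0.1):
    """Average n queries at x to shrink the noise by sqrt(n)."""
    return sum(noisy_query(f, x, sigma) for _ in range(n)) / n

def detect_then_refine(f, k, eta, grid=64, n0=200, sigma=0.1):
    """Phase 1: scan a uniform grid on [0, 1] and keep the k adjacent
    cells with the largest empirical jumps.  Phase 2: bisect each
    candidate cell down to width eta, deciding each split by comparing
    the mean at the midpoint against the cell's left endpoint."""
    xs = [i / grid for i in range(grid + 1)]
    means = [mean_query(f, x, n0, sigma) for x in xs]
    ranked = sorted(range(grid), key=lambda i: -abs(means[i + 1] - means[i]))
    locations = []
    for i in sorted(ranked[:k]):
        lo, hi = xs[i], xs[i + 1]
        base = mean_query(f, lo, n0, sigma)
        while hi - lo > eta:
            mid = (lo + hi) / 2
            # Threshold 0.5 * sigma is an assumption: far above the noise
            # of the averaged means, far below the unit jumps used here.
            if abs(mean_query(f, mid, n0, sigma) - base) > 0.5 * sigma:
                hi = mid  # jump lies left of mid
            else:
                lo = mid  # jump lies right of mid
        locations.append((lo + hi) / 2)
    return locations
```

With a function carrying unit jumps at 0.3 and 0.7 and eta = 0.01, the returned pair lands within eta of the true change points; the refinement phase is where the separation between change points starts to matter, since candidate cells must not collide.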
If this is right
- Upper bounds on total queries grow as either the smallest jump size or the smallest separation between change points shrinks.
- Matching lower bounds confirm that ignoring the relative positions cannot yield a tight guarantee outside the asymptotic regime.
- The algorithm meets the target precision η and failure probability δ using a budget that reflects the joint dependence.
- As δ approaches zero the position dependence weakens and the earlier jump-only asymptotics are recovered.
Where Pith is reading between the lines
- When change points lie close together, the refinement phase must allocate extra queries to separate them even if the jumps are large.
- The derived bounds suggest concrete stopping rules for sequential sampling strategies in streaming data settings.
- Relaxing the known-number assumption would require an additional model-selection phase whose cost could be bounded using the same joint dependence.
Load-bearing premise
The number of change points is known in advance and the function is exactly piecewise constant with sharp jumps of fixed but unknown magnitudes.
What would settle it
An empirical or theoretical demonstration that, for fixed jump magnitudes, δ, and η, decreasing the minimal distance between two change points strictly increases the minimal number of samples required to meet the detection guarantee.
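A crude numerical version of that experiment, under our own illustrative assumption that the detection scan must use a grid finer than half the minimal spacing, shows the claimed monotonicity mechanically:

```python
import math

def detection_budget(min_spacing, jump, delta):
    """Illustrative query count for a uniform detection scan: the grid
    cell width must be at most half the minimal spacing (else two change
    points can share a cell), and each grid point needs roughly
    (2 / jump^2) * log(1 / delta) queries to resolve a jump against
    unit-proxy sub-Gaussian noise.  Parameters are assumptions, not the
    paper's constants."""
    grid = 2 ** math.ceil(math.log2(2.0 / min_spacing))
    per_point = math.ceil(2.0 / jump ** 2 * math.log(1.0 / delta))
    return (grid + 1) * per_point

# Jumps, delta, and eta held fixed; only the minimal spacing shrinks.
budgets = [detection_budget(d, jump=1.0, delta=0.01) for d in (0.2, 0.1, 0.05)]
```

Under these assumptions the budgets come out strictly increasing (170, 330, 650 for spacings 0.2, 0.1, 0.05), which is exactly the shape of dependence the proposed demonstration would need to establish rigorously.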
Original abstract
We study multiple change point localization under bandit feedback. An unknown piecewise-constant function on a compact interval can be queried sequentially at adaptively chosen inputs, and each query returns a noisy evaluation of the function. The goal is to identify a prescribed number of discontinuities, known as change points, within a target precision $\eta$ and confidence level $1-\delta$, while using as few samples as possible. We propose an adaptive algorithm that first detects intervals likely to contain change points and then refines their locations to precision $\eta$. We establish non-asymptotic upper bounds on its sample budget, together with corresponding lower bounds. Prior work shows that jump magnitudes alone determine the asymptotic sample complexity as $\delta\to 0$. We reveal that this picture is incomplete beyond this regime. We demonstrate, both empirically and theoretically, that for general $\delta$ and $\eta$, the complexity is jointly governed by the jumps and the relative positions of the change points.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper studies multiple change point localization under bandit feedback for an unknown piecewise-constant function on a compact interval. Queries return noisy evaluations at adaptively chosen points. The goal is to identify a known number K of change points to precision η with probability 1-δ using minimal samples. A two-phase adaptive algorithm is proposed: a detection phase that isolates intervals likely containing the change points, followed by a refinement phase that localizes each to precision η. Non-asymptotic upper and lower bounds are derived on the sample complexity. The central claim is that, for general finite δ and η, this complexity is jointly determined by the jump magnitudes and the relative positions of the change points, in contrast to the asymptotic regime δ→0 where only jumps matter.
Significance. If the matching bounds hold, the result supplies a more precise finite-sample characterization of bandit change-point identification than prior asymptotic analyses. The explicit dependence on positions is a substantive addition with implications for algorithm design when δ and η are not vanishingly small. The combination of non-asymptotic analysis, matching lower bounds, and empirical validation constitutes a solid contribution to the literature on sequential change-point detection.
major comments (2)
- [§4, Theorem 4.1] §4 (Algorithm description) and Theorem 4.1: the upper bound derivation for the detection phase assumes that the minimal separation between change points is known or lower-bounded; the stated position dependence for general η appears to require an explicit quantitative relation between this separation and the sample allocation in the refinement phase, which is not fully spelled out.
- [§5] §5 (Lower bounds): the information-theoretic construction for the position-dependent lower bound is presented for the known-K, exactly piecewise-constant case with sharp jumps. Because the central claim concerns general δ, η, the proof should contain an explicit reduction showing how an adversary can force additional samples solely by repositioning the change points while keeping jump sizes fixed; without this step the lower bound risks being loose for the claimed regime.
minor comments (2)
- [§2] Notation for the minimal jump size Δ and the position vector should be introduced once in §2 and used consistently; several later equations reuse Δ without redefinition.
- [Figure 2] Figure 2 (empirical results): the x-axis scaling for different position configurations is not labeled uniformly; add a caption note clarifying whether the plotted sample counts are normalized by the jump-only baseline.
Simulated Author's Rebuttal
We thank the referee for the careful reading and constructive feedback. The comments highlight opportunities to strengthen the presentation of the non-asymptotic bounds. We address each major comment below and will incorporate clarifications and proof details in the revision.
Point-by-point responses
Referee: [§4, Theorem 4.1] §4 (Algorithm description) and Theorem 4.1: the upper bound derivation for the detection phase assumes that the minimal separation between change points is known or lower-bounded; the stated position dependence for general η appears to require an explicit quantitative relation between this separation and the sample allocation in the refinement phase, which is not fully spelled out.
Authors: The detection phase (Algorithm 1) is fully adaptive and does not require the minimal separation to be known in advance; sample allocation is driven by successive hypothesis tests on empirical means without an explicit Δ_min input. However, the proof of Theorem 4.1 does invoke a lower bound on separation (implicitly Δ > 2η to guarantee non-overlapping refinement intervals) when bounding the number of samples needed to isolate candidate intervals. We will revise the theorem statement to make this quantitative relation explicit, add a short lemma relating the per-interval sample budget in the refinement phase to the minimal separation, and update the proof sketch in §4 to display the dependence on relative positions for general fixed η. This will not change the algorithm or the bound form. revision: yes
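The separation condition invoked here (Δ > 2η so that refinement intervals do not overlap) is simple enough to state as a check. This is an illustration of the condition, not code from the paper:

```python
def refinement_intervals_disjoint(change_points, eta):
    """Separation condition from the proof sketch: the radius-eta
    refinement intervals around the sorted change points are pairwise
    disjoint exactly when every consecutive gap exceeds 2 * eta."""
    pts = sorted(change_points)
    return all(b - a > 2 * eta for a, b in zip(pts, pts[1:]))
```

With eta = 0.05, points at 0.3 and 0.7 pass (gap 0.4 > 0.1) while points at 0.3 and 0.38 fail (gap 0.08), and the failing case is precisely where the refinement phase would have to disambiguate overlapping intervals.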
Referee: [§5] §5 (Lower bounds): the information-theoretic construction for the position-dependent lower bound is presented for the known-K, exactly piecewise-constant case with sharp jumps. Because the central claim concerns general δ, η, the proof should contain an explicit reduction showing how an adversary can force additional samples solely by repositioning the change points while keeping jump sizes fixed; without this step the lower bound risks being loose for the claimed regime.
Authors: We agree that an explicit reduction step would make the position dependence fully rigorous for arbitrary fixed δ and η. The current construction already encodes position dependence through the KL divergence terms that arise when change-point locations are closer (causing signal overlap in the detection phase), but the adversary argument is only sketched. We will add a short lemma in §5 that constructs two instances differing only in change-point spacing (jumps fixed) and shows that any algorithm must incur an extra Ω(log(1/δ)/η) samples in the detection phase when spacing drops below a threshold determined by η; this reduction will be stated for general δ, η without altering the known-K piecewise-constant setting. revision: yes
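The proposed two-instance reduction can be illustrated in toy form: two piecewise-constant instances with identical unit jumps, differing only in where the second change point sits. With Gaussian queries, the per-query information is nonzero only on the sliver between the two candidate locations, so samples must land there. All parameters below are our own, chosen for illustration:

```python
def step(jumps_at):
    """Piecewise-constant instance: one unit jump at each listed point."""
    return lambda x: sum(1 for c in jumps_at if x >= c)

def kl_per_query(f0, f1, x, sigma=0.1):
    """KL divergence of a single Gaussian query N(f0(x), sigma^2)
    against N(f1(x), sigma^2): nonzero only where the instances differ."""
    return (f0(x) - f1(x)) ** 2 / (2 * sigma ** 2)

# Identical jump sizes; only the position of the second change point differs.
f0 = step([0.30, 0.50])
f1 = step([0.30, 0.45])
# Queries outside [0.45, 0.50) carry zero information about which instance
# is active, so any algorithm must spend samples inside this sliver.
informative = [x / 100 for x in range(100) if kl_per_query(f0, f1, x / 100) > 0]
```

As the spacing between the candidate positions shrinks, the informative region shrinks with it, which is the mechanism by which the adversary forces extra samples in the detection phase.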
Circularity Check
No significant circularity: the bounds are derived from a first-principles analysis of the detection and refinement phases.
Full rationale
The paper establishes non-asymptotic upper and lower bounds on sample complexity via a two-phase adaptive algorithm (interval detection followed by refinement to precision η) under the known-K piecewise-constant model. These bounds are obtained through direct analysis of the algorithm's concentration properties and information-theoretic lower bounds, without any equation reducing the claimed complexity expression to a quantity defined in terms of itself, without fitted parameters renamed as predictions, and without load-bearing self-citations or imported uniqueness theorems. The joint dependence on jump magnitudes and relative positions for general δ, η emerges directly from the derivation rather than by construction from the inputs.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption The unknown function is piecewise constant with a known number K of discontinuities.
- domain assumption Each query returns a noisy observation whose noise is sub-Gaussian with known variance proxy.
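The sub-Gaussian axiom is what converts a precision target into a per-point query budget. A standard Hoeffding-style count, shown here for illustration rather than taken from the paper:

```python
import math

def samples_for_mean(eps, delta, sigma):
    """Sub-Gaussian (Hoeffding) count: queries at one input so that the
    empirical mean is within eps of the true level with probability at
    least 1 - delta, given variance proxy sigma^2."""
    return math.ceil(2 * sigma ** 2 * math.log(2 / delta) / eps ** 2)
```

Halving the gap that must be resolved roughly quadruples the budget, which is the mechanism behind the 1/Δ² terms appearing in the bounds.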
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel (match: unclear). Matched text: "We establish non-asymptotic upper bounds on its sample budget, together with corresponding lower bounds... H_detect(m) = max_i 1/E_i², H_localize(N) = sum_i 1/Δ_{(i)}²"
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction (match: unclear). Matched text: "Prior work shows that jump magnitudes alone determine the asymptotic sample complexity as δ→0"