pith. machine review for the scientific record.

arxiv: 2601.05245 · v2 · submitted 2026-01-08 · 💻 cs.LG · math.ST · stat.ML · stat.TH

Recognition: unknown

Optimal Lower Bounds for Online Multicalibration

classification 💻 cs.LG · math.ST · stat.ML · stat.TH
keywords lower bounds · multicalibration · upper bound · group · online · calibration
read the original abstract

We prove tight lower bounds for online multicalibration, establishing an information-theoretic separation from marginal calibration. In the general setting where group functions can depend on both context and the learner's predictions, we prove an $\Omega(T^{2/3})$ lower bound on expected multicalibration error using just three disjoint binary groups. This matches the upper bounds of Noarov et al. (2025) up to logarithmic factors and exceeds the $O(T^{2/3-\varepsilon})$ upper bound for marginal calibration (Dagan et al., 2025), thereby separating the two problems. We then turn to lower bounds for the more difficult case of group functions that may depend on context but not on the learner's predictions. In this case, we establish an $\widetilde{\Omega}(T^{2/3})$ lower bound for online multicalibration via an $O(\log^3 T)$-sized group family constructed from an orthonormal basis, again matching upper bounds up to logarithmic factors.
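For readers unfamiliar with the quantity being bounded, one common formulation of online multicalibration error is sketched below. The notation is illustrative and not taken from the paper: predictions $p_t$ lie on a finite grid $V$, outcomes $y_t$ are binary, and $\mathcal{G}$ is the group family.

```latex
% Illustrative definition of online multicalibration error
% (notation ours, not necessarily the paper's).
\[
  \mathrm{MCalErr}(T)
  \;=\;
  \max_{g \in \mathcal{G}} \; \max_{v \in V}
  \left|\, \sum_{t=1}^{T} g(x_t)\,\mathbf{1}\{p_t = v\}\,\bigl(y_t - p_t\bigr) \right|.
\]
% Marginal calibration is the single-group special case
% G = {g ≡ 1}: the error is controlled per prediction level v only,
% with no conditioning on group membership.
```

Under this reading, marginal calibration is the single-group special case, which is why an $\Omega(T^{2/3})$ multicalibration lower bound exceeding the $O(T^{2/3-\varepsilon})$ marginal upper bound separates the two problems. In the paper's most general setting, the group functions may additionally take the learner's prediction as an argument, i.e. $g(x_t, p_t)$ rather than $g(x_t)$.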

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Instance-Adaptive Online Multicalibration

    cs.LG 2026-05 conditional novelty 8.0

    A single online multicalibration algorithm adaptively refines a dyadic grid and achieves instance-dependent rates: O(T^{2/3}) worst-case, O(sqrt T) for marginal stochastic data, and O(sqrt(JT)) for J-piecewise station...

  2. An Efficient Black-Box Reduction from Online Learning to Multicalibration, and a New Route to $\Phi$-Regret Minimization

    cs.LG 2026-04 unverdicted novelty 8.0

    Black-box reductions from no-regret online learning to multicalibration and from multicalibration to Phi-regret minimization are established, resolving the main open question in Garg et al. (SODA '24).

  3. The Sample Complexity of Multicalibration

    cs.LG 2026-04 unverdicted novelty 7.0

    Multicalibration has minimax sample complexity Θ̃(ε^{-3}) when the number of groups is at most ε^{-κ} for fixed κ>0, versus Θ̃(ε^{-2}) for marginal calibration, with a sharp threshold at κ=0.