arXiv preprint arXiv:2310.17651
2 Pith papers cite this work.
Citing papers:
- An Efficient Black-Box Reduction from Online Learning to Multicalibration, and a New Route to $\Phi$-Regret Minimization
  Black-box reductions from no-regret online learning to multicalibration, and from multicalibration to $\Phi$-regret minimization, are established, resolving the main open question of Garg et al. (SODA '24).
- Risk-Controlled Post-Processing of Decision Policies
  Risk-controlled post-processing yields a threshold-structured policy that follows the baseline except where an oracle fallback sharply reduces conditional violation risk, achieving O(log n / n) expected excess risk in the i.i.d. setting and exact risk control under exchangeability.
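The threshold-structured policy described in the second summary can be sketched as follows. This is a minimal illustration, not the paper's method: the context distribution, the risk function `violation_risk`, the `baseline`/`oracle` actions, the threshold grid, and the target level `alpha` are all hypothetical assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: contexts are scalars in [0, 1]; the baseline always
# acts, while the oracle fallback is a safe action that never violates.
def baseline(x):
    return 1  # act

def oracle(x):
    return 0  # safe fallback

def violation_risk(x):
    # Assumed conditional violation risk of the baseline given context x.
    return x ** 2

def make_policy(tau):
    # Threshold-structured policy: follow the baseline except where the
    # estimated conditional violation risk exceeds the threshold tau.
    return lambda x: oracle(x) if violation_risk(x) > tau else baseline(x)

def calibrate(alpha, xs):
    # Pick the largest tau on a grid whose empirical violation rate on the
    # sample stays at or below the target level alpha. The rate is monotone
    # increasing in tau here, so the last feasible tau is the largest one.
    best = 0.0
    for tau in np.linspace(0, 1, 101):
        policy = make_policy(tau)
        # A violation occurs only where the policy follows the baseline.
        rate = np.mean([violation_risk(x) * policy(x) for x in xs])
        if rate <= alpha:
            best = tau
    return best

xs = rng.uniform(size=1000)
tau = calibrate(alpha=0.05, xs=xs)
policy = make_policy(tau)
```

Calibrating the threshold on a held-out sample in this split-conformal style is one natural way to obtain the kind of finite-sample risk guarantee under exchangeability that the summary mentions; the sketch above only checks the constraint empirically on the tuning sample.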