Zeroth-order methods achieve the same expected convergence rate as first-order methods without extra dimension dependence by treating them as input-to-state stable systems with controllable perturbations.
arXiv preprint arXiv:2505.02281, 2025.
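To make the summary above concrete, here is a minimal, generic sketch of the core zeroth-order primitive it refers to: a two-point finite-difference gradient estimate built from random directions, plugged into plain gradient descent. This is a standard textbook construction for illustration only, not the paper's ISS-based analysis or its algorithm; all names (`zo_grad`, the toy objective, the step sizes) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, rng, mu=1e-4, n_dirs=20):
    # Two-point zeroth-order gradient estimate: average of directional
    # finite differences (f(x + mu*u) - f(x - mu*u)) / (2*mu) along
    # random Gaussian directions u. Only function values are used.
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

# Toy run: minimize f(x) = ||x||^2 using only function evaluations.
f = lambda x: float(np.dot(x, x))
x = np.ones(5)
for _ in range(200):
    x = x - 0.05 * zo_grad(f, x, rng)
```

The estimator's noise is exactly the kind of controllable perturbation that an input-to-state-stability viewpoint treats as a bounded disturbance to the first-order dynamics.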
2 Pith papers cite this work. Polarity classification is still indexing.
Fields: math.OC (2). Years: 2026 (2). Verdicts: UNVERDICTED (2).
citing papers explorer
- From Cursed to Competitive: Closing the ZO-FO Gap via Input-to-State Stability
Zeroth-order methods achieve the same expected convergence rate as first-order methods without extra dimension dependence by treating them as input-to-state stable systems with controllable perturbations.
- Robust Learning Meets Quasar-Convex Optimization: Inexact High-Order Proximal-Point Methods
Robust learning problems are formulated as quasar-convex optimization, and HiPPA is proposed as an inexact high-order proximal method with global and superlinear convergence guarantees.
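The summary above can be illustrated with a minimal sketch of an inexact high-order proximal-point iteration: each outer step approximately minimizes f(y) + (lam/p)·||y - x||^p with a few inner gradient steps, which is the source of the inexactness. This is a generic illustration under assumed names and parameters (`inexact_prox_step`, a toy convex and hence quasar-convex objective), not the HiPPA method or its convergence analysis.

```python
import numpy as np

def inexact_prox_step(grad_f, x, lam=1.0, p=3, inner_iters=50, lr=0.05):
    # Approximately minimize f(y) + (lam/p) * ||y - x||^p over y with a
    # few gradient steps -- the "inexact" inner solve of a high-order
    # (p-th power) proximal-point iteration.
    y = x.copy()
    for _ in range(inner_iters):
        r = y - x
        nr = np.linalg.norm(r)
        reg_grad = lam * nr ** (p - 2) * r  # gradient of (lam/p)*||r||^p
        y = y - lr * (grad_f(y) + reg_grad)
    return y

# Outer loop on the toy objective f(x) = ||x||^2 (convex, hence quasar-convex).
f = lambda x: float(np.dot(x, x))
grad_f = lambda x: 2 * x
x = np.full(3, 2.0)
for _ in range(30):
    x = inexact_prox_step(grad_f, x)
```

Near the minimizer the p-th power regularizer vanishes faster than a quadratic one, which is the intuition behind the superlinear local rates such methods target.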