Performative prediction with bandit feedback: Learning through reparameterization
2 Pith papers cite this work. Polarity classification is still indexing.
fields: math.OC
years: 2026
verdicts: 2 (UNVERDICTED)
representative citing papers: 2
Citing papers
- Stochastic Non-Smooth Non-Convex Optimization with Decision-Dependent Distributions
  Establishes an O(d² δ⁻³ ε⁻³) SZO complexity bound for reaching (δ, ε)-Goldstein stationary points in non-smooth non-convex stochastic zeroth-order optimization with decision-dependent distributions, plus improved rates for the smooth and Hessian-Lipschitz cases.
- Complexity Guarantees for Zeroth-order Methods via Exponentially-shifted Gaussian Smoothing: Mitigating Dimension-dependence and Incorporating Decision-dependence
  Exponentially-shifted Gaussian smoothing yields zeroth-order gradient estimators with linear dimension dependence, enabling improved complexity bounds for stochastic optimization, including decision-dependent regimes.
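For context, the smoothing-based zeroth-order estimator underlying the second citing paper can be sketched in its plain (unshifted) Gaussian form; the exponential shift is that paper's refinement for reducing dimension dependence and is not reproduced here. This is a minimal illustrative sketch, not the paper's method: it estimates a gradient from function evaluations only, via g = E_u[(f(x + μu) − f(x))/μ · u] with u ~ N(0, I).

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=50_000, rng=None):
    """Plain Gaussian-smoothing zeroth-order gradient estimate of f at x.

    Averages (f(x + mu*u) - f(x)) / mu * u over Gaussian directions u,
    which is an unbiased estimate of the gradient of the smoothed f.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    fx = f(x)
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)       # random Gaussian direction
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_samples
```

On a smooth test function such as f(x) = ‖x‖², the estimate approaches the true gradient 2x as the sample count grows; the variance of this plain estimator grows with the dimension d, which is the dependence the exponentially-shifted variant is designed to mitigate.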