pith. machine review for the scientific record.

arxiv: 1802.02246 · v1 · submitted 2018-02-06 · 🧮 math.OC

Recognition: unknown

Approximation Methods for Bilevel Programming

Authors on Pith: no claims yet
classification 🧮 math.OC
keywords: objective · under-approximation · bilevel programming · assumption class · complexity
0 comments
read the original abstract

In this paper, we study a class of bilevel programming problems where the inner objective function is strongly convex. More specifically, under some mild assumptions on the partial derivatives of both inner and outer objective functions, we present an approximation algorithm for solving this class of problems and provide its finite-time convergence analysis under different convexity assumptions on the outer objective function. We also present an accelerated variant of this method which improves the rate of convergence under the convexity assumption. Furthermore, we generalize our results to the stochastic setting, where only noisy information about both objective functions is available. To the best of our knowledge, this is the first time that such (stochastic) approximation algorithms with established iteration complexity (sample complexity) are provided for bilevel programming.
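The double-loop scheme the abstract describes can be sketched on a toy problem. This is an illustrative reconstruction, not the paper's algorithm: the objectives `f`, `g`, the step sizes, and the iteration counts are all assumed choices. The inner problem is strongly convex, so a few gradient steps approximate its solution, after which an approximate hypergradient (via implicit differentiation) drives the outer update.

```python
# Toy bilevel problem (illustrative choice, not from the paper):
#   outer: f(x, y) = 0.5 * (y - 1)**2
#   inner: g(x, y) = 0.5 * (y - x)**2   (1-strongly convex in y, so y*(x) = x)
# The reduced objective is F(x) = 0.5 * (x - 1)**2, minimized at x = 1.

def inner_grad(x, y):          # dg/dy
    return y - x

def outer_grad_y(x, y):        # df/dy
    return y - 1.0

def approx_bilevel(x0=5.0, outer_iters=200, inner_iters=10,
                   alpha=0.5, beta=0.5):
    """Double-loop approximation: estimate y*(x) with a few inner
    gradient steps, then take an approximate hypergradient step on x."""
    x, y = x0, 0.0
    for _ in range(outer_iters):
        # Inner loop: gradient descent on the strongly convex inner problem.
        for _ in range(inner_iters):
            y -= alpha * inner_grad(x, y)
        # Implicit-function hypergradient: dF/dx = df/dx + (dy*/dx) * df/dy.
        # For this g, df/dx = 0 and dy*/dx = -g_xy / g_yy = 1.
        hypergrad = outer_grad_y(x, y)
        x -= beta * hypergrad
    return x

x_star = approx_bilevel()
print(round(x_star, 3))  # close to the true minimizer x = 1
```

Because the inner solve is inexact, the hypergradient carries an approximation error that shrinks with `inner_iters`; the paper's analysis quantifies how many inner and outer iterations suffice for a target accuracy.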

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 13 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. IGT-OMD: Implicit Gradient Transport for Decision-Focused Learning under Delayed Feedback

    cs.LG 2026-05 unverdicted novelty 7.0

    IGT-OMD reduces gradient transport error from quadratic to linear in delay length for delayed bilevel optimization and achieves sublinear regret with adaptive steps.

  2. A Barrier-Metric First-Order Method for Linearly Constrained Bilevel Optimization

    math.OC 2026-05 unverdicted novelty 7.0

A barrier-smoothed first-order method achieves stationarity rates of Õ(K^{-2/3}) deterministic and Õ(K^{-2/5}) stochastic for linearly constrained bilevel optimization.

  3. Second-Order Bilevel Optimization with Accelerated Convergence Rates

    math.OC 2026-05 unverdicted novelty 7.0

    Second-order bilevel methods achieve Õ(ε^{-1.5}) iteration complexity for second-order stationary points, faster than first-order approaches, with a lazy variant improving computational efficiency by √d.

  4. On the Stability and Generalization of First-order Bilevel Minimax Optimization

    cs.LG 2026-04 unverdicted novelty 7.0

    Provides the first systematic generalization analysis via algorithmic stability for single-timescale and two-timescale stochastic gradient descent-ascent in bilevel minimax problems.

  5. Fine-grained Analysis of Stability and Generalization for Stochastic Bilevel Optimization

    cs.LG 2026-04 unverdicted novelty 7.0

    Derives upper bounds on on-average argument stability for single- and two-timescale SGD in bilevel optimization under NC-NC, C-C, and SC-SC regimes, linking stability directly to generalization gaps.

  6. CHAL: Council of Hierarchical Agentic Language

    cs.AI 2026-05 unverdicted novelty 6.0

    CHAL is a multi-agent dialectic system that performs structured belief optimization over defeasible domains using Bayesian-inspired graph representations and configurable meta-cognitive value system hyperparameters.

  7. BROS: Bias-Corrected Randomized Subspaces for Memory-Efficient Single-Loop Bilevel Optimization

    cs.LG 2026-05 unverdicted novelty 6.0

    BROS achieves memory-efficient single-loop stochastic bilevel optimization with O(ε^{-2}) sample complexity by performing updates in randomized subspaces and using Rademacher bi-probe correction for unbiased estimation.

  8. BROS: Bias-Corrected Randomized Subspaces for Memory-Efficient Single-Loop Bilevel Optimization

    cs.LG 2026-05 unverdicted novelty 6.0

    BROS achieves the same O(ε^{-2}) sample complexity as exact single-loop SBO methods while cutting peak memory by up to 44.9% through randomized subspaces and bias-corrected Hessian estimation.

  9. Penalty-Based First-Order Methods for Bilevel Optimization with Minimax and Constrained Lower-Level Problems

    math.OC 2026-05 unverdicted novelty 6.0

    Penalty-based first-order methods find ε-KKT points in bilevel minimax problems with Õ(ε^{-4}) deterministic and Õ(ε^{-9}) stochastic oracle complexity, improving prior bounds for constrained lower-level cases via Lag...

  10. Interactive Inverse Reinforcement Learning of Interaction Scenarios via Bi-level Optimization

    cs.LG 2026-05 unverdicted novelty 6.0

    Interactive IRL is cast as bi-level optimization with an inner loop learning expert rewards and an outer loop learning interaction policies, solved by the convergent BISIRL algorithm.

  11. S2MAM: Semi-supervised Meta Additive Model for Robust Estimation and Variable Selection

    cs.LG 2026-04 unverdicted novelty 6.0

    S2MAM is a new semi-supervised model that uses bilevel optimization to automatically identify informative variables, update similarity matrices, and provide interpretable predictions with theoretical guarantees.

  12. Representation-Guided Parameter-Efficient LLM Unlearning

    cs.CL 2026-04 unverdicted novelty 6.0

    REGLU guides LoRA-based unlearning via representation subspaces and orthogonal regularization to outperform prior methods on forget-retain trade-off in LLM benchmarks.

  13. A Distributed Bilevel Framework for the Macroscopic Optimization of Multi-Agent Systems

    math.OC 2026-04 unverdicted novelty 6.0

    A distributed bilevel algorithm optimizes emergent macroscopic behavior in multi-agent systems by combining local exponential-family state estimation with hypergradient microscopic updates and proves convergence via t...