pith. machine review for the scientific record.

arxiv: 2604.27355 · v1 · submitted 2026-04-30 · 🧮 math.OC · cs.SY · eess.SY

Recognition: unknown

Over-Approximating Minimizer Sets of Constrained Convex Programs with Parametric Uncertainty via Reachability Analysis

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 08:12 UTC · model grok-4.3

classification 🧮 math.OC · cs.SY · eess.SY
keywords convex optimization · parametric uncertainty · projected gradient descent · reachability analysis · over-approximation · minimizer sets · system-level synthesis · uncertain dynamical systems

The pith

Projected gradient descent on uncertain convex problems yields certified outer approximations to the set of minimizers via reachability analysis.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes a method to compute certified outer approximations to the set of solutions of a parameterized strongly convex optimization problem whose cost depends on uncertain bounded parameters. By modeling the projected gradient descent iterates as an uncertain dynamical system with constant but unknown parameters, the forward reachable sets of this system serve as outer approximations to the optimizer set. These approximations include explicit error bounds that decay exponentially with the number of iterations. The authors apply system-level synthesis to optimize the step-size sequence, which reduces the conservativeness of the bounds. This approach enables tractable over-approximations for high-dimensional decision variables where enumerating the full minimizer set directly is intractable.
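A one-dimensional toy problem (ours, not the paper's) makes the mechanism concrete: for f(x, θ) = ½(x − θ)² with θ ∈ [1, 2] and feasible set C = [0, 3], the minimizer set is exactly [1, 2], and propagating an interval through the projected gradient descent map with interval arithmetic yields a box that contains every fixed-parameter iterate, hence an outer approximation of the minimizer set up to the PGD convergence error:

```python
def clip(v, lo, hi):
    return max(lo, min(hi, v))

def pgd_interval_step(box, a=0.5, theta=(1.0, 2.0), C=(0.0, 3.0)):
    # Exact image of the interval `box` under x -> proj_C((1-a)*x + a*theta),
    # taken over all x in `box` and all theta in the uncertainty interval.
    # (The map is monotone here, so propagating endpoints is exact.)
    lo = clip((1 - a) * box[0] + a * theta[0], *C)
    hi = clip((1 - a) * box[1] + a * theta[1], *C)
    return (lo, hi)

box = (0.0, 3.0)  # start from the whole feasible set C
for _ in range(20):
    box = pgd_interval_step(box)

# box has contracted to a tight outer box around the minimizer set [1, 2]
print(box)
```

The gap between the box and the true set [1, 2] halves at every step, which is the exponential error decay the paper certifies in general.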

Core claim

By treating the cost parameter as constant but unknown, the projected gradient descent iterates are interpreted as an uncertain dynamical system whose forward reachable sets provide outer approximations of the optimizer set, with an explicit error bound that decays exponentially with the iteration count. System-level synthesis is applied on the projected gradient descent dynamics to optimize the step-size sequence and obtain improved reachable-set over-approximations.

What carries the argument

Forward reachable sets of the projected gradient descent dynamics modeled as an uncertain dynamical system with fixed but unknown cost parameters, which outer-approximate the minimizer set.

If this is right

  • The reachable sets contain every possible optimizer for parameters inside the given bounded uncertainty set.
  • The distance from the reachable set to the true optimizer set shrinks exponentially as more projected gradient descent iterations are performed.
  • System-level synthesis yields step-size sequences that produce tighter over-approximations than fixed or heuristic step sizes.
  • The method scales to constrained convex programs with high-dimensional decision variables while remaining computationally tractable.
  • The resulting over-approximations are less conservative than those from existing baseline techniques on the same class of problems.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same reachability framing could be applied to other first-order methods such as accelerated gradient descent to obtain analogous certified bounds.
  • In online or receding-horizon control settings the exponentially tightening bounds could be used to certify robust feasible actions at each time step.
  • The approach naturally suggests a trade-off between iteration count and bound tightness that could be optimized jointly with the step-size sequence in future extensions.

Load-bearing premise

The underlying optimization problem is strongly convex so that each fixed parameter value yields a unique minimizer to which projected gradient descent converges exponentially fast.
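That premise is the standard contraction property of gradient descent on a μ-strongly convex, L-smooth objective; a minimal numerical check on a quadratic (our example, using the textbook rate ρ = max(|1 − αμ|, |1 − αL|)):

```python
import numpy as np

# f(x) = 0.5 x^T Q x - b^T x with mu = 1, L = 4 (eigenvalues of Q);
# the step a = 2/(mu + L) gives contraction factor rho = (L - mu)/(L + mu).
Q = np.diag([1.0, 4.0])
b = np.array([1.0, 2.0])
a = 2.0 / (1.0 + 4.0)
rho = (4.0 - 1.0) / (4.0 + 1.0)          # = 0.6

x_star = np.linalg.solve(Q, b)           # unique minimizer (strong convexity)
x = np.array([10.0, -10.0])
e0 = np.linalg.norm(x - x_star)
for k in range(1, 30):
    x = x - a * (Q @ x - b)              # plain gradient step (a projection,
                                         # being nonexpansive, would only
                                         # shrink the error further)
    assert np.linalg.norm(x - x_star) <= rho**k * e0 + 1e-9
```

If strong convexity fails, the minimizer for a fixed parameter need not be unique and the iterates need not contract, so both the uniqueness and the exponential error bound would be lost.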

What would settle it

Take a low-dimensional strongly convex program with explicitly known bounded parameters and true minimizer set, run the reachable-set method for a chosen iteration count, and verify whether the computed outer set contains the true minimizer set while the observed distance to the true set respects the stated exponential error bound.
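A mock-up of that experiment on a toy problem (ours, not the paper's benchmark): f(x, θ) = ½‖x − θ‖² on C = [0, 3] × [−2, 2] with θ ranging over [1, 2] × [−1, 0], so the true minimizer set is exactly the θ-box; the check is that the interval reachable set after K iterations contains every sampled minimizer:

```python
import random

def clip(v, lo, hi):
    return max(lo, min(hi, v))

C  = [(0.0, 3.0), (-2.0, 2.0)]   # feasible box
TH = [(1.0, 2.0), (-1.0, 0.0)]   # parameter uncertainty box
a, K = 0.5, 25

# Interval reachable set of PGD; the map decouples per coordinate here,
# so propagating the box endpoints is exact.
box = [list(c) for c in C]
for _ in range(K):
    for i in range(2):
        box[i] = [clip((1 - a) * box[i][0] + a * TH[i][0], *C[i]),
                  clip((1 - a) * box[i][1] + a * TH[i][1], *C[i])]

# Containment check: the minimizer for each sampled theta is theta itself,
# since theta lies inside C for this toy problem.
random.seed(0)
for _ in range(200):
    th = [random.uniform(*TH[i]) for i in range(2)]
    assert all(box[i][0] <= th[i] <= box[i][1] for i in range(2))
print("certified outer box:", box)
```

A violation of either the containment or the stated decay rate on such a transparent instance would falsify the method's guarantee directly.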

Figures

Figures reproduced from arXiv: 2604.27355 by Antoine P. Leeman, Brendan Gould, Chih-Yuan Chiu, Glen Chou, Kyriakos G. Vamvoudakis, Samuel Coogan.

Figure 1: Robust over-approximation of the parameterized solution set of (51).
Figure 3: Robust over-approximation of the parameterized minimizer set.
original abstract

We study the set of solutions to a parameterized, strongly convex optimization problem whose cost depends on uncertain, bounded parameters. We compute a certified outer approximation of the corresponding set of optimizers, using convergence properties of the projected gradient descent (PGD) algorithm for convex programs. Concretely, by treating the cost parameter as constant but unknown, we interpret the PGD iterates as an uncertain dynamical system and analyze its forward reachable sets. Since PGD converges exponentially to the unique optimizer for each fixed parameter, these reachable sets provide outer approximations of the optimizer set, with an explicit error bound that decays exponentially with the iteration count. We apply system-level synthesis (SLS) on the PGD dynamics to optimize the step-size sequence and obtain reachable-set over-approximations. Our method outperforms existing baselines in over-approximating, with low conservativeness, the minimizer sets of convex programs with uncertain costs and high-dimensional decision variables.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims to compute certified outer approximations of the set of minimizers for parameterized strongly convex constrained optimization problems with bounded uncertain parameters. It models the iterates of projected gradient descent (PGD) as an uncertain dynamical system whose forward reachable sets over-approximate the optimizer set, leveraging the exponential convergence of PGD to each fixed-parameter minimizer to obtain an explicit error bound that decays exponentially in the iteration count. System-level synthesis (SLS) is applied to the PGD dynamics to optimize the step-size sequence and tighten the reachable-set over-approximations. The method is reported to outperform baselines with low conservativeness, especially for high-dimensional decision variables.

Significance. If the central technical step of applying SLS to the PGD iteration map is placed on a rigorous footing, the work would supply a novel, certifiably convergent procedure for outer-approximating minimizer sets under parametric uncertainty. The explicit exponential error bound and the use of SLS for step-size design are attractive features that could be useful in robust optimization and control applications. The approach correctly invokes standard exponential convergence of PGD rather than deriving it from the target set itself.

major comments (2)
  1. [Method section describing SLS application to PGD dynamics] The PGD map x ↦ proj_C(x − α ∇f(x, θ)) is nonlinear for general strongly convex f (gradient and projection are both nonlinear unless f is quadratic and C is affine). SLS was developed for LTI systems. The manuscript applies SLS directly to the PGD dynamics (see the paragraph beginning “We apply system-level synthesis (SLS) on the PGD dynamics” and the subsequent derivation of the optimized step-size sequence) without stating whether (i) the problem class is restricted to quadratic objectives, (ii) a linear over-approximation or contraction embedding that preserves the exponential error bound is supplied, or (iii) SLS is extended to nonlinear uncertain maps. This gap is load-bearing for the claimed explicit exponential decay of the over-approximation error and for the optimized step sizes.
  2. [Theorem stating the explicit error bound] The main error-bound claim (explicit exponential decay with iteration count) is stated to follow from the reachable-set analysis. Because the bound is obtained via the SLS-optimized step sizes, the same justification gap identified above renders the quantitative guarantee unverified for the stated class of non-quadratic problems.
minor comments (2)
  1. [Abstract] The abstract asserts outperformance “with low conservativeness” but does not quantify the metric (e.g., volume ratio or Hausdorff distance) or report the dimensions at which the comparison was performed; a short table or sentence in the abstract would improve clarity.
  2. [Problem formulation paragraph] Notation for the uncertain parameter set Θ and the projection operator is introduced without an explicit reference to the standing assumption that Θ is compact and convex; adding a single sentence in the problem-statement paragraph would remove ambiguity.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for their thorough review and insightful comments on our manuscript. The points raised regarding the application of system-level synthesis (SLS) to the projected gradient descent (PGD) dynamics are well-taken and highlight the need for greater precision in describing the problem class and the technical assumptions. We will revise the manuscript accordingly to strengthen the presentation and ensure the claims are rigorously supported.

point-by-point responses
  1. Referee: [Method section describing SLS application to PGD dynamics] The PGD map x ↦ proj_C(x − α ∇f(x, θ)) is nonlinear for general strongly convex f (gradient and projection are both nonlinear unless f is quadratic and C is affine). SLS was developed for LTI systems. The manuscript applies SLS directly to the PGD dynamics (see the paragraph beginning “We apply system-level synthesis (SLS) on the PGD dynamics” and the subsequent derivation of the optimized step-size sequence) without stating whether (i) the problem class is restricted to quadratic objectives, (ii) a linear over-approximation or contraction embedding that preserves the exponential error bound is supplied, or (iii) SLS is extended to nonlinear uncertain maps. This gap is load-bearing for the claimed explicit exponential decay of the over-approximation error and for the optimized step sizes.

    Authors: We agree that the manuscript does not explicitly restrict the problem class or provide a linear over-approximation for general strongly convex objectives. The derivation assumes a setting where the PGD map is affine, which holds for quadratic objectives with affine constraints. To address this, we will revise the manuscript to explicitly restrict the problem class to quadratic objectives with affine constraints, under which the PGD iteration map is affine (LTI with parametric uncertainty). This preserves the exponential convergence and allows direct application of SLS. We will update the abstract, introduction, and method sections to reflect this scope. revision: yes

  2. Referee: [Theorem stating the explicit error bound] The main error-bound claim (explicit exponential decay with iteration count) is stated to follow from the reachable-set analysis. Because the bound is obtained via the SLS-optimized step sizes, the same justification gap identified above renders the quantitative guarantee unverified for the stated class of non-quadratic problems.

    Authors: With the revision to restrict to quadratic objectives with affine constraints, the PGD map is affine, the system is LTI, and the SLS-based analysis directly supports the explicit exponential error bound. We will update the theorem and proof to note the restriction explicitly, ensuring the guarantee is verified for the stated class. revision: yes
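The restriction buys exactly what SLS needs: for a quadratic cost f(x, θ) = ½xᵀQx − (Pθ + c)ᵀx (our parameterization, chosen for illustration), the gradient step is affine in (x, θ), i.e. an LTI system x⁺ = (I − αQ)x + αPθ + αc with the unknown parameter entering as a constant input. A quick numerical confirmation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, a = 3, 2, 0.1
Q = np.diag([1.0, 2.0, 3.0])              # strongly convex quadratic cost
P = rng.standard_normal((n, m))           # how theta enters the gradient
c = rng.standard_normal(n)

def step(x, th):
    # gradient step for f(x, th) = 0.5 x^T Q x - (P th + c)^T x
    return x - a * (Q @ x - P @ th - c)

A = np.eye(n) - a * Q                     # predicted LTI state matrix
B = a * P                                 # predicted input matrix for theta

x, th = rng.standard_normal(n), rng.standard_normal(m)
assert np.allclose(step(x, th), A @ x + B @ th + a * c)
```

With A Schur stable (here its eigenvalues are 0.9, 0.8, 0.7), the iteration is exactly the uncertain LTI form to which SLS applies, so the optimized step-size sequence and the exponential bound are rigorous on this class.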

Circularity Check

0 steps flagged

No circularity; derivation rests on standard external PGD convergence and independent SLS framework

full rationale

The paper interprets PGD iterates as an uncertain dynamical system to generate reachable-set over-approximations of the minimizer set, relying on the fact that PGD converges exponentially to the unique optimizer for each fixed parameter value. This convergence property is a standard result from convex optimization theory and is invoked without re-derivation from the paper's own target set or fitted quantities. System-level synthesis is applied to the PGD dynamics to optimize step sizes, but this draws on the established SLS literature rather than any self-citation chain or ansatz smuggled from prior author work. No step reduces by construction to a fitted input renamed as prediction, a self-definitional loop, or a load-bearing uniqueness theorem imported from the authors themselves. The explicit exponential error bound follows directly from the contraction property of PGD, which is external and falsifiable independently of the present construction. The approach is therefore self-contained against external benchmarks.
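For reference, the external contraction fact the rationale leans on can be stated as follows (standard result; our notation): for μ-strongly convex, L-smooth f(·, θ) and step size α ∈ (0, 2/L),

```latex
\begin{aligned}
\|x_{k+1}(\theta) - x^\star(\theta)\|
  &= \big\|\operatorname{proj}_C\!\big(x_k - \alpha \nabla_x f(x_k,\theta)\big)
     - \operatorname{proj}_C\!\big(x^\star - \alpha \nabla_x f(x^\star,\theta)\big)\big\| \\
  &\le \rho \,\|x_k(\theta) - x^\star(\theta)\|,
  \qquad \rho := \max\{\,|1-\alpha\mu|,\ |1-\alpha L|\,\} < 1,
\end{aligned}
```

using nonexpansiveness of the projection and the fact that x⋆(θ) is a fixed point of the PGD map; iterating gives ‖x_k − x⋆‖ ≤ ρᵏ‖x_0 − x⋆‖, a bound that is checkable independently of the present construction.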

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The method rests on two standard domain assumptions from convex optimization and dynamical systems; the only free parameter is the step-size sequence (optimized via SLS), and no invented entities are introduced.

free parameters (1)
  • step-size sequence
    Sequence of step sizes for projected gradient descent that is optimized via system-level synthesis to reduce conservativeness of the reachable-set over-approximations.
axioms (2)
  • domain assumption Projected gradient descent converges exponentially to the unique minimizer for each fixed parameter value in a strongly convex constrained program.
    Invoked to guarantee that reachable sets shrink exponentially toward the true minimizer set.
  • domain assumption Uncertain cost parameters are constant (not time-varying) and lie in known bounded sets.
    Allows modeling the parameter as a fixed but unknown constant input to the projected gradient descent dynamical system.

pith-pipeline@v0.9.0 · 5493 in / 1668 out tokens · 71873 ms · 2026-05-07T08:12:07.389863+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

27 extracted references · 4 canonical work pages · 3 internal anchors

  1. [1]

    Inverse KKT: Learning Cost Functions of Manipulation Tasks from Demonstrations,

    P. Englert, N. A. Vien, and M. Toussaint, “Inverse KKT: Learning Cost Functions of Manipulation Tasks from Demonstrations,” The International Journal of Robotics Research, vol. 36, no. 13-14, 2017

  2. [2]

    Online and Offline Learning of Player Objectives from Partial Observations in Dynamic Games,

    L. Peters, V. Rubies-Royo, C. J. Tomlin, L. Ferranti, J. Alonso-Mora, C. Stachniss, and D. Fridovich-Keil, “Online and Offline Learning of Player Objectives from Partial Observations in Dynamic Games,” The International Journal of Robotics Research, vol. 42, no. 10, 2023

  3. [3]

    Constrained Inverse Optimal Control With Application to a Human Manipulation Task,

    M. Menner, P. Worsnop, and M. N. Zeilinger, “Constrained Inverse Optimal Control With Application to a Human Manipulation Task,” IEEE Transactions on Control Systems Technology, vol. 29, 2021

  4. [4]

    Certified Gradient-Based Contact-Rich Manipulation via Smoothing-Error Reachable Tubes,

    W.-C. Li and G. Chou, “Certified Gradient-Based Contact-Rich Manipulation via Smoothing-Error Reachable Tubes,” arXiv preprint arXiv:2602.09368, 2026

  5. [5]

    A Convex Formulation of Compliant Contact Between Filaments and Rigid Bodies,

    ——, “A Convex Formulation of Compliant Contact Between Filaments and Rigid Bodies,” arXiv preprint arXiv:2509.13434, 2025

  6. [6]

    Reachability Analysis of Nonlinear Systems with Uncertain Parameters using Conservative Linearization,

    M. Althoff, O. Stursberg, and M. Buss, “Reachability Analysis of Nonlinear Systems with Uncertain Parameters using Conservative Linearization,” in IEEE Conference on Decision and Control, 2008

  7. [7]

    Robust Optimal Control Using Set-based Reachability Analysis,

    L. Schäfer and M. Althoff, “Robust Optimal Control Using Set-based Reachability Analysis,” in European Control Conference. IEEE, 2025

  8. [8]

    S. J. Wright and B. Recht, Optimization for Data Analysis. Cambridge University Press, 2022

  9. [9]

    Robust Nonlinear Optimal Control via System Level Synthesis,

    A. P. Leeman, J. Köhler, A. Zanelli, S. Bennani, and M. N. Zeilinger, “Robust Nonlinear Optimal Control via System Level Synthesis,” IEEE Transactions on Automatic Control, vol. 70, no. 7, 2025

  10. [10]

    VISION-SLS: Safe Perception-Based Control from Learned Visual Representations via System Level Synthesis

    A. P. Leeman, S. Zhan, M. N. Zeilinger, and G. Chou, “VISION-SLS: Safe Perception-Based Control from Learned Visual Representations via System Level Synthesis,” arXiv preprint arXiv:2604.24894, 2026

  11. [11]

    An Input-Output Parametrization of Stabilizing Controllers: amidst Youla and System Level Synthesis,

    L. Furieri, Y. Zheng, A. Papachristodoulou, and M. Kamgarpour, “An Input-Output Parametrization of Stabilizing Controllers: amidst Youla and System Level Synthesis,” IEEE Control Systems Letters, vol. 3, no. 4, pp. 1014–1019, 2019

  12. [12]

    Safe Large-Scale Robust Nonlinear MPC in Milliseconds via Reachability-Constrained System Level Synthesis on the GPU

    J. Fang and G. Chou, “Safe Large-Scale Robust Nonlinear MPC in Milliseconds via Reachability-Constrained System Level Synthesis on the GPU,” arXiv preprint arXiv:2604.07644, 2026

  13. [13]

    Online Convex Optimization in the Bandit Setting: Gradient Descent without Gradient,

    A. D. Flaxman, A. T. Kalai, and H. B. McMahan, “Online Convex Optimization in the Bandit Setting: Gradient Descent without Gradient,” in ACM-SIAM Symposium on Discrete Algorithms, USA, 2005

  14. [14]

    The Price of Robustness,

    D. Bertsimas and M. Sim, “The Price of Robustness,” Operations Research, vol. 52, no. 1, pp. 35–53, 2004

  15. [15]

    Robust Convex Optimization,

    A. Ben-Tal and A. Nemirovski, “Robust Convex Optimization,” Mathematics of Operations Research, vol. 23, no. 4, pp. 769–805, 1998

  16. [16]

    Theory and Applications of Robust Optimization,

    D. Bertsimas, D. B. Brown, and C. Caramanis, “Theory and Applications of Robust Optimization,” SIAM Review, vol. 53, no. 3, 2011

  17. [17]

    J. F. Bonnans and A. Shapiro, Perturbation Analysis of Optimization Problems. Springer Science & Business Media, 2013

  18. [18]

    A. L. Dontchev and R. T. Rockafellar, Implicit Functions and Solution Mappings. Springer, 2009, vol. 543

  19. [19]

    Sensitivity Analysis of Parameterized Variational Inequalities,

    A. Shapiro, “Sensitivity Analysis of Parameterized Variational Inequalities,” Mathematics of Operations Research, vol. 30, no. 1, 2005

  20. [20]

    Lexicographic Perturbation for Multiparametric Linear Programming with Applications to Control,

    C. N. Jones, E. C. Kerrigan, and J. M. Maciejowski, “Lexicographic Perturbation for Multiparametric Linear Programming with Applications to Control,” Automatica, vol. 43, no. 10, pp. 1808–1816, 2007

  21. [21]

    On the Computation of Invariant Sets for Constrained Nonlinear Systems: An Interval Arithmetic Approach,

    J. M. Bravo, D. Limón, T. Alamo, and E. F. Camacho, “On the Computation of Invariant Sets for Constrained Nonlinear Systems: An Interval Arithmetic Approach,” Automatica, vol. 41, no. 9, 2005

  22. [22]

    immrax: A Parallelizable and Differentiable Toolbox for Interval Analysis and Mixed Monotone Reachability in JAX,

    A. Harapanahalli, S. Jafarpour, and S. Coogan, “immrax: A Parallelizable and Differentiable Toolbox for Interval Analysis and Mixed Monotone Reachability in JAX,” IFAC-PapersOnLine, vol. 58, 2024

  23. [23]

    Hamilton-Jacobi Reachability: A Brief Overview and Recent Advances,

    S. Bansal, M. Chen, S. Herbert, and C. J. Tomlin, “Hamilton-Jacobi Reachability: A Brief Overview and Recent Advances,” in CDC, 2017

  24. [24]

    Fast System Level Synthesis: Robust Model Predictive Control using Riccati Recursions,

    A. P. Leeman, J. Köhler, F. Messerer, A. Lahr, M. Diehl, and M. N. Zeilinger, “Fast System Level Synthesis: Robust Model Predictive Control using Riccati Recursions,” IFAC Conference on Nonlinear Model Predictive Control, vol. 58, no. 18, pp. 173–180, 2024

  25. [25]

    System Level Synthesis,

    J. Anderson, J. C. Doyle, S. H. Low, and N. Matni, “System Level Synthesis,” Annual Reviews in Control, vol. 47, 2019

  26. [26]

    Safe Control of Partially-Observed Linear Time-Varying Systems with Minimal Worst-Case Dynamic Regret,

    H. Zhou and V. Tzoumas, “Safe Control of Partially-Observed Linear Time-Varying Systems with Minimal Worst-Case Dynamic Regret,” in IEEE Conference on Decision and Control, 2023

  27. [27]

    Bandit Learning in Concave N-Person Games,

    M. Bravo, D. Leslie, and P. Mertikopoulos, “Bandit Learning in Concave N-Person Games,” in NeurIPS, 2018

Acknowledgement (from the source paper): The authors thank Shuyu Zhan for his assistance with the simulation experiments.