pith. machine review for the scientific record.

arxiv: 2605.10854 · v1 · submitted 2026-05-11 · 🧮 math.NA · cs.NA · math.OC

Recognition: no theorem link

Relaxation via Separable Estimators: Arithmetic and Implementation

Benoît Chachuat, Boris Houska, Mario Eduardo Villanueva, Yanlin Zha

Pith reviewed 2026-05-12 04:04 UTC · model grok-4.3

classification 🧮 math.NA · cs.NA · math.OC
keywords superposition relaxation · separable estimators · McCormick relaxation · factorable functions · global optimization · convergence analysis · artificial neural networks

The pith

Superposition relaxations bracket factorable functions with separable estimators that are tighter than McCormick relaxations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper introduces superposition relaxation as a method to bound the graph of a multivariate factorable function using a pair of separable underestimating and overestimating functions. Propagation rules are given for affine and nonlinear compositions that take advantage of monotonicity and convexity properties. The approach is shown to achieve quadratic local convergence in certain cases and to produce tighter relaxations than the McCormick method in numerical examples, including for artificial neural networks. A sympathetic reader would care because tighter relaxations can lead to more efficient global optimization algorithms by reducing the search space more effectively.
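The central object can be illustrated with a toy example. The sketch below shows the *concept* of a separable bracket only, not the paper's construction: the non-separable bilinear term f(x, y) = x·y is enclosed between two functions that are each a sum of univariate terms, using the elementary inequalities (x ± y)² ≥ 0, which give −(x² + y²)/2 ≤ x·y ≤ (x² + y²)/2.

```python
import itertools

def f(x, y):
    # a simple non-separable target function
    return x * y

def under(x, y):
    # separable underestimator: sum of univariate terms in x and y
    return -0.5 * x**2 - 0.5 * y**2

def over(x, y):
    # separable overestimator
    return 0.5 * x**2 + 0.5 * y**2

# verify the bracket under(x,y) <= f(x,y) <= over(x,y) on a grid over [-1, 1]^2
grid = [i / 10 for i in range(-10, 11)]
assert all(under(x, y) <= f(x, y) <= over(x, y)
           for x, y in itertools.product(grid, grid))
```

These particular bounds are loose; the paper's arithmetic is precisely about constructing much tighter separable summands by propagation through the factorable representation.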

Core claim

The paper establishes an arithmetic for superposition relaxations that constructs separable estimators for factorable functions on compact domains. It proves local convergence properties in pointwise and Hausdorff senses, with conditions for quadratic pointwise convergence to propagate through compositions. Numerical case studies demonstrate that these relaxations are consistently tighter than McCormick relaxations for various functions and for artificial neural networks, although they require more computation.
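The notion of quadratic pointwise convergence can be made concrete with a hedged numerical illustration on a scheme unrelated to the paper's arithmetic: for a convex univariate function, the gap between a secant overestimator and the function shrinks with the square of the domain width, so halving the width should shrink the gap by roughly a factor of four.

```python
def secant_gap(f, a, b, n=1001):
    """Maximum gap between the secant of f over [a, b] and f itself (by sampling)."""
    gap = 0.0
    for j in range(n):
        x = a + (b - a) * j / (n - 1)
        secant = f(a) + (f(b) - f(a)) * (x - a) / (b - a)
        gap = max(gap, secant - f(x))
    return gap

f = lambda x: x * x
g1 = secant_gap(f, -0.5, 0.5)    # domain width 1
g2 = secant_gap(f, -0.25, 0.25)  # domain width 1/2
# a gap ratio near 4 indicates second-order (quadratic) shrinkage
assert 3.5 < g1 / g2 < 4.5
```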

What carries the argument

Superposition relaxation arithmetic, which generates separable under- and over-estimators via composition propagation rules exploiting monotonicity and convexity.
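A minimal sketch of the flavour of one such propagation rule (affine composition), assuming a separable function is stored as a list of univariate summands plus a constant. The class and rule below are illustrative inventions, not the paper's implementation; the nonlinear rules, which must re-separate g(Σᵢ hᵢ(xᵢ)) using monotonicity and convexity, are the hard part and are not reproduced here.

```python
import math

class Separable:
    """Represents L(x) = const + sum_i terms[i](x_i)."""
    def __init__(self, terms, const=0.0):
        self.terms, self.const = terms, const
    def __call__(self, x):
        return self.const + sum(t(xi) for t, xi in zip(self.terms, x))

def affine_rule(a, b, lower, upper):
    """Bracket a*h(x) + b given a separable bracket lower <= h <= upper.
    A negative coefficient swaps the roles of the two bounds."""
    lo, hi = (lower, upper) if a >= 0 else (upper, lower)
    new_lo = Separable([lambda xi, t=t: a * t(xi) for t in lo.terms],
                       a * lo.const + b)
    new_hi = Separable([lambda xi, t=t: a * t(xi) for t in hi.terms],
                       a * hi.const + b)
    return new_lo, new_hi

# example: h(x) = exp(x1) + x2^2 is itself separable, so it brackets itself;
# propagate through -2*h + 1 and check the bracket still encloses the image
h = Separable([math.exp, lambda x: x * x])
lo, hi = affine_rule(-2.0, 1.0, h, h)
x = (0.3, -0.5)
assert lo(x) <= -2.0 * h(x) + 1.0 <= hi(x)
```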

If this is right

  • Tighter bounds improve the performance of branch-and-bound algorithms in global optimization.
  • Relaxations of artificial neural networks can be made tighter than those obtained with the McCormick method.
  • Quadratic pointwise convergence propagates through compositions when conditions on monotonicity and convexity are met.
  • Practical implementations use piecewise-constant or continuous piecewise-linear univariate summands.
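The piecewise-constant parameterization in the last bullet can be sketched as follows, assuming each univariate summand is bounded below and above by its infimum and supremum on each cell of a uniform partition (approximated here by dense sampling, since this is only an illustration). Refining the partition tightens the enclosure, which is the mechanism behind the convergence claims.

```python
def pwc_bounds(g, a, b, n_pieces, n_samples=200):
    """Piecewise-constant lower/upper bounds of g on [a, b], one value per cell."""
    w = (b - a) / n_pieces
    lows, highs = [], []
    for k in range(n_pieces):
        xs = [a + k * w + w * j / (n_samples - 1) for j in range(n_samples)]
        vals = [g(x) for x in xs]
        lows.append(min(vals))
        highs.append(max(vals))
    return lows, highs

def max_gap(g, a, b, n_pieces):
    """Worst-case width of the piecewise-constant enclosure."""
    lows, highs = pwc_bounds(g, a, b, n_pieces)
    return max(h - l for l, h in zip(lows, highs))

g = lambda x: x * x
gaps = [max_gap(g, -1.0, 1.0, n) for n in (2, 4, 8, 16)]
# the enclosure width shrinks monotonically as the partition refines
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
```

Continuous piecewise-linear summands, the paper's other parameterization, follow the same pattern but interpolate between cell values rather than holding them constant.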

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Existing global optimization solvers could adopt these relaxations to reduce the number of nodes explored in nonconvex problems.
  • The separability of the estimators may enable parallel evaluation of the univariate components for high-dimensional functions.
  • The arithmetic could be extended to other classes of functions or combined with different relaxation techniques for hybrid bounds.

Load-bearing premise

The propagation rules for affine and nonlinear compositions correctly exploit global monotonicity and convexity properties of the factorable functions.

What would settle it

A specific factorable function and domain where the superposition relaxation produces a wider bounding interval than the McCormick relaxation, or where quadratic pointwise convergence fails to hold through a composition despite the stated conditions.

read the original abstract

This article presents an arithmetic, called superposition relaxation, for bracketing the graph of a multivariate factorable function on a compact domain between a pair of underestimating and overestimating functions that are both separable. Propagation rules are established for affine and nonlinear composition operations, with a focus on exploiting global monotonicity and convexity properties in the composition. The local convergence properties of this arithmetic are also analyzed in both the pointwise and Hausdorff sense, including conditions under which quadratic pointwise convergence propagates through composition. Parameterizations of the univariate summands in a superposition relaxation either as piecewise-constant or continuous piecewise-linear functions are discussed for a practical implementation. It is shown through numerical case studies that superposition relaxations can be consistently tighter than McCormick relaxations, including for the relaxation of artificial neural networks. But superposition relaxations also incur a higher computational cost than McCormick relaxations. Further investigations are thus warranted as applications in global optimization seek to balance a relaxation's tightness with its computational cost.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces an arithmetic termed superposition relaxation for bracketing the graph of a multivariate factorable function on a compact domain by a pair of separable under- and over-estimating functions. Propagation rules are derived for affine and nonlinear compositions that exploit global monotonicity and convexity properties of the factors. Local convergence is analyzed in both the pointwise and Hausdorff senses, including conditions for quadratic pointwise convergence to propagate through composition. Practical parameterizations of the univariate summands as piecewise-constant or continuous piecewise-linear functions are presented, and numerical case studies are used to show that the resulting relaxations are consistently tighter than McCormick relaxations for factorable functions and artificial neural networks, at the expense of higher computational cost.

Significance. If the propagation rules and numerical evidence hold, the work supplies a concrete alternative to McCormick envelopes that can produce tighter separable relaxations for global optimization, with direct relevance to neural-network relaxations. The explicit local convergence analysis (pointwise and Hausdorff) and the discussion of implementable piecewise parameterizations constitute genuine strengths; the paper also correctly flags the computational trade-off, which is essential for assessing practical utility.

major comments (2)
  1. [Numerical case studies section] The central empirical claim (tighter bounds than McCormick relaxations, including for ANNs) rests on numerical case studies whose construction details—specific test functions, choice of piecewise breakpoints, and how the superposition is assembled for the network layers—are not fully specified. Without these, it is difficult to judge whether the observed tightness generalizes or depends on favorable choices of the univariate estimators.
  2. [Propagation rules for nonlinear compositions] The propagation rules for nonlinear compositions are stated to exploit global monotonicity and convexity on compact domains, yet the manuscript does not provide an explicit verification (e.g., a short proof or counter-example check) that these rules preserve valid bracketing when the outer function is neither monotone nor convex. This step is load-bearing for the arithmetic’s correctness.
minor comments (2)
  1. [Abstract] The abstract mentions “conditions under which quadratic pointwise convergence propagates through composition” but does not reference the corresponding theorem or proposition number; adding the citation would improve readability.
  2. [Parameterizations and implementation] The implementation discussion would benefit from a brief complexity statement (e.g., number of univariate pieces versus McCormick cost) or pseudocode for the propagation step, even if only in an appendix.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the positive evaluation and recommendation for minor revision. The comments highlight opportunities to strengthen reproducibility and rigor, which we address below. We will incorporate the suggested clarifications into the revised manuscript.

read point-by-point responses
  1. Referee: The numerical case studies lack full specification of test functions, piecewise breakpoints, and superposition assembly for networks, making it hard to assess generalizability of the tightness claims.

    Authors: We agree that additional implementation details will improve reproducibility. In the revised manuscript we will expand the numerical case studies section with a new subsection that explicitly lists the multivariate test functions, the ANN architectures considered, the breakpoint selection strategy (including uniform grids and any adaptive criteria), and the precise layer-wise assembly procedure for the separable estimators. These additions will allow independent verification of the reported tightness relative to McCormick relaxations without altering the existing numerical results. revision: yes

  2. Referee: The propagation rules for nonlinear compositions lack explicit verification that they preserve valid bracketing when the outer function is neither monotone nor convex.

    Authors: The derivation of the nonlinear propagation rules begins from the definition of separable under- and over-estimators and applies the monotonicity/convexity properties only when they are globally available on the compact domain; in the absence of these properties the rules fall back to standard interval bounds that are known to be valid. To make this explicit we will insert a short lemma in the revised section on nonlinear compositions that proves bracketing preservation in the general case (including when the outer function is neither monotone nor convex) by direct appeal to the separable estimator definition and the univariate relaxation properties. This addition addresses the load-bearing correctness concern without changing the stated rules. revision: yes

Circularity Check

0 steps flagged

No significant circularity; derivation self-contained via explicit rules

full rationale

The paper constructs superposition relaxations by defining propagation rules for affine and nonlinear compositions that directly exploit stated global monotonicity and convexity properties on compact domains. Local convergence (pointwise and Hausdorff) is analyzed from these definitions, including conditions for quadratic convergence propagation. Tightness relative to McCormick relaxations is shown only through numerical case studies on factorable functions and ANNs, without any reduction of the central arithmetic to fitted parameters, self-referential definitions, or load-bearing self-citations. The higher computational cost is explicitly noted, and no step equates a claimed result to its inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The approach rests on the assumption that functions are factorable and possess exploitable monotonicity or convexity properties, with the separable estimator concept introduced as the primary new element without external validation.

axioms (1)
  • domain assumption The functions are factorable on a compact domain.
    Required for the arithmetic and propagation rules to apply as stated.
invented entities (1)
  • superposition relaxation no independent evidence
    purpose: To bracket the graph of a multivariate factorable function between separable under- and over-estimating functions.
    This is the central new concept defined and analyzed in the paper.

pith-pipeline@v0.9.0 · 5477 in / 1228 out tokens · 29465 ms · 2026-05-12T04:04:02.495943+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

54 extracted references · 54 canonical work pages

  1. [1] Tawarmalani, M., Sahinidis, N.V.: A polyhedral branch-and-cut approach to global optimization. Mathematical Programming 103, 225–249 (2005). https://doi.org/10.1007/s10107-005-0581-8

  2. [2] Vigerske, S., Gleixner, A.: SCIP: Global optimization of mixed-integer nonlinear programs in a branch-and-cut framework. Optimization Methods & Software 33(3), 563–593 (2017). https://doi.org/10.1080/10556788.2017.1335312

  3. [3] Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2025). https://www.gurobi.com

  4. [4] Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches. Springer, Berlin, Heidelberg (1996). https://doi.org/10.1007/978-3-662-03199-5

  5. [5] Neumaier, A.: Complete search in continuous global optimization and constraint satisfaction. Acta Numerica 13, 271–369 (2004). https://doi.org/10.1017/S0962492904000194

  6. [6] McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: Part I – Convex underestimating problems. Mathematical Programming 10, 147–175 (1976). https://doi.org/10.1007/BF01580665

  7. [7] Tawarmalani, M., Sahinidis, N.V.: Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming: Theory, Algorithms, Software, and Applications. Springer, Boston, MA (2002). https://doi.org/10.1007/978-1-4757-3532-1

  8. [8] Du, K.S., Kearfott, R.B.: The cluster problem in multivariate global optimization. Journal of Global Optimization 5(3), 253–265 (1994). https://doi.org/10.1007/BF01096455

  9. [9] Wechsung, A., Schaber, S.D., Barton, P.I.: The cluster problem revisited. Journal of Global Optimization 58(3), 429–438 (2014). https://doi.org/10.1007/s10898-013-0059-9

  10. [10] Schweidtmann, A.M., Mitsos, A.: Deterministic global optimization with artificial neural networks embedded. Journal of Optimization Theory & Applications 180, 925–948 (2019). https://doi.org/10.1007/s10957-018-1396-0

  11. [11] Belotti, P., Lee, J., Liberti, L., Margot, F., Wächter, A.: Branching and bounds tightening techniques for non-convex MINLP. Optimization Methods & Software 24(4-5), 597–634 (2009). https://doi.org/10.1080/10556780903087124

  12. [12] Gleixner, A.M., Berthold, T., Müller, B., Weltge, S.: Three enhancements for optimization-based bound tightening. Journal of Global Optimization 67, 731–757 (2017). https://doi.org/10.1007/s10898-016-0450-4

  13. [13] Fourer, R., Maheshwari, C., Neumaier, A., Orban, D., Schichl, H.: Convexity and concavity detection in computational graphs: Tree walks for convexity assessment. INFORMS Journal on Computing 22(1), 26–43 (2010). https://doi.org/10.1287/ijoc.1090.0321

  14. [14] Misener, R., Floudas, C.A.: Global optimization of mixed-integer quadratically-constrained quadratic programs (MIQCQP) through piecewise-linear and edge-concave relaxations. Mathematical Programming 136, 155–182 (2012). https://doi.org/10.1007/s10107-012-0555-6

  15. [15] Rikun, A.D.: A convex envelope formula for multilinear functions. Journal of Global Optimization 10(4), 425–437 (1997). https://doi.org/10.1023/A:1008217604285

  16. [16] Sherali, H.D., Dalkiran, E., Liberti, L.: Reduced RLT representations for nonconvex polynomial programming problems. Journal of Global Optimization 52, 447–469 (2012). https://doi.org/10.1007/s10898-011-9757-3

  17. [17] Zorn, K., Sahinidis, N.V.: Global optimization of general nonconvex problems with intermediate polynomial substructures. Journal of Global Optimization 59(2-3), 673–693 (2014). https://doi.org/10.1007/s10898-014-0190-2

  18. [18] Bao, X., Khajavirad, A., Sahinidis, N.V., Tawarmalani, M.: Global optimization of nonconvex problems with multilinear intermediates. Mathematical Programming Computation 7(1), 1–37 (2015). https://doi.org/10.1007/s12532-014-0073-z

  19. [19] Sherali, H.D., Adams, W.P.: RLT Hierarchy for General Discrete Mixed-Integer Problems, pp. 103–129. Springer, Boston, MA (1999). https://doi.org/10.1007/978-1-4757-4388-3_4

  20. [20] Lasserre, J.B.: Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization 11(3), 796–817 (2001). https://doi.org/10.1137/S1052623400366802

  21. [21] Parrilo, P.: Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming 96, 293–320 (2003). https://doi.org/10.1007/s10107-003-0387-5

  22. [22] Ahmadi, A.A., Majumdar, A.: DSOS and SDSOS optimization: More tractable alternatives to sum of squares and semidefinite optimization. SIAM Journal on Applied Algebra and Geometry 3(2), 193–230 (2019). https://doi.org/10.1137/18M118935X

  23. [23] Epperly, T.G.W., Pistikopoulos, E.N.: A reduced space branch and bound algorithm for global optimization. Journal of Global Optimization 11, 287–311 (1997). https://doi.org/10.1023/A:1008212418949

  24. [24] Mitsos, A., Chachuat, B., Barton, P.I.: McCormick-based relaxations of algorithms. SIAM Journal on Optimization 20(2), 573–601 (2009). https://doi.org/10.1137/080717341

  25. [25] Kearfott, R.B., Castille, J., Tyagi, G.: A general framework for convexity analysis in deterministic global optimization. Journal of Global Optimization 56(3), 765–785 (2013). https://doi.org/10.1007/s10898-012-990

  26. [26] Bongartz, D., Mitsos, A.: Deterministic global flowsheet optimization: Between equation-oriented and sequential-modular methods. AIChE Journal 65(3), 1022–1034 (2019). https://doi.org/10.1002/aic.16507

  27. [27] Burre, J., Bongartz, D., Mitsos, A.: Comparison of MINLP formulations for global superstructure optimization. Optimization & Engineering 24, 801–830 (2023). https://doi.org/10.1007/s11081-021-09707-y

  28. [28] Singer, A.B., Barton, P.I.: Bounding the solutions of parameter dependent nonlinear ordinary differential equations. SIAM Journal on Scientific Computing 27(6), 2167–2182 (2006). https://doi.org/10.1137/040604388

  29. [29] Stuber, M.D., Scott, J.K., Barton, P.I.: Convex and concave relaxations of implicit functions. Optimization Methods & Software 30(3), 424–460 (2015). https://doi.org/10.1080/10556788.2014.924514

  30. [30] Scott, J.K., Barton, P.I.: Reachability Analysis and Deterministic Global Optimization of DAE Models. In: Ilchmann, A., Reis, T. (eds.) Surveys in Differential-Algebraic Equations III, pp. 61–116. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22428-2_2

  31. [31] Villanueva, M.E., Houska, B., Chachuat, B.: Unified framework for the propagation of continuous-time enclosures for parametric nonlinear ODEs. Journal of Global Optimization 62(3), 575–613 (2015). https://doi.org/10.1007/s10898-014-0235-6

  32. [32] Bongartz, D., Najman, J., Sass, S., Mitsos, A.: MAiNGO – McCormick-based Algorithm for mixed-integer Nonlinear Global Optimization, Technical Report (2018). http://permalink.avt.rwth-aachen.de/?id=729717

  33. [33] Wilhelm, M.E., Stuber, M.D.: EAGO.jl: easy advanced global optimization in Julia. Optimization Methods & Software 37(2), 425–450 (2022). https://doi.org/10.1080/10556788.2020.1786566

  34. [34] Bompadre, A., Mitsos, A.: Convergence rate of McCormick relaxations. Journal of Global Optimization 52(1), 1–28 (2012). https://doi.org/10.1007/s10898-011-9685-2

  35. [35] Ratschek, H., Rokne, J.: Computer Methods for the Range of Functions. Series in Mathematics and Its Applications. Ellis Horwood Ltd, Chichester, UK (1984). https://doi.org/10.1002/zamm.19850650904

  36. [36] Berz, M.: From Taylor series to Taylor models. AIP Conference Proceedings 405(1), 1–23 (1997). https://doi.org/10.1063/1.53493

  37. [37] Bompadre, A., Mitsos, A., Chachuat, B.: Convergence analysis of Taylor and McCormick-Taylor models. Journal of Global Optimization 57(1), 75–114 (2013). https://doi.org/10.1007/s10898-012-9998-9

  38. [38] Rajyaguru, J., Villanueva, M.E., Houska, B., Chachuat, B.: Chebyshev model arithmetic for factorable functions. Journal of Global Optimization 68, 413–438 (2017). https://doi.org/10.1007/s10898-016-0474-9

  39. [39] Zha, Y., Villanueva, M.E., Houska, B.: Interval superposition arithmetic (2018). https://arxiv.org/abs/1610.05862

  40. [40] Su, J., Zha, Y., Wang, K., Villanueva, M.E., Paulen, R., Houska, B.: Interval superposition arithmetic for guaranteed parameter estimation. IFAC-PapersOnLine 52(1), 574–579 (2019). https://doi.org/10.1016/j.ifacol.2019.06.124

  41. [41] Logan, B.F., Shepp, L.A.: Optimal reconstruction of a function from its projections. Duke Mathematical Journal 42(4), 645–659 (1975). https://doi.org/10.1215/S0012-7094-75-04256-8

  42. [42] Pinkus, A.: Ridge Functions. Cambridge University Press, Cambridge, UK (2015). https://doi.org/10.1017/CBO9781316408124

  43. [43] Najman, J., Bongartz, D., Mitsos, A.: Convex relaxations of componentwise convex functions. Computers & Chemical Engineering 130, 106527 (2019). https://doi.org/10.1016/j.compchemeng.2019.106527

  44. [44] Chachuat, B., Omega Research Group: MC++ (version 4.0) – Toolkit for Construction, Manipulation and Bounding of Factorable Functions (2024). https://github.com/omega-icl/mcpp

  45. [45] Moore, R.E.: Methods and Applications of Interval Analysis. SIAM, Philadelphia, PA (1979). https://doi.org/10.1137/1.9781611970906

  46. [46] Hiriart-Urruty, J.-B., Lemaréchal, C.: Fundamentals of Convex Analysis. Springer, Berlin, Germany (2011). https://doi.org/10.1007/978-3-642-56468-0

  47. [47] Hartman, P.: On functions representable as a difference of convex functions. Pacific Journal of Mathematics 9, 707–713 (1959). https://doi.org/10.2140/pjm.1959.9.707

  48. [48] Horst, R., Thoai, N.V.: DC programming: Overview. Journal of Optimization Theory & Applications 103, 1–43 (1999). https://doi.org/10.1023/A:1021765131316

  49. [49] Alefeld, G., Mayer, G.: Interval analysis: Theory and applications. Journal of Computational & Applied Mathematics 121, 421–464 (2000). https://doi.org/10.1016/S0377-0427(00)00342-3

  50. [50] Trefethen, L.N.: Computing numerically with functions instead of numbers. Mathematics in Computer Science 1, 9–19 (2007). https://doi.org/10.1007/s11786-007-0001-y

  51. [51] Driscoll, T.A., Hale, N., Trefethen, L.N.: Chebfun Guide (2014). http://www.chebfun.org/docs/guide/

  52. [52] Rote, G.: The convergence rate of the sandwich algorithm for approximating convex functions. Computing 48, 337–361 (1992). https://doi.org/10.1007/BF02238642

  53. [53] Tsoukalas, A., Mitsos, A.: Multivariate McCormick relaxations. Journal of Global Optimization 59(2), 633–662 (2014). https://doi.org/10.1007/s10898-014-0176-0

  54. [54] Naser, M.Z., Al-Bashiti, M.K., Tapeh, A.T.G., Naser, A., Kodur, V., Hawileh, R., Abdalla, J., Khodadadi, N., Gandomi, A.H., Eslamlou, A.D.: A review of benchmark and test functions for global optimization algorithms and metaheuristics. WIREs Computational Statistics 17(2), 70028 (2025). https://doi.org/10.1002/wics.70028