pith. machine review for the scientific record.

arxiv: 2605.04232 · v1 · submitted 2026-05-05 · 💻 cs.LO · cs.NA · cs.PL · math.NA

Recognition: unknown

Probabilistic Floating-Point Round-Off Analysis via Concentration Inequalities

Hongfei Fu, Jean-Baptiste Jeannin, Jiawei Chen, Yichen Tao

Pith reviewed 2026-05-08 17:37 UTC · model grok-4.3

classification 💻 cs.LO · cs.NA · cs.PL · math.NA
keywords floating-point arithmetic · round-off error · probabilistic analysis · concentration inequalities · Taylor expansion · program verification · numerical stability

The pith

Floating-point round-off thresholds can be derived probabilistically with orders-of-magnitude speedups by applying concentration inequalities to over-approximated Taylor expansions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes a scalable technique for computing probabilistic bounds on floating-point round-off errors that appear in numerical programs. It starts from the Taylor expansion produced by prior tools and removes absolute-value operators via a sound over-approximation, then converts fractional expressions into polynomials so that concentration inequalities can be applied. Because the remaining computation reduces to expectations of polynomials over independent variables, linearity of expectation makes the calculation efficient. The resulting thresholds are comparable in tightness to earlier methods yet far faster to obtain, which matters for programs where deterministic bounds are impractically loose.

Core claim

A sound over-approximation eliminates absolute-value operators inside the polynomial terms of the Taylor expansion, fractional expressions are reduced to the polynomial case, and concentration inequalities are applied to the first-order partial terms; linearity of expectation over independent variables then makes the required bounds cheap to compute, yielding probabilistic round-off thresholds that scale to larger programs.

What carries the argument

The over-approximated Taylor expansion of the round-off error together with concentration inequalities, whose key computational step is the evaluation of expected values of polynomials with independent variables.
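In sketch form, the machinery chains two steps. The notation below (f̂, e_k, u, p, q) is standard FPTaylor-style shorthand reconstructed from the abstract, not quoted from the paper:

```latex
% Step 1: first-order symbolic Taylor form of the round-off error, with one
% relative-error variable e_k per floating-point operation, |e_k| <= u.
\[
  \mathrm{err}(x, e) \;\approx\; \sum_{k} \frac{\partial \hat f}{\partial e_k}(x, 0)\, e_k,
  \qquad |e_k| \le u.
\]
% Step 2: a sound over-approximation q >= |p| (pointwise) turns the tail of
% the absolute-value expression into the tail of a polynomial,
\[
  \Pr\bigl[\,|p(X, E)| > t\,\bigr] \;\le\; \Pr\bigl[\,q(X, E) > t\,\bigr],
\]
% and the right-hand side is bounded by a concentration inequality whose
% inputs are moments E[q^k] of a polynomial in independent variables.
```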

If this is right

  • Probabilistic thresholds become practical for programs whose deterministic bounds are too large to be useful.
  • The dominant cost reduces to cheap expectation calculations, so the method scales beyond what exhaustive or Monte-Carlo approaches can handle.
  • Range partitioning can be layered on top to tighten the bounds without changing the overall algorithmic structure.
  • The same pipeline applies to any error expression that can be written as a Taylor series with independent error terms.
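The second bullet's claim that the dominant cost is cheap expectation calculation rests on one identity: for independent variables, the expectation of a monomial factors into per-variable moments. A minimal sketch (ours, not the paper's implementation), assuming each error term is uniformly distributed on [-u, u]:

```python
# Sketch of why expectations of polynomials over independent variables are
# cheap. A polynomial is a list of monomials (coeff, {var: power}); by
# linearity, E[sum] = sum of E[monomial], and by independence,
# E[prod x_i^a_i] = prod E[x_i^a_i].

def uniform_moment(u, k):
    """k-th raw moment of a variable uniform on [-u, u]."""
    return u ** k / (k + 1) if k % 2 == 0 else 0.0

def expect_polynomial(monomials, u):
    """E[poly] for independent errors, each uniform on [-u, u].

    monomials: iterable of (coeff, {var_name: power}).
    """
    total = 0.0
    for coeff, powers in monomials:
        term = coeff
        for _, k in powers.items():
            term *= uniform_moment(u, k)
        total += term
    return total

# Example: E[3*e1^2*e2^2 + 5*e3] with u = 0.5.
# By hand: 3 * (u^2/3) * (u^2/3) + 0 = u^4/3.
u = 0.5
poly = [(3.0, {"e1": 2, "e2": 2}), (5.0, {"e3": 1})]
print(expect_polynomial(poly, u))
```

Each monomial costs one moment lookup per variable, so the total work is linear in the size of the polynomial, independent of the sample count a Monte-Carlo estimate would need.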

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The approach could be combined with static range analysis to automate the choice of partitions.
  • Similar over-approximations might allow concentration inequalities to be used in other numerical verification settings that currently rely on absolute values.
  • If the independence assumption between error terms is relaxed, the method would require a different concentration inequality or a more expensive joint-moment computation.

Load-bearing premise

The over-approximation that removes absolute-value operators from the polynomial expressions remains sound after the transformation of fractional expressions into polynomials.

What would settle it

A concrete program and input distribution for which the method returns a probabilistic threshold that is violated more often than the stated failure probability.
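Such a counterexample search can be mechanized. The sketch below is ours; the program x*y + z, the unit round-off U, and the thresholds are illustrative placeholders, not values from the paper. It samples the standard relative-error model and measures the empirical violation rate:

```python
import random

U = 2 ** -24   # single-precision unit round-off (illustrative choice)

def roundoff_error(x, y, z, e1, e2):
    """Error of computing x*y + z when each operation picks up a
    relative rounding error e_k with |e_k| <= U."""
    exact = x * y + z
    rounded = (x * y) * (1 + e1)
    rounded = (rounded + z) * (1 + e2)
    return abs(rounded - exact)

def violation_rate(t, n=50_000, seed=0):
    """Empirical P(error > t) under uniformly distributed rounding errors."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
        e1, e2 = rng.uniform(-U, U), rng.uniform(-U, U)
        if roundoff_error(x, y, z, e1, e2) > t:
            hits += 1
    return hits / n

# The method would be refuted by a case where violation_rate(t) clearly
# exceeds the stated failure probability for a reported threshold t.
print(violation_rate(t=2 * U))
```

For inputs in [-1, 1] the worst-case error here is below 3U + U², so any threshold at or above that level should never be violated; a probabilistic threshold trades that ceiling for a smaller t with a bounded violation rate.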

read the original abstract

Floating-point round-off errors are ubiquitous in numerically intensive programs arising in fields such as scientific computing and optimization. As floating-point errors potentially lead to unexpected and catastrophic program failures, one must derive guaranteed round-off thresholds to ensure the correctness of these programs. However, deterministic round-off thresholds tend to be too conservative to be usable in practice, since they often involve large round-off errors that occur with small probability. Probabilistic thresholds relax deterministic ones by specifying that the probability of the round-off error exceeding a threshold is below a given confidence. In this work, we propose a novel approach to probabilistic round-off analysis, by applying concentration inequalities over the Taylor expansion from FPTaylor (TOPLAS 2018). A major obstacle in applying concentration inequalities is that the Taylor expansion involves absolute value operators that make the calculation of the expected values of the first order partial differential terms difficult. Our first step to overcome this obstacle is a sound over-approximation that removes the absolute value operators in polynomial expressions. Then, we show how to handle fractional expressions by a transformation into polynomial case. Finally, we show how to improve our approach with range partitioning. Our approach is scalable since the key computational part is the calculation of expected values of polynomial expressions with independent variables, for which the linear and independence properties of expectation boost the computation. Experimental results show that our approach is orders of magnitude more time efficient, while producing thresholds with comparable precision against the state of the art.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims a scalable probabilistic method for bounding floating-point round-off error. It starts from a Taylor expansion of the error produced by FPTaylor, replaces every absolute-value sub-expression by a sound polynomial over-approximation, converts any remaining fractional forms into polynomials, and optionally applies range partitioning. Concentration inequalities are then applied to the resulting polynomial; the linearity and independence properties of expectation are invoked to evaluate the required moments efficiently. Experiments are reported to show orders-of-magnitude speed-ups over prior art while producing thresholds of comparable precision.

Significance. If the two transformations are shown to be sound, the work supplies a practical route to probabilistic (rather than deterministic) round-off thresholds that are far less conservative yet still formally justified. The algorithmic core—reducing the problem to expectation calculations over independent variables—is clean and exploits standard properties of expectation. The experimental comparison already supplies concrete timing and precision data that can be used to judge practicality.

major comments (2)
  1. [§3.2] §3.2 (absolute-value over-approximation): The manuscript asserts that replacing |p(x)| by a polynomial upper bound preserves soundness for the subsequent concentration inequality, yet supplies only a high-level argument without an explicit proof that the tail probability is not increased. Because this step is required before the concentration bound can be applied to the original round-off error, the absence of a detailed derivation or a concrete counter-example check is load-bearing for the central probabilistic claim.
  2. [§3.3] §3.3 (fractional-to-polynomial transformation): The transformation that converts fractional expressions into polynomials is stated to be sound, but the manuscript does not show how the moments of the transformed polynomial relate to those of the original fractional expression. Without this relation, it is unclear whether the concentration inequality still yields a valid upper bound on the probability that the true round-off error exceeds the reported threshold.
minor comments (2)
  1. [Experimental evaluation] The experimental tables would benefit from explicit reporting of standard deviations or confidence intervals on both running times and threshold values, so that the claim of “comparable precision” can be assessed quantitatively.
  2. Notation for the transformed polynomial after absolute-value removal should be introduced once and used consistently; occasional reuse of the original variable names makes it harder to track which expression is being bounded.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful and constructive review. The two major comments correctly identify places where the soundness arguments in the manuscript are presented at a high level. We will revise the paper to supply the requested detailed derivations and lemmas. Our point-by-point responses follow.

read point-by-point responses
  1. Referee: [§3.2] §3.2 (absolute-value over-approximation): The manuscript asserts that replacing |p(x)| by a polynomial upper bound preserves soundness for the subsequent concentration inequality, yet supplies only a high-level argument without an explicit proof that the tail probability is not increased. Because this step is required before the concentration bound can be applied to the original round-off error, the absence of a detailed derivation or a concrete counter-example check is load-bearing for the central probabilistic claim.

    Authors: We agree that an explicit proof is needed. In the revised manuscript we will add a lemma in §3.2 showing that if q(x) is a polynomial satisfying q(x) ≥ |p(x)| for all x in the domain, then for any threshold t the tail probability satisfies P(|p(X)| > t) ≤ P(q(X) > t). Because the concentration inequality is applied to the (larger) random variable q(X), any upper bound it returns on P(q(X) > t) is automatically a valid upper bound on the original tail probability. The proof follows directly from the monotonicity of probability and the fact that the event {|p(X)| > t} is contained in {q(X) > t}. We will also include a small numerical example verifying the inequality on a simple polynomial. revision: yes
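    The containment argument is easy to spot-check numerically. In this sketch (ours, not the authors' promised lemma), p and q are toy choices with q(x) ≥ |p(x)| on [-1, 1]:

```python
import random

def p(x):
    return x ** 3 - x            # toy polynomial with an absolute value to remove

def q(x):
    return x ** 2 + 1            # coarse polynomial bound: q(x) >= |p(x)| on [-1, 1]

rng = random.Random(42)
xs = [rng.uniform(-1.0, 1.0) for _ in range(50_000)]

# The domination must hold pointwise for the tail comparison to be sound.
assert all(q(x) >= abs(p(x)) for x in xs)

# Event containment: {|p(X)| > t} is a subset of {q(X) > t}, so the
# empirical tail of |p| never exceeds that of q, at any threshold t.
for t in (0.1, 0.3, 0.5):
    tail_p = sum(abs(p(x)) > t for x in xs) / len(xs)
    tail_q = sum(q(x) > t for x in xs) / len(xs)
    assert tail_p <= tail_q
    print(f"t={t}: P(|p|>t) ~ {tail_p:.3f} <= P(q>t) ~ {tail_q:.3f}")
```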

  2. Referee: [§3.3] §3.3 (fractional-to-polynomial transformation): The transformation that converts fractional expressions into polynomials is stated to be sound, but the manuscript does not show how the moments of the transformed polynomial relate to those of the original fractional expression. Without this relation, it is unclear whether the concentration inequality still yields a valid upper bound on the probability that the true round-off error exceeds the reported threshold.

    Authors: We will expand §3.3 with a lemma that relates the moments. The transformation replaces a fractional expression f/g by a polynomial p such that |f/g| ≤ p almost surely on the domain (via a sound polynomial over-approximation of the denominator away from zero). Consequently, the absolute moments satisfy E[|f/g|^k] ≤ E[p^k] for every k. Because the concentration inequalities we employ (e.g., Bernstein or Chernoff) are monotone in the moments, any tail bound derived from the moments of p is a valid (conservative) tail bound for the original fractional expression. The revised text will state this monotonicity argument explicitly and show how the expectation and variance calculations remain efficient after the transformation. revision: yes
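The moment-monotonicity step can likewise be illustrated. In the sketch below (our reconstruction; f/g = x/(1+x) and the bound p(x) = 2|x| are toy choices, not from the paper), the sample moments of the dominating polynomial upper-bound those of the fraction, and a Markov-type tail bound built from them remains valid for the fraction:

```python
import random

rng = random.Random(7)
xs = [rng.uniform(-0.5, 0.5) for _ in range(50_000)]

frac = lambda x: x / (1.0 + x)   # fractional expression; denominator stays >= 0.5
poly = lambda x: 2.0 * abs(x)    # |x/(1+x)| <= 2|x| on [-0.5, 0.5]

# Pointwise domination gives moment monotonicity: E[|f/g|^k] <= E[p^k].
assert all(abs(frac(x)) <= poly(x) for x in xs)

k, t = 2, 0.8
m_frac = sum(abs(frac(x)) ** k for x in xs) / len(xs)
m_poly = sum(poly(x) ** k for x in xs) / len(xs)
assert m_frac <= m_poly

# A Markov-type bound from the dominating moment stays valid for the
# fraction: P(|f/g| > t) <= E[p^k] / t^k.
markov_bound = m_poly / t ** k
empirical = sum(abs(frac(x)) > t for x in xs) / len(xs)
assert empirical <= markov_bound
print(f"E[p^k]={m_poly:.4f}  bound={markov_bound:.4f}  empirical={empirical:.4f}")
```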

Circularity Check

0 steps flagged

No circularity; derivation applies external FPTaylor + standard concentration inequalities via independent over-approximations

full rationale

The claimed derivation starts from an external Taylor expansion (FPTaylor, TOPLAS 2018) and feeds it into off-the-shelf concentration inequalities. The two new transformations—an over-approximation that removes absolute-value operators and a conversion of fractional expressions into polynomials—are introduced as sound, independent steps whose validity is argued directly from the paper's own definitions rather than from the final probabilistic threshold. Expectation linearity and independence are used only after these transformations; they do not presuppose the target bound. No self-citation is load-bearing, no parameter is fitted and then renamed as a prediction, and no uniqueness theorem is imported from the authors' prior work. The method therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 3 axioms · 0 invented entities

Based solely on the abstract, the central claim rests on the validity of FPTaylor's Taylor expansion, the soundness of the absolute-value over-approximation, and the correctness of the fractional-to-polynomial transformation; no free parameters or invented entities are mentioned.

axioms (3)
  • domain assumption The Taylor expansion produced by FPTaylor (TOPLAS 2018) is a valid starting point for round-off error analysis.
    Explicitly cited as the base for applying concentration inequalities.
  • ad hoc to paper The proposed over-approximation that removes absolute value operators preserves soundness for the subsequent probabilistic bounds.
    Described as the first step to overcome the obstacle of absolute values.
  • ad hoc to paper Fractional expressions can be soundly transformed into polynomial expressions for the purpose of expectation calculation.
    Second technical step stated in the abstract.

pith-pipeline@v0.9.0 · 5571 in / 1609 out tokens · 46411 ms · 2026-05-08T17:37:22.087146+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

38 extracted references · 8 canonical work pages

  1. [1]

    Mark Baranowski and Ian Briggs. [n. d.]. GELPIA: Global Extrema Locator Parallelization for Interval Arithmetic. https://github.com/soarlab/gelpia

  2. [2]

    Joseph K Blitzstein and Jessica Hwang. 2019. Introduction to Probability. Chapman and Hall/CRC

  3. [3]

    Olivier Bouissou, Eric Goubault, Jean Goubault-Larrecq, and Sylvie Putot. 2012. A generalization of p-boxes to affine arithmetic. Computing 94 (2012), 189–201

  4. [4]

    Olivier Bouissou, Eric Goubault, Sylvie Putot, Aleksandar Chakarov, and Sriram Sankaranarayanan. 2016. Uncertainty propagation using probabilistic affine forms and concentration of measure inequalities. In Tools and Algorithms for the Construction and Analysis of Systems: 22nd International Conference, TACAS 2016, Held as Part of the European Joint Confere...

  5. [5]

    Aleksandar Chakarov and Sriram Sankaranarayanan. 2013. Probabilistic Program Analysis with Martingales. In Computer Aided Verification - 25th International Conference, CAV 2013, Saint Petersburg, Russia, July 13-19, 2013. Proceedings (Lecture Notes in Computer Science, Vol. 8044), Natasha Sharygina and Helmut Veith (Eds.). Springer, 511–526. doi:10.1007/...

  6. [6]

    Krishnendu Chatterjee, Hongfei Fu, Petr Novotný, and Rouzbeh Hasheminezhad. 2018. Algorithmic Analysis of Qualitative and Quantitative Termination Problems for Affine Probabilistic Programs. ACM Trans. Program. Lang. Syst. 40, 2 (2018), 7:1–7:45. doi:10.1145/3174800

  7. [7]

    Krishnendu Chatterjee, Amir Kafshdar Goharshady, Tobias Meggendorfer, and Dorde Zikelic. 2024. Quantitative Bounds on Resource Usage of Probabilistic Programs. Proc. ACM Program. Lang. 8, OOPSLA1 (2024), 362–391. doi:10.1145/3649824

  8. [8]

    Krishnendu Chatterjee, Petr Novotný, and Dorde Zikelic. 2017. Stochastic invariants for probabilistic termination. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, Giuseppe Castagna and Andrew D. Gordon (Eds.). ACM, 145–160. doi:10.1145/3009837.3009873

  9. [9]

    Liqian Chen, Antoine Miné, and Patrick Cousot. 2008. A sound floating-point polyhedra abstract domain. In Asian Symposium on Programming Languages and Systems. Springer, 3–18

  10. [10]

    Erhan Çinlar. 2011. Probability and Stochastics. Springer

  11. [11]

    George Constantinides, Fredrik Dahlqvist, Zvonimir Rakamarić, and Rocco Salvia. 2021. Rigorous roundoff error analysis of probabilistic floating-point computations. In International Conference on Computer Aided Verification. Springer, 626–650

  12. [12]

    Patrick Cousot and Radhia Cousot. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. 238–252

  13. [13]

    Nasrine Damouche, Matthieu Martel, Pavel Panchekha, Chen Qiu, Alexander Sanchez-Stern, and Zachary Tatlock. Toward a standard benchmark format and suite for floating-point analysis. In Numerical Software Verification: 9th International Workshop, NSV 2016, Toronto, ON, Canada, July 17-18, 2016, Revised Selected Papers 9. Springer, 63–77

  15. [15]

    Eva Darulova, Anastasiia Izycheva, Fariha Nasir, Fabian Ritter, Heiko Becker, and Robert Bastian. 2018. Daisy - framework for analysis and optimization of numerical programs (tool paper). In Tools and Algorithms for the Construction and Analysis of Systems: 24th International Conference, TACAS 2018, Held as Part of the European Joint Conferences on Theory an...

  16. [16]

    Eva Darulova and Viktor Kuncak. 2014. Sound compilation of reals. In Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. 235–248

  17. [17]

    Marc Daumas and Guillaume Melquiond. 2010. Certification of bounds on expressions involving rounded operators. ACM Transactions on Mathematical Software (TOMS) 37, 1 (2010), 1–20

  18. [18]

    David Delmas, Eric Goubault, Sylvie Putot, Jean Souyris, Karim Tekkal, and Franck Védrine. 2009. Towards an industrial use of FLUCTUAT on safety-critical avionics software. In International Workshop on Formal Methods for Industrial Critical Systems. Springer, 53–69

  19. [19]

    Scott Ferson, Vladik Kreinovich, Lev Ginzburg, Davis S Myers, and Kari Sentz. 2003. Constructing Probability Boxes and Dempster-Shafer Structures. Number 4015. Sandia National Laboratories

  20. [20]

    David Goldberg. 1991. What every computer scientist should know about floating-point arithmetic. ACM Computing Surveys (CSUR) 23, 1 (1991), 5–48

  21. [21]

    William Kahan. 1996. IEEE standard 754 for binary floating-point arithmetic. Lecture Notes on the Status of IEEE 754 (1996), 11

  22. [22]

    Satoshi Kura, Natsuki Urabe, and Ichiro Hasuo. 2019. Tail Probabilities for Randomized Program Runtimes via Martingales for Higher Moments. In Tools and Algorithms for the Construction and Analysis of Systems - 25th International Conference, TACAS 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Prague, C...

  23. [23]

    João L. D. Comba. 1993. Affine arithmetic and its applications to computer graphics. SIBGRAPI '93, Recife, PE (Brazil), October (1993)

  24. [24]

    Wonyeol Lee, Rahul Sharma, and Alex Aiken. 2016. Verifying bit-manipulations of floating-point. In Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. 70–84

  25. [25]

    Michael D Linderman, Matthew Ho, David L Dill, Teresa H Meng, and Garry P Nolan. 2010. Towards program optimization through automated analysis of numerical precision. In Proceedings of the 8th annual IEEE/ACM international symposium on Code generation and optimization. 230–237

  26. [26]

    Debasmita Lohar, Milos Prokop, and Eva Darulova. 2019. Sound probabilistic numerical error analysis. In International Conference on Integrated Formal Methods. Springer, 322–340

  27. [27]

    Victor Magron, George Constantinides, and Alastair Donaldson. 2017. Certified roundoff error bounds using semidefinite programming. ACM Transactions on Mathematical Software (TOMS) 43, 4 (2017), 1–31

  28. [28]

    Matthieu Martel. 2011. RangeLab: a static-analyzer to bound the accuracy of finite-precision computations. In 2011 13th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. IEEE, 118–122

  29. [29]

    Ramon E Moore. 1966. Interval Analysis. Vol. 4. Prentice-Hall, Englewood Cliffs

  30. [30]

    Walter Rudin et al. 1964. Principles of Mathematical Analysis. Vol. 3. McGraw-Hill, New York

  31. [31]

    Alexey Solovyev, Marek S Baranowski, Ian Briggs, Charles Jacobsen, Zvonimir Rakamarić, and Ganesh Gopalakrishnan. 2018. Rigorous estimation of floating-point round-off errors with symbolic Taylor expansions. ACM Transactions on Programming Languages and Systems (TOPLAS) 41, 1 (2018), 1–39

  33. [33]

    Yican Sun, Hongfei Fu, Krishnendu Chatterjee, and Amir Kafshdar Goharshady. 2023. Automated Tail Bound Analysis for Probabilistic Recurrence Relations. In Computer Aided Verification - 35th International Conference, CAV 2023, Paris, France, July 17-22, 2023, Proceedings, Part III (Lecture Notes in Computer Science, Vol. 13966), Constantin Enea and Akash L...

  34. [34]

    Mohit Tekriwal. 2023. A Mechanized Error Analysis Framework for End-to-End Verification of Numerical Programs. Ph.D. Dissertation

  35. [35]

    Laura Titolo, Marco A Feliú, Mariano Moscato, and César A Muñoz. 2018. An abstract interpretation framework for the round-off error analysis of floating-point programs. In Verification, Model Checking, and Abstract Interpretation: 19th International Conference, VMCAI 2018, Los Angeles, CA, USA, January 7-9, 2018, Proceedings 19. Springer, 516–537

  36. [36]

    Di Wang, Jan Hoffmann, and Thomas W. Reps. 2021. Central moment analysis for cost accumulators in probabilistic programs. In PLDI '21: 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation, Virtual Event, Canada, June 20-25, 2021, Stephen N. Freund and Eran Yahav (Eds.). ACM, 559–573. doi:10.1145/3453483.3454062

  37. [37]

    Jinyi Wang, Yican Sun, Hongfei Fu, Krishnendu Chatterjee, and Amir Kafshdar Goharshady. 2021. Quantitative analysis of assertion violations in probabilistic programs. In PLDI '21: 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation, Virtual Event, Canada, June 20-25, 2021, Stephen N. Freund and Eran Yahav (Eds.). ACM...

  38. [38]

    David Williams. 1991. Probability with Martingales. Cambridge University Press