pith. machine review for the scientific record.

arxiv: 2605.07369 · v1 · submitted 2026-05-08 · 🧮 math.PR

Recognition: no theorem link

Moderate Deviation Principle for a Stochastic Approximation Process

Authors on Pith · no claims yet

Pith reviewed 2026-05-11 02:14 UTC · model grok-4.3

classification 🧮 math.PR
keywords moderate deviation principle · stochastic approximation · martingale differences · exponential inequality · recursive process · large deviations · stochastic recursion

The pith

The recursive stochastic approximation process satisfies a moderate deviation principle.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proves that the stochastic process defined by the recursion X_{n+1} = X_n + (b/(n+1))[g(X_n) + U_{n+1}], with b positive, g a function, and U_n bounded martingale differences, obeys the moderate deviation principle. This principle provides exponential rate estimates for the probability that the process deviates moderately from its expected behavior. Readers interested in iterative algorithms would value this because it offers more detailed probabilistic control than standard convergence results, aiding analysis of rare events in approximation schemes. The authors also derive an exponential inequality for the process and establish the moderate deviation principle for weighted sums of the martingale terms as supporting results.
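The recursion is easy to experiment with directly. A minimal sketch, using illustrative choices the paper does not fix (g(x) = -x, b = 1, and Rademacher noise, which is a bounded martingale difference sequence):

```python
import random

def stochastic_approximation(n_steps, b=1.0, g=lambda x: -x, seed=0):
    """Simulate X_{n+1} = X_n + (b/(n+1)) * (g(X_n) + U_{n+1})
    with Rademacher noise U_n, a bounded martingale difference (|U_n| <= 1).
    g(x) = -x and b = 1 are illustrative choices, not the paper's setup."""
    rng = random.Random(seed)
    x = 0.0
    for n in range(n_steps):
        u = rng.choice([-1.0, 1.0])  # +-1 with equal probability
        x += (b / (n + 1)) * (g(x) + u)
    return x

x_final = stochastic_approximation(10_000)
print(x_final)  # close to 0, the root of g
```

With these particular choices the recursion collapses to the running average of the noise, so X_n drifts to the equilibrium at roughly the n^{-1/2} CLT scale; the moderate deviation regime sits between that Gaussian scale and order-one large deviations.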

Core claim

We establish the moderate deviation principle for the stochastic process (X_n)_{n≥0} defined by the given recursion. Auxiliary results include an exponential inequality for (X_n) and the moderate deviation principle for weighted sums of bounded martingale differences.

What carries the argument

It is the recursion X_{n+1} = X_n + (b/(n+1)) [g(X_n) + U_{n+1}], with bounded adapted martingale differences U_n, that carries the moderate deviation analysis.

If this is right

  • The process X_n has exponentially controlled moderate deviations from its mean path.
  • Weighted sums of bounded martingale differences satisfy their own moderate deviation principle.
  • An exponential inequality provides uniform tail bounds on the process X_n.
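For readers who want the shape of the claim, here is the standard template a moderate deviation principle follows; the paper's exact speed, scaling, and rate function are not quoted in this review, so treat the display as a generic form rather than the theorem itself.

```latex
% Template MDP statement in the standard large-deviations form; the paper's
% own speed and rate function are not reproduced here.
% Choose a scaling (b_n) strictly between the CLT and LLN regimes:
%   b_n/\sqrt{n} \to \infty  and  b_n/n \to 0.
% The MDP for normalized sums S_n/b_n, with speed b_n^2/n, asserts for Borel A:
-\inf_{x \in A^{\circ}} I(x)
  \le \liminf_{n\to\infty} \frac{n}{b_n^{2}} \log \mathbb{P}\!\Big(\tfrac{S_n}{b_n} \in A\Big)
  \le \limsup_{n\to\infty} \frac{n}{b_n^{2}} \log \mathbb{P}\!\Big(\tfrac{S_n}{b_n} \in A\Big)
  \le -\inf_{x \in \bar{A}} I(x),
\qquad I(x) = \frac{x^{2}}{2\sigma^{2}}.
```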

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The result can be used to quantify error probabilities in stochastic optimization and adaptive control algorithms.
  • It opens the possibility of deriving moderate deviation results for similar recursions with relaxed boundedness assumptions.
  • The auxiliary results on martingale sums may combine with other large-deviation techniques for hybrid processes.

Load-bearing premise

The noise sequence consists of bounded martingale differences adapted to the filtration.
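This premise does real work: uniform boundedness is exactly what classical exponential inequalities for martingales require. As a hedged illustration (this is the textbook Azuma-Hoeffding bound, not the paper's inequality), one can check numerically that the classical bound dominates the empirical tail frequency for a Rademacher martingale:

```python
import math
import random

def azuma_tail_check(n=1000, t=0.1, trials=5000, seed=1):
    """Compare the empirical frequency of |S_n/n| >= t for a Rademacher
    martingale against the Azuma-Hoeffding bound 2*exp(-n*t^2/2).
    This is the classical inequality, not the one derived in the paper."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))  # bounded increments
        if abs(s / n) >= t:
            exceed += 1
    empirical = exceed / trials
    bound = 2.0 * math.exp(-n * t * t / 2.0)
    return empirical, bound

emp, bnd = azuma_tail_check()
print(f"empirical={emp:.4f}  bound={bnd:.4f}")
```

The bound is loose by design; the point is that boundedness alone already yields Gaussian-type exponential tails, which is the raw material for both the exponential inequality and the MDP.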

What would settle it

A concrete counterexample recursion with bounded martingale differences whose moderate deviation probabilities fail to obey the predicted rate function would disprove the claim.

read the original abstract

In this paper, we investigate a stochastic approximation procedure $\left(X_n\right)_{n\ge 0}$ taking values in $R$. The process is adapted to a filtration $(F_n)_{n\ge 0}$ and satisfies the recursion $X_{n+1}=X_n+\frac{b}{n+1}\big[g(X_n)+U_{n+1}\big]$, where $b>0$, $g:R \to R$ is a function and $\left(U_n\right)_{n\ge 1}$ is a sequence of bounded martingale differences adapted to the filtration $(F_n)_{n\ge 1}$. We establish the moderate deviation principle for the stochastic process $(X_n)_{n\ge 0}$. As auxiliary results, we also obtain the exponential inequality for $(X_n)_{n\ge 0}$ and the moderate deviation principle for weighted sums of bounded martingale differences.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The manuscript establishes the moderate deviation principle for the stochastic approximation process (X_n) satisfying the recursion X_{n+1}=X_n + (b/(n+1))[g(X_n)+U_{n+1}], where b>0, g is Lipschitz with g(0)=0 and g'(0)<0, and (U_n) are bounded martingale differences adapted to (F_n). Auxiliary results include an exponential inequality for (X_n) and the MDP for weighted sums of bounded martingale differences, derived via the Gärtner-Ellis theorem applied to the martingale sums followed by a linearization around the equilibrium of g combined with a Gronwall estimate to control the perturbation.
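The transfer step described in the summary can be sketched as follows; this is a hedged reconstruction from the report, not the paper's displayed equations, and the remainder notation r(x) is introduced here for illustration.

```latex
% Linearize g at its equilibrium: g(x) = g'(0)\,x + r(x), with r(x) = o(x)
% near 0. The recursion then splits into a linear part driven by the
% martingale differences and a perturbation:
X_{n+1} = X_n + \frac{b}{n+1}\Big[g'(0)\,X_n + U_{n+1}\Big]
        + \frac{b}{n+1}\, r(X_n).
% A Gronwall-type estimate controls the accumulated perturbation
%   \sum_{k \le n} \frac{b}{k+1}\,\lvert r(X_k) \rvert,
% so the MDP proved for the weighted martingale sums transfers to (X_n);
% the stability condition g'(0) < 0 keeps the linear part contracting.
```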

Significance. If the result holds, it extends moderate deviation principles from the setting of weighted martingale sums to recursive stochastic approximation processes, providing a rigorous bridge via explicit linearization and Gronwall control. The paper supplies complete, self-contained proofs under clearly stated assumptions (uniform boundedness of U_n and the stability condition g'(0)<0), which is a strength for a theoretical contribution in probability theory. This could inform error analysis in stochastic optimization and recursive estimation algorithms.

minor comments (3)
  1. [Main results] The statement of the main theorem (presumably Theorem 2.1 or equivalent) would benefit from explicitly recalling the definition of the moderate deviation scaling (e.g., the speed and the space scaling) rather than referring only to the auxiliary MDP for the martingale sums.
  2. [Proof of main result] In the proof of the transfer from the weighted sum to (X_n) (likely §4), the Gronwall-type estimate absorbs the drift term; it would help to include a short remark on the dependence of the constants on the Lipschitz constant of g and on b.
  3. [Introduction] The introduction could briefly contrast the obtained rate function with the large-deviation rate function for the same recursion (when it exists) to highlight the moderate-deviation regime.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the careful reading of our manuscript and the positive evaluation, including the accurate summary of the moderate deviation principle for the stochastic approximation recursion and the auxiliary results. We appreciate the acknowledgment of the significance of extending MDP results from weighted martingale sums to recursive processes under the stated assumptions. The recommendation for minor revision is noted, but no specific major comments were provided for us to address.

Circularity Check

0 steps flagged

No significant circularity; derivation self-contained via standard theorems

full rationale

The paper establishes the MDP for the recursive process (X_n) by first proving an auxiliary MDP for weighted sums of bounded martingale differences using the Gärtner-Ellis theorem (which applies directly due to the uniform bound yielding a finite cumulant generating function), followed by a linearization around the equilibrium of g and a Gronwall estimate to transfer the principle to (X_n). These steps rely on explicitly stated assumptions (Lipschitz g with g(0)=0 and g'(0)<0, |U_n|≤M a.s.) and standard analytic tools, without any reduction to fitted parameters, self-definitional loops, or load-bearing self-citations. The central claim is obtained from the recursion and martingale property as first-principles inputs, with no internal inconsistency or presupposition of the target MDP.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The claim rests on the process definition and standard properties of bounded martingale differences; no free parameters or new entities are introduced in the abstract.

axioms (1)
  • domain assumption The sequence (U_n) consists of bounded martingale differences adapted to the filtration (F_n).
    This is the explicit setup stated in the abstract for the recursion.

pith-pipeline@v0.9.0 · 5444 in / 1218 out tokens · 49621 ms · 2026-05-11T02:14:09.033041+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

20 extracted references · 20 canonical work pages

  1. [1]

    J. R. Blum, Approximation methods which converge with probability one. Ann. Math. Statistics 25 (1954), 382-386

  2. [2]

    A. Dembo and O. Zeitouni, Large deviations techniques and applications. Second edition. Springer-Verlag, New York, 1998

  3. [3]

    Dippon, Higher order representations of the Robbins-Monro process

    J. Dippon, Higher order representations of the Robbins-Monro process. J. Multivariate Anal. 90 (2004), no. 2, 301-326

  4. [4]

    Dippon, Asymptotic expansions of the Robbins-Monro process

    J. Dippon, Asymptotic expansions of the Robbins-Monro process. Math. Methods Statist. 17 (2008), no. 2, 138-145

  5. [5]

    P. Eichelsbacher and M. Löwe, Lindeberg's method for moderate deviations and random summation. J. Theoret. Probab. 32 (2019), no. 2, 872-897

  6. [6]

    V. F. Gapoškin and T. P. Krasulina, The law of the iterated logarithm in stochastic approximation processes. Theory Prob. Appl. 19 (1975), 844-850

  7. [7]

    Goldstein, On the choice of step size in the Robbins-Monro procedure

    L. Goldstein, On the choice of step size in the Robbins-Monro procedure. Statist. Probab. Lett. 6 (1988), no. 5, 299-303

  8. [8]

    M. Habib, C. McDiarmid, J. Ramirez-Alfonsin and B. Reed, Probabilistic methods for algorithmic discrete mathematics. Algorithms and Combinatorics, 16. Springer-Verlag, Berlin, 1998

  9. [9]

    P. Major and P. Révész, A limit theorem for the Robbins-Monro approximation. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 27 (1973), 79-86

  10. [10]

    Y. Miao and M. R. Dong, Asymptotic behavior for the Robbins-Monro process. J. Appl. Probab. 55 (2018), no. 2, 559-570

  11. [11]

    T. L. Lai and H. Robbins, Limit theorems for weighted sums and stochastic approximation processes. Proc. Nat. Acad. Sci. U.S.A. 75 (1978), no. 3, 1068-1070

  12. [12]

    Renlund, Generalized Pólya urns via stochastic approximation

    H. Renlund, Generalized Pólya urns via stochastic approximation. (2010) arXiv:1002.3716v1

  13. [13]

    Renlund, Limit theorem for stochastic approximation algorithm

    H. Renlund, Limit theorem for stochastic approximation algorithm. (2011) arXiv:1102.4741v1

  14. [14]

    H. Robbins and S. Monro, A stochastic approximation method. Ann. Math. Statist. 22 (1951), 400-407

  15. [15]

    Ruppert, A new dynamic stochastic approximation procedure

    D. Ruppert, A new dynamic stochastic approximation procedure. Ann. Statist. 7 (1979), no. 6, 1179-1195

  16. [16]

    Sacks, Asymptotic distribution of stochastic approximation procedures

    J. Sacks, Asymptotic distribution of stochastic approximation procedures. Ann. Math. Statist. 29 (1958), 373-405

  17. [17]

    J. N. Shi, Z. H. Yu and Y. Miao, Large deviation inequalities for the nonlinear unbalanced urn model. (2024) arXiv:2409.07800

  18. [18]

    J. N. Shi, Z. H. Yu and Y. Miao, The Law of the Iterated Logarithm for the Nonlinear Unbalanced Urn Model. J. Theoret. Probab. 39, 39 (2026)

  19. [19]

    J. H. Venter, An extension of the Robbins-Monro procedure. Ann. Math. Statist. 38 (1967), 181-190

  20. [20]

    Woodroofe, Normal approximation and large deviations for the Robbins-Monro process

    M. Woodroofe, Normal approximation and large deviations for the Robbins-Monro process. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 21 (1972), 329-338