Recognition: no theorem link
Moderate Deviation Principle for a Stochastic Approximation Process
Pith reviewed 2026-05-11 02:14 UTC · model grok-4.3
The pith
The recursive stochastic approximation process satisfies a moderate deviation principle.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We establish the moderate deviation principle for the stochastic process (X_n)_{n≥0} defined by the given recursion. Auxiliary results include an exponential inequality for (X_n) and the moderate deviation principle for weighted sums of bounded martingale differences.
What carries the argument
The recursion X_{n+1} = X_n + (b/(n+1)) [g(X_n) + U_{n+1}], driven by bounded adapted martingale differences U_n, is what carries the moderate deviation analysis.
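A minimal simulation of this recursion, with illustrative choices not taken from the paper (g(x) = -x as a Lipschitz drift with g(0) = 0 and g'(0) < 0, and i.i.d. uniform noise as a stand-in for bounded martingale differences), shows the averaging behavior the analysis rests on:

```python
import random

def simulate(b=1.0, n_steps=10_000, x0=2.0, seed=0):
    """Run X_{n+1} = X_n + (b/(n+1)) * [g(X_n) + U_{n+1}].

    Illustrative assumptions (not from the paper):
      g(x) = -x            -- Lipschitz, g(0) = 0, g'(0) = -1 < 0
      U_n ~ Uniform[-1,1]  -- i.i.d., so bounded mean-zero
                              martingale differences
    """
    rng = random.Random(seed)
    x = x0
    for n in range(n_steps):
        u = rng.uniform(-1.0, 1.0)        # bounded, mean-zero noise
        x = x + (b / (n + 1)) * (-x + u)  # stochastic approximation step
    return x

# Iterates settle near the root of g, i.e. near 0.
print(abs(simulate()))
```

With g(x) = -x and b = 1 the recursion telescopes to X_N = (1/N) Σ U_i, so the iterate concentrates near 0 at rate 1/√N; the moderate deviation regime quantifies fluctuations between this CLT scale and the large deviation scale.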
If this is right
- The process X_n has exponentially controlled moderate deviations from its mean path.
- Weighted sums of bounded martingale differences satisfy their own moderate deviation principle.
- An exponential inequality provides uniform tail bounds on the process X_n.
Where Pith is reading between the lines
- The result can be used to quantify error probabilities in stochastic optimization and adaptive control algorithms.
- It opens the possibility of deriving moderate deviation results for similar recursions with relaxed boundedness assumptions.
- The auxiliary results on martingale sums may combine with other large-deviation techniques for hybrid processes.
Load-bearing premise
The noise sequence consists of bounded martingale differences adapted to the filtration.
What would settle it
A concrete counterexample recursion with bounded martingales where the moderate deviation probabilities fail to obey the predicted rate function would disprove the claim.
read the original abstract
In this paper, we investigate a stochastic approximation procedure $\left(X_n\right)_{n\ge 0}$ taking values in $R$. The process is adapted to a filtration $(F_n)_{n\ge 0}$ and satisfies the recursion $X_{n+1}=X_n+\frac{b}{n+1}\big[g(X_n)+U_{n+1}\big]$, where $b>0$, $g:R \to R$ is a function and $\left(U_n\right)_{n\ge 1}$ is a sequence of bounded martingale differences adapted to the filtration $(F_n)_{n\ge 1}$. We establish the moderate deviation principle for the stochastic process $(X_n)_{n\ge 0}$. As auxiliary results, we also obtain the exponential inequality for $(X_n)_{n\ge 0}$ and the moderate deviation principle for weighted sums of bounded martingale differences.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript establishes the moderate deviation principle for the stochastic approximation process (X_n) satisfying the recursion X_{n+1}=X_n + (b/(n+1))[g(X_n)+U_{n+1}], where b>0, g is Lipschitz with g(0)=0 and g'(0)<0, and (U_n) are bounded martingale differences adapted to (F_n). Auxiliary results include an exponential inequality for (X_n) and the MDP for weighted sums of bounded martingale differences, derived via the Gärtner-Ellis theorem applied to the martingale sums followed by a linearization around the equilibrium of g combined with a Gronwall estimate to control the perturbation.
Significance. If the result holds, it extends moderate deviation principles from the setting of weighted martingale sums to recursive stochastic approximation processes, providing a rigorous bridge via explicit linearization and Gronwall control. The paper supplies complete, self-contained proofs under clearly stated assumptions (uniform boundedness of U_n and the stability condition g'(0)<0), which is a strength for a theoretical contribution in probability theory. This could inform error analysis in stochastic optimization and recursive estimation algorithms.
minor comments (3)
- [Main results] The statement of the main theorem (presumably Theorem 2.1 or equivalent) would benefit from explicitly recalling the definition of the moderate deviation scaling (e.g., the speed and the space scaling) rather than referring only to the auxiliary MDP for the martingale sums.
- [Proof of main result] In the proof of the transfer from the weighted sum to (X_n) (likely §4), the Gronwall-type estimate absorbs the drift term; it would help to include a short remark on the dependence of the constants on the Lipschitz constant of g and on b.
- [Introduction] The introduction could briefly contrast the obtained rate function with the large-deviation rate function for the same recursion (when it exists) to highlight the moderate-deviation regime.
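The first minor comment asks for the moderate deviation scaling to be recalled explicitly. A standard formulation, stated here as a plausible reading since the abstract does not spell it out (the normalization, equilibrium point x*, and variance constant are assumptions, not quotations from the paper):

For a speed sequence $(a_n)$ with $a_n \to \infty$ and $a_n/\sqrt{n} \to 0$, the MDP asserts that for every Borel set $A \subseteq \mathbb{R}$,
$$
-\inf_{x \in A^{\circ}} I(x)
\;\le\; \liminf_{n\to\infty} \frac{1}{a_n^2} \log \mathbb{P}\!\left(\frac{\sqrt{n}}{a_n}\,(X_n - x^{*}) \in A\right)
\;\le\; \limsup_{n\to\infty} \frac{1}{a_n^2} \log \mathbb{P}\!\left(\frac{\sqrt{n}}{a_n}\,(X_n - x^{*}) \in A\right)
\;\le\; -\inf_{x \in \bar{A}} I(x),
$$
typically with a quadratic rate function $I(x) = x^2/(2\sigma_b^2)$ in this bounded-noise, linearizable setting.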
Simulated Author's Rebuttal
We thank the referee for the careful reading of our manuscript and the positive evaluation, including the accurate summary of the moderate deviation principle for the stochastic approximation recursion and the auxiliary results. We appreciate the acknowledgment of the significance of extending MDP results from weighted martingale sums to recursive processes under the stated assumptions. The recommendation for minor revision is noted, but no specific major comments were provided for us to address.
Circularity Check
No significant circularity; derivation self-contained via standard theorems
full rationale
The paper establishes the MDP for the recursive process (X_n) by first proving an auxiliary MDP for weighted sums of bounded martingale differences using the Gärtner-Ellis theorem (which applies directly due to the uniform bound yielding a finite cumulant generating function), followed by a linearization around the equilibrium of g and a Gronwall estimate to transfer the principle to (X_n). These steps rely on explicitly stated assumptions (Lipschitz g with g(0)=0 and g'(0)<0, |U_n|≤M a.s.) and standard analytic tools, without any reduction to fitted parameters, self-definitional loops, or load-bearing self-citations. The central claim is obtained from the recursion and martingale property as first-principles inputs, with no internal inconsistency or presupposition of the target MDP.
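The exponential control on weighted martingale sums that the argument relies on can be illustrated with an Azuma-Hoeffding bound (a standard tool consistent with, but not quoted from, the paper). The sketch below uses assumed weights c_i = 1/i, matching the 1/(n+1) step sizes in spirit, and uniform noise as a bounded-martingale-difference stand-in:

```python
import math
import random

def tail_vs_azuma(n=200, t=2.0, trials=20_000, seed=1):
    """Compare the empirical tail P(|sum_i c_i U_i| >= t) against the
    Azuma-Hoeffding bound 2*exp(-t^2 / (2 * sum c_i^2)).

    Illustrative assumptions (not from the paper):
      c_i = 1/i            -- weights mimicking the 1/(n+1) steps
      U_i ~ Uniform[-1,1]  -- i.i.d. bounded martingale differences
    """
    rng = random.Random(seed)
    c = [1.0 / i for i in range(1, n + 1)]
    hits = 0
    for _ in range(trials):
        s = sum(ci * rng.uniform(-1.0, 1.0) for ci in c)
        if abs(s) >= t:
            hits += 1
    empirical = hits / trials
    bound = 2.0 * math.exp(-t * t / (2.0 * sum(ci * ci for ci in c)))
    return empirical, bound

emp, bnd = tail_vs_azuma()
print(emp, bnd)
```

The empirical frequency sits well below the exponential bound, which is the kind of uniform tail control the auxiliary exponential inequality provides before the Gärtner-Ellis step sharpens it into a full MDP.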
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption The sequence (U_n) consists of bounded martingale differences adapted to the filtration (F_n).
Reference graph
Works this paper leans on
- [1] J. R. Blum, Approximation methods which converge with probability one. Ann. Math. Statist. 25 (1954), 382-386.
- [2] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications. Second edition. Springer-Verlag, New York, 1998.
- [3] J. Dippon, Higher order representations of the Robbins-Monro process. J. Multivariate Anal. 90 (2004), no. 2, 301-326.
- [4] J. Dippon, Asymptotic expansions of the Robbins-Monro process. Math. Methods Statist. 17 (2008), no. 2, 138-145.
- [5] P. Eichelsbacher and M. Löwe, Lindeberg's method for moderate deviations and random summation. J. Theoret. Probab. 32 (2019), no. 2, 872-897.
- [6] V. F. Gaposkin and T. P. Krasulina, The law of the iterated logarithm in stochastic approximation processes. Theory Probab. Appl. 19 (1975), 844-850.
- [7] L. Goldstein, On the choice of step size in the Robbins-Monro procedure. Statist. Probab. Lett. 6 (1988), no. 5, 299-303.
- [8]
- [9] P. Major and P. Révész, A limit theorem for the Robbins-Monro approximation. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 27 (1973), 79-86.
- [10] Y. Miao and M. R. Dong, Asymptotic behavior for the Robbins-Monro process. J. Appl. Probab. 55 (2018), no. 2, 559-570.
- [11] T. L. Lai and H. Robbins, Limit theorems for weighted sums and stochastic approximation processes. Proc. Nat. Acad. Sci. U.S.A. 75 (1978), no. 3, 1068-1070.
- [12] H. Renlund, Generalized Pólya urns via stochastic approximation. (2010) arXiv:1002.3716v1.
- [13] H. Renlund, Limit theorem for stochastic approximation algorithm. (2011) arXiv:1102.4741v1.
- [14] H. Robbins and S. Monro, A stochastic approximation method. Ann. Math. Statist. 22 (1951), 400-407.
- [15] D. Ruppert, A new dynamic stochastic approximation procedure. Ann. Statist. 7 (1979), no. 6, 1179-1195.
- [16] J. Sacks, Asymptotic distribution of stochastic approximation procedures. Ann. Math. Statist. 29 (1958), 373-405.
- [17]
- [18] J. N. Shi, Z. H. Yu and Y. Miao, The law of the iterated logarithm for the nonlinear unbalanced urn model. J. Theoret. Probab. 39, 39 (2026).
- [19] J. H. Venter, An extension of the Robbins-Monro procedure. Ann. Math. Statist. 38 (1967), 181-190.
- [20] M. Woodroofe, Normal approximation and large deviations for the Robbins-Monro process. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 21 (1972), 329-338.
discussion (0)