The AECM Algorithm for Deterministic Maximum Likelihood Direction Finding in the Presence of Gaussian Mixture Noise
Pith reviewed 2026-05-08 17:25 UTC · model grok-4.3
The pith
The AECM algorithm updates direction-of-arrival estimates one at a time via golden-section search to reach stable convergence faster than the SAGE algorithm in Gaussian mixture noise.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The AECM algorithm uses multiple less-informative versions of the complete data and applies golden-section search to update direction-of-arrival estimates sequentially, one at a time, within each iteration. Theoretical analysis shows that the AECM algorithm has almost the same per-iteration computational complexity as the SAGE algorithm; numerical results nevertheless show that it reaches stable convergence faster and is computationally more efficient.
What carries the argument
Sequential one-by-one maximization of less-informative complete-data likelihoods inside the AECM framework, each performed by golden-section search on a single direction-of-arrival parameter.
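The per-parameter line search is a standard technique. A minimal sketch of golden-section search on a single scalar parameter, minimizing a stand-in quadratic rather than the paper's concentrated likelihood (the function name and tolerance below are illustrative, not taken from the paper):

```python
import math

def golden_section_search(f, lo, hi, tol=1e-6):
    """Minimize a unimodal 1-D function f on [lo, hi] by golden-section search.

    Each iteration shrinks the bracket by the golden ratio and reuses one of
    the two interior evaluations, so only one new function call is needed per
    step and no derivatives are required.
    """
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)         # left interior point
    d = a + inv_phi * (b - a)         # right interior point
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                   # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                         # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

In the DOA setting, `f` would be the negative conditional-maximization objective for one arrival angle with the other parameters held fixed; here it is only a placeholder.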
Load-bearing premise
That sequential one-by-one updates via golden-section search will reliably produce faster convergence without introducing new local-optima issues or requiring problem-specific tuning beyond the SAGE baseline.
What would settle it
A Monte Carlo simulation on Gaussian-mixture noise with unequal signal powers in which AECM requires more total iterations than SAGE to reach the same estimation accuracy or fails to converge.
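As a toy illustration of the mechanism at stake, not of the paper's estimators or noise model, the sketch below compares sequential (Gauss-Seidel-style) one-by-one updates against simultaneous (Jacobi-style) updates on a coupled quadratic, counting sweeps to convergence; all names and constants are illustrative:

```python
import numpy as np

def sweeps_to_converge(update, theta0, tol=1e-8, max_sweeps=1000):
    """Count full update sweeps until the parameter vector stops moving."""
    theta = np.array(theta0, dtype=float)
    for k in range(1, max_sweeps + 1):
        new = update(theta)
        if np.max(np.abs(new - theta)) < tol:
            return k
        theta = new
    return max_sweeps

# Toy coupled objective J(x, y) = x^2 + y^2 + 1.8*x*y (minimum at the origin).
# The exact 1-D minimizers are x* = -0.9*y and y* = -0.9*x.

def sequential(theta):
    """Gauss-Seidel style: update x, then update y using the fresh x."""
    x, y = theta
    x = -0.9 * y
    y = -0.9 * x
    return np.array([x, y])

def simultaneous(theta):
    """Jacobi style: update both coordinates from the old iterate."""
    x, y = theta
    return np.array([-0.9 * y, -0.9 * x])

seq = sweeps_to_converge(sequential, [1.0, 1.0])
sim = sweeps_to_converge(simultaneous, [1.0, 1.0])
print(seq, sim)
```

On this toy problem the sequential scheme contracts by 0.81 per sweep versus 0.9 for the simultaneous scheme, so it needs fewer sweeps; whether the same advantage holds for AECM versus SAGE is exactly what the Monte Carlo test above would probe.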
Original abstract
Gaussian mixture noise can model non-Gaussian noise and also be used when outliers are present. For deterministic maximum likelihood direction finding in Gaussian mixture noise, the Space-Alternating Generalized Expectation-maximization (SAGE) algorithm, an extension of the expectation-maximization algorithm, was applied and designed by Kozick and Sadler twenty odd years ago, which simultaneously updates direction of arrival (DOA) estimates at each iteration and cannot properly converge under unequal signal powers. In this article, the Alternating Expectation-Conditional Maximization (AECM) algorithm, an extension of the SAGE algorithm, is applied and designed, which utilizes multiple less informative versions of the complete data and the golden section search method to update DOA estimates at each iteration sequentially (one by one). Theoretical analysis shows that the AECM algorithm has almost the same computational complexity of each iteration as the SAGE algorithm. However, numerical results show that the AECM algorithm yields faster stable convergence and is computationally more efficient.
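For context, a minimal sketch of the two-component Gaussian mixture noise model the abstract describes, where a small-probability high-variance component plays the role of outliers; the function name, mixture weight, and component scales are illustrative choices, not the paper's simulation settings:

```python
import numpy as np

def gaussian_mixture_noise(n, eps=0.1, sigma_bg=1.0, sigma_out=10.0, rng=None):
    """Draw n circularly symmetric complex samples from a two-component
    Gaussian mixture.

    With probability 1 - eps a sample comes from the low-variance background
    component; with probability eps it comes from a high-variance component
    that models impulsive outliers.
    """
    rng = np.random.default_rng() if rng is None else rng
    outlier = rng.random(n) < eps
    sigma = np.where(outlier, sigma_out, sigma_bg)
    # Real and imaginary parts each get sigma/sqrt(2) so that E|x|^2 = sigma^2.
    return sigma / np.sqrt(2) * (rng.standard_normal(n)
                                 + 1j * rng.standard_normal(n))
```

With these illustrative defaults the average power is (1 - eps)·sigma_bg² + eps·sigma_out² = 10.9, dominated by the rare outlier component, which is what makes plain Gaussian maximum likelihood misbehave here.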
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces the AECM algorithm as an extension of the SAGE algorithm for deterministic maximum-likelihood DOA estimation under Gaussian mixture noise. It employs multiple less-informative complete-data versions and performs sequential one-by-one DOA updates via golden-section search, claiming nearly identical per-iteration complexity to SAGE while achieving faster stable convergence and greater efficiency, especially when signal powers are unequal.
Significance. If the central claims hold, AECM supplies a practical algorithmic refinement for array signal processing in non-Gaussian noise, with explicit strengths in the supplied pseudocode, complexity derivations, and simulation details that permit direct inspection of surrogate monotonicity and runtime accounting. This addresses a documented limitation of SAGE without increasing per-iteration cost.
minor comments (2)
- [Abstract] The phrase 'twenty odd years ago' is informal; replace it with a precise citation to the Kozick and Sadler reference.
- [Numerical results] Confirm that the reported runtimes include the full cost of the golden-section line searches and that the number of Monte Carlo trials and exact SNR/power configurations are tabulated for reproducibility.
Simulated Author's Rebuttal
We thank the referee for the positive summary and significance assessment of our work, as well as the recommendation for minor revision. The referee's description accurately captures the AECM algorithm's extension of SAGE, its use of multiple less-informative complete-data versions, sequential one-by-one DOA updates via golden-section search, and the claimed complexity and convergence benefits.
Circularity Check
No significant circularity detected
full rationale
The paper derives the AECM algorithm as a direct extension of the externally cited SAGE method (Kozick and Sadler), using standard alternating conditional maximization over multiple complete-data versions, with explicit pseudocode, monotonicity arguments, and per-iteration complexity bounds stated in closed form relative to SAGE. No step reduces a claimed prediction or uniqueness result to a fitted parameter or a self-citation chain; the central claims rest on the provided derivations and on simulations against the independent baseline rather than holding by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: standard assumptions of deterministic maximum-likelihood estimation for direction-of-arrival problems.
Reference graph
Works this paper leans on
- [1] L. C. Godara. "Application of antenna arrays to mobile communications. Part II: Beam-forming and direction-of-arrival considerations". In: Proc. IEEE 85.8 (Aug. 1997), pp. 1195–1245.
- [2] K. S. Miller. "Complex Gaussian Processes". In: SIAM Rev. 11.4 (Oct. 1969), pp. 1195–1245.
- [3] H. Krim and M. Viberg. "Two decades of array signal processing research: The parametric approach". In: IEEE Signal Process. Mag. 13.4 (July 1996), pp. 67–94.
- [4] A. Swami and B. M. Sadler. "On some detection and estimation problems in heavy-tailed noise". In: Signal Process. 82.12 (Sept. 2002), pp. 1829–1846.
- [5] P. Tsakalides and C. L. Nikias. "The robust covariation-based MUSIC (ROC-MUSIC) algorithm for bearing estimation in impulsive noise environments". In: IEEE Trans. Signal Process. 44.7 (July 1996), pp. 1623–1633.
- [6] T. H. Liu and J. M. Mendel. "A subspace-based direction finding algorithm using fractional lower order statistics". In: IEEE Trans. Signal Process. 49.8 (Aug. 2001), pp. 1605–1613.
- [7] S. Visuri, H. Oja, and V. Koivunen. "Subspace-based direction-of-arrival estimation using nonparametric statistics". In: IEEE Trans. Signal Process. 49.9 (Sept. 2001), pp. 2060–2073.
- [8] W. Zeng, H. C. So, and L. Huang. "ℓp-MUSIC: Robust direction-of-arrival estimator for impulsive noise environments". In: IEEE Trans. Signal Process. 61.17 (Sept. 2013), pp. 4296–4308.
- [9] C. F. Mecklenbräuker et al. "Robust and sparse M-estimation of DOA". In: Signal Process. 220 (July 2024), p. 109461.
- [10] P. Rajpurohit, P. Babu, and P. Stoica. "Robust direction-of-arrival estimation in the presence of outliers". In: IEEE Trans. Aerosp. Electron. Syst. 61.4 (Apr. 2025), pp. 10921–10927.
- [11] G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley-Interscience, 2000.
- [12] R. A. Redner and H. F. Walker. "Mixture densities, maximum likelihood and the EM algorithm". In: SIAM Rev. 26.2 (Apr. 1984), pp. 195–239.
- [13] A. P. Dempster, N. M. Laird, and D. B. Rubin. "Maximum likelihood from incomplete data via the EM algorithm". In: J. R. Stat. Soc., Ser. B 39.1 (Sept. 1977), pp. 1–38.
- [14] G. McLachlan and T. Krishnan. The EM Algorithm and Extensions. Wiley-Interscience, 2008.
- [15] J. A. Fessler and A. O. Hero. "Space-alternating generalized expectation-maximization algorithm". In: IEEE Trans. Signal Process. 42.10 (Oct. 1994), pp. 2664–2677.
- [16] R. J. Kozick and B. M. Sadler. "Maximum-likelihood array processing in non-Gaussian noise with Gaussian mixtures". In: IEEE Trans. Signal Process. 48.12 (Dec. 2000), pp. 3520–3535.
- [17] X. Meng and D. van Dyk. "The EM algorithm — an old folk-song sung to a fast new tune". In: J. R. Stat. Soc., Ser. B 59.3 (Dec. 1997), pp. 511–567.
- [18] I. Ziskind and M. Wax. "Maximum likelihood localization of multiple sources by alternating projection". In: IEEE Trans. Acoust., Speech, Signal Process. 36.10 (Oct. 1988), pp. 1553–1560.
- [19] S. M. Kay. Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, 1993.
- [20] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
- [21] C. F. J. Wu. "On the convergence properties of the EM algorithm". In: Ann. Statist. 11.1 (Mar. 1983), pp. 95–103.