pith. machine review for the scientific record. sign in

arxiv: 2605.02309 · v1 · submitted 2026-05-04 · 📡 eess.SP · cs.IT· math.IT· stat.AP

Recognition: unknown

The AECM Algorithm for Deterministic Maximum Likelihood Direction Finding in the Presence of Gaussian Mixture Noise

Authors on Pith no claims yet

Pith reviewed 2026-05-08 17:25 UTC · model grok-4.3

classification 📡 eess.SP cs.ITmath.ITstat.AP
keywords AECM algorithmdirection of arrival estimationGaussian mixture noisemaximum likelihoodSAGE algorithmgolden section searchconvergence rate
0
0 comments X

The pith

The AECM algorithm updates direction-of-arrival estimates one at a time via golden-section search to reach stable convergence faster than the SAGE algorithm in Gaussian mixture noise.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper designs the Alternating Expectation-Conditional Maximization algorithm for deterministic maximum-likelihood direction-of-arrival estimation when noise follows a Gaussian mixture model. It replaces the simultaneous updates of the earlier SAGE method with sequential, one-by-one updates performed on multiple less-informative complete-data versions, each maximized by golden-section search. Per-iteration cost remains nearly the same as SAGE, yet numerical experiments indicate fewer iterations are needed to reach stable points and lower overall computation. Readers would care because direction finding under realistic non-Gaussian noise or outliers is required in radar, sonar, and communications systems.

Core claim

The AECM algorithm utilizes multiple less informative versions of the complete data and applies the golden section search method to update direction-of-arrival estimates sequentially, one by one, at each iteration. Theoretical analysis shows that the AECM algorithm has almost the same computational complexity of each iteration as the SAGE algorithm. However, numerical results show that the AECM algorithm yields faster stable convergence and is computationally more efficient.

What carries the argument

Sequential one-by-one maximization of less-informative complete-data likelihoods inside the AECM framework, each performed by golden-section search on a single direction-of-arrival parameter.

Load-bearing premise

That sequential one-by-one updates via golden-section search will reliably produce faster convergence without introducing new local-optima issues or requiring problem-specific tuning beyond the SAGE baseline.

What would settle it

A Monte Carlo simulation on Gaussian-mixture noise with unequal signal powers in which AECM requires more total iterations than SAGE to reach the same estimation accuracy or fails to converge.

Figures

Figures reproduced from arXiv: 2605.02309 by Bin Lyu, Mingyan Gong.

Figure 1
Figure 1. Figure 1: Convergence comparison of both algorithms not using Algorithm 2. 0 5 10 15 20 25 30 35 40 -2800 -2600 -2400 -2200 0 5 10 15 20 25 30 35 40 55 60 0 5 10 15 20 25 30 35 40 100 101 view at source ↗
Figure 2
Figure 2. Figure 2: Convergence comparison of both algorithms using Algorithm 2. On the contrary, view at source ↗
read the original abstract

Gaussian mixture noise can model non-Gaussian noise and also be used when outliers are present. For deterministic maximum likelihood direction finding in Gaussian mixture noise, the Space-Alternating Generalized Expectation-maximization (SAGE) algorithm, an extension of the expectation-maximization algorithm, was applied and designed by Kozick and Sadler twenty odd years ago, which simultaneously updates direction of arrival (DOA) estimates at each iteration and cannot properly converge under unequal signal powers. In this article, the Alternating Expectation-Conditional Maximization (AECM) algorithm, an extension of the SAGE algorithm, is applied and designed, which utilizes multiple less informative versions of the complete data and the golden section search method to update DOA estimates at each iteration sequentially (one by one). Theoretical analysis shows that the AECM algorithm has almost the same computational complexity of each iteration as the SAGE algorithm. However, numerical results show that the AECM algorithm yields faster stable convergence and is computationally more efficient.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript introduces the AECM algorithm as an extension of the SAGE algorithm for deterministic maximum-likelihood DOA estimation under Gaussian mixture noise. It employs multiple less-informative complete-data versions and performs sequential one-by-one DOA updates via golden-section search, claiming nearly identical per-iteration complexity to SAGE while achieving faster stable convergence and greater efficiency, especially when signal powers are unequal.

Significance. If the central claims hold, AECM supplies a practical algorithmic refinement for array signal processing in non-Gaussian noise, with explicit strengths in the supplied pseudocode, complexity derivations, and simulation details that permit direct inspection of surrogate monotonicity and runtime accounting. This addresses a documented limitation of SAGE without increasing per-iteration cost.

minor comments (2)
  1. [Abstract] Abstract: the phrase 'twenty odd years ago' is informal; replace it with a precise citation to the Kozick and Sadler reference.
  2. [Numerical results] Numerical results: confirm that the reported runtimes include the full cost of the golden-section line searches and that the number of Monte Carlo trials and exact SNR/power configurations are tabulated for reproducibility.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive summary and significance assessment of our work, as well as the recommendation for minor revision. The referee's description accurately captures the AECM algorithm's extension of SAGE, its use of multiple less-informative complete-data versions, sequential one-by-one DOA updates via golden-section search, and the claimed complexity and convergence benefits.

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper derives the AECM algorithm as a direct extension of the externally cited SAGE method (Kozick and Sadler) using standard alternating conditional maximization on multiple complete-data versions, with explicit pseudocode, monotonicity arguments, and per-iteration complexity bounds stated in closed form relative to SAGE. No step reduces a claimed prediction or uniqueness result to a fitted parameter or self-citation chain; the central claims rest on the provided derivations and simulations against the independent baseline rather than by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The work rests on standard deterministic ML assumptions for DOA estimation and the Gaussian mixture noise model; no new free parameters, axioms, or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption Standard assumptions of deterministic maximum likelihood estimation for direction-of-arrival problems
    Implicit in the problem formulation and comparison to SAGE.

pith-pipeline@v0.9.0 · 5477 in / 1042 out tokens · 52897 ms · 2026-05-08T17:25:08.692617+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

21 extracted references

  1. [1]

    Application of antenna ar- rays to mobile communications. Part II: Beam-forming and direction-of-arrival consid- erations

    L. C. Godara. “Application of antenna ar- rays to mobile communications. Part II: Beam-forming and direction-of-arrival consid- erations”. In: Proc. IEEE 85.8 (Aug. 1997), pp. 1195–1245

  2. [2]

    Complex Gaussian Processes

    K. S. Miller. “Complex Gaussian Processes”. In: SIAM Rev. 11.4 (Oct. 1969), pp. 1195–1245

  3. [3]

    Two decades of ar- ray signal processing research: The parametric approach

    H. Krim and M. Viberg. “Two decades of ar- ray signal processing research: The parametric approach”. In: IEEE Signal Process. Mag. 13.4 (July 1996), pp. 67–94

  4. [4]

    On some detec- tion and estimation problems in heavy-tailed noise

    A. Swami and B. M. Sadler. “On some detec- tion and estimation problems in heavy-tailed noise”. In: Signal Process. 82.12 (Sept. 2002), pp. 1829–1846

  5. [5]

    The robust covariation-based MUSIC (ROC-MUSIC) al- gorithm for bearing estimation in impulsive noise environments

    P . Tsakalides and C. L. Nikias. “The robust covariation-based MUSIC (ROC-MUSIC) al- gorithm for bearing estimation in impulsive noise environments”. In: IEEE T rans. Signal Process. 44.7 (July 1996), pp. 1623–1633

  6. [6]

    A subspace-based direction finding algorithm using fractional lower order statistics

    T. H. Liu and J. M. Mendel. “A subspace-based direction finding algorithm using fractional lower order statistics”. In: IEEE T rans. Signal Process. 49.8 (Aug. 2001), pp. 1605–1613

  7. [7]

    Subspace- based direction-of-arrival estimation using nonparametric statistics

    S. Visuri, H. Oja, and V . Koivunen. “Subspace- based direction-of-arrival estimation using nonparametric statistics”. In: IEEE T rans. Sig- nal Process. 49.9 (Sept. 2001), pp. 2060–2073

  8. [8]

    ℓp-MUSIC: Robust direction-of-arrival estimator for im- pulsive noise environments

    W . Zeng, H. C. So, and L. Huang. “ ℓp-MUSIC: Robust direction-of-arrival estimator for im- pulsive noise environments”. In: IEEE T rans. Signal Process. 61.17 (Sept. 2013), pp. 4296– 4308

  9. [9]

    Robust and sparse M-estimation of DOA

    C. F . Mecklenbräuker and et al. “Robust and sparse M-estimation of DOA”. In: Signal Pro- cess. 220 (July 2024), p. 109461

  10. [10]

    Robust direction-of-arrival estimation in the presence of outliers

    P . Rajpurohit, P . Babu, and P . Stocia. “Robust direction-of-arrival estimation in the presence of outliers”. In: IEEE T rans. Aerosp. Electron. Syst. 61.4 (Apr. 2025), pp. 10921–10927. 8

  11. [11]

    G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley-Interscience, 2000

  12. [12]

    Mixture den- sities, maximum likelihood and the EM algo- rithm

    R. A. Redner and H. F . W alker. “Mixture den- sities, maximum likelihood and the EM algo- rithm”. In: SIAM Rev. 26.2 (Apr. 1984), pp. 195– 239

  13. [13]

    Maximum likelihood from incomplete data via the EM algorithm

    A. P . Dempster, N. M. Laird, and D. B. Rdin. “Maximum likelihood from incomplete data via the EM algorithm”. In: J. R. Stat. Soc., Ser. B 39.1 (Sept. 1977), pp. 1–38

  14. [14]

    McLachlan and T

    G. McLachlan and T. Krishnan. The EM Algo- rithm and Extensions . Wiley-Interscience, 2008

  15. [15]

    Space- alternating generalized expectation- maximization algorithm

    J. A. Fessler and A. O. Hero. “Space- alternating generalized expectation- maximization algorithm”. In: IEEE T rans. Signal Process. 42.10 (Oct. 1994), pp. 2664– 2677

  16. [16]

    Maximum- likelihood array processing in non-Gaussian noise with gaussian mixtures

    R. J. Kozick and B. M. Sadler. “Maximum- likelihood array processing in non-Gaussian noise with gaussian mixtures”. In: IEEE T rans. Signal Process. 48.12 (Dec. 2000), pp. 3520–3535

  17. [17]

    The EM algorithm— an old folk-song sung to a fast new tune

    X. Meng and D. V . Dyk. “The EM algorithm— an old folk-song sung to a fast new tune”. In: J. R. Stat. Soc., Ser. B 59.3 (Dec. 1997), pp. 511– 567

  18. [18]

    Maximum likelihood localization of multiple sources by alternating projection

    I. Ziskind and M. W ax. “Maximum likelihood localization of multiple sources by alternating projection”. In: IEEE T rans. Acoust., Speech, Sig- nal Process. 36.10 (Oct. 1988), pp. 1553–1560

  19. [19]

    S. M. Kay. Fundamentals of Statistical Signal Pro- cessing: Estimation Theory . Prentice Hall, 1993

  20. [20]

    and V andenberghe L

    Boyd S. and V andenberghe L. Convex Optimiza- tion. Cambridge University Press, 2004

  21. [21]

    On the convergence properties of the EM algorithm

    C. F . J. Wu. “On the convergence properties of the EM algorithm”. In: Ann. Statist. 11.1 (Mar. 1983), pp. 95–103. 9