pith. machine review for the scientific record.

arxiv: 2604.07372 · v1 · submitted 2026-04-07 · 📊 stat.ML · cs.IT · cs.LG · math.IT · math.OC

Recognition: 2 Lean theorem links

NS-RGS: Newton-Schulz based Riemannian gradient method for orthogonal group synchronization

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:44 UTC · model grok-4.3

classification 📊 stat.ML · cs.IT · cs.LG · math.IT · math.OC
keywords group synchronization · orthogonal group · Newton-Schulz iteration · Riemannian gradient · linear convergence · leave-one-out analysis · spectral initialization

The pith

A Newton-Schulz based Riemannian gradient method achieves linear convergence for orthogonal group synchronization up to near-optimal noise levels while replacing expensive decompositions with matrix multiplications.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces NS-RGS for recovering orthogonal group elements from noisy pairwise measurements. It replaces the standard SVD or QR projection step with a Newton-Schulz iteration that uses only matrix multiplications. This change lowers per-iteration cost and suits GPU and TPU hardware. With spectral initialization, the method converges linearly to the target solution even when statistical noise is present at near-optimal levels. The result matters for large-scale tasks such as sensor alignment and structure-from-motion where prior projection-based solvers become slow.
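
For readers who want to see the substitution concretely, here is a minimal NumPy sketch of the classical Newton-Schulz iteration for the orthogonal polar factor, checked against an SVD-based projection. The step count, input scaling, and test matrix are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def newton_schulz(X, num_steps=6):
    """Approximate the orthogonal polar factor of X using only matrix
    multiplications; converges when the singular values of X lie in (0, sqrt(3))."""
    Y = X.copy()
    for _ in range(num_steps):
        Y = 0.5 * Y @ (3.0 * np.eye(Y.shape[1]) - Y.T @ Y)
    return Y

# Sanity check against the exact projection obtained from an SVD.
rng = np.random.default_rng(0)
B = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # near-orthogonal input
U, _, Vt = np.linalg.svd(B)
print(np.linalg.norm(newton_schulz(B) - U @ Vt))    # small, e.g. ~1e-12
```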

Core claim

The central claim is that the Newton-Schulz iteration can be substituted for the exact orthogonal projection inside a Riemannian gradient scheme. A refined leave-one-out argument handles the statistical dependence among pairwise observations and proves that spectral initialization yields linear convergence to the ground-truth orthogonal matrices up to near-optimal noise levels. Experiments on synthetic instances and real global alignment problems show accuracy comparable to the generalized power method together with an observed factor-of-two reduction in runtime.

What carries the argument

Newton-Schulz iteration used to approximate the orthogonal projection at each Riemannian gradient step, replacing full SVD or QR while preserving the descent property on the orthogonal group.
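
A hedged sketch of how one such step could look, with the n unknown d x d blocks stacked into a tall matrix X of size nd x d: a gradient step on the quadratic objective ⟨X, AX⟩ followed by block-wise Newton-Schulz orthogonalization where an exact SVD/QR projection would normally sit. The step size, the Frobenius rescaling, and the update order are assumptions for illustration, not the authors' precise NS-RGS update.

```python
import numpy as np

def newton_schulz(Y, num_steps=6):
    # Matrix-multiplication-only approximation of the orthogonal polar factor.
    for _ in range(num_steps):
        Y = 0.5 * Y @ (3.0 * np.eye(Y.shape[1]) - Y.T @ Y)
    return Y

def ns_gradient_step(X, A, n, d, step=1.0):
    """One illustrative iteration: ascend 0.5 * <X, A X>, then pull each
    d x d block back toward the orthogonal group with Newton-Schulz
    instead of an exact SVD or QR projection."""
    Y = X + step * (A @ X) / n            # A @ X is the Euclidean gradient
    blocks = []
    for i in range(n):
        B = Y[i * d:(i + 1) * d, :]
        B = B / np.linalg.norm(B)         # Frobenius rescaling keeps singular values in (0, 1]
        blocks.append(newton_schulz(B))
    return np.vstack(blocks)
```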

If this is right

  • Per-iteration cost drops because each step uses only matrix multiplications instead of decompositions (a rough timing sketch follows this list).
  • The algorithm achieves nearly a 2x speedup on both synthetic data and real-world global alignment tasks while matching the accuracy of the generalized power method.
  • Linear convergence continues up to near-optimal statistical noise levels, supplying the same theoretical guarantee as slower projection methods.
  • The matrix-multiplication-only design aligns directly with GPU and TPU architectures for improved scaling on large instances.
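
The per-iteration-cost bullet can be probed with a rough micro-benchmark: project one near-orthogonal block with an exact SVD and with a few Newton-Schulz multiplications, then compare wall-clock time and agreement. The matrix size, step count, and any speedup observed here are assumptions about a plausible setup; they say nothing about the paper's reported 2x figure, which concerns the full algorithm on real instances.

```python
import numpy as np
from time import perf_counter

rng = np.random.default_rng(1)
d = 512
B = np.eye(d) + 0.05 * rng.standard_normal((d, d))   # near-orthogonal block

t0 = perf_counter()
U, _, Vt = np.linalg.svd(B)                           # exact polar projection
t_svd = perf_counter() - t0

t0 = perf_counter()
Y = B.copy()
for _ in range(5):                                    # Newton-Schulz: matmuls only
    Y = 0.5 * Y @ (3.0 * np.eye(d) - Y.T @ Y)
t_ns = perf_counter() - t0

print(f"svd {t_svd:.4f}s   newton-schulz {t_ns:.4f}s   "
      f"difference {np.linalg.norm(Y - U @ Vt):.2e}")
```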

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same Newton-Schulz substitution for projections could be tried in other manifold optimization problems that currently rely on repeated decompositions.
  • Faster synchronization routines may improve end-to-end performance in computer-vision pipelines that depend on global alignment.
  • The refined leave-one-out technique for dependent observations could be adapted to prove rates for other nonconvex statistical estimation tasks.

Load-bearing premise

The refined leave-one-out analysis correctly overcomes statistical dependencies among the pairwise measurements to establish the stated convergence rate.

What would settle it

A controlled run of NS-RGS on synthetic data with noise variance set just above the claimed near-optimal threshold that fails to exhibit linear convergence to the target solution.
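
A sketch of how such a control run could be set up: synthetic observations A = ZZ^T + sigma*W with orthogonal ground-truth blocks, followed by a spectral initialization from the top-d eigenvectors of A. The noise model, the SVD-based block rounding, and the error metric are plausible assumptions rather than the paper's exact protocol; NS-RGS iterations would then be run from X0 for each sigma while tracking the per-iteration error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3

# Ground truth: n random orthogonal d x d blocks stacked into Z (nd x d).
Z = np.vstack([np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(n)])

def spectral_init(A, n, d):
    # Top-d eigenvectors of the observation matrix; each d x d block is
    # then rounded to the nearest orthogonal matrix (via SVD here for clarity).
    _, vecs = np.linalg.eigh(A)
    V = vecs[:, -d:] * np.sqrt(n)
    blocks = []
    for i in range(n):
        U, _, Wt = np.linalg.svd(V[i * d:(i + 1) * d, :])
        blocks.append(U @ Wt)
    return np.vstack(blocks)

for sigma in [0.5, 1.0, 2.0]:
    W = rng.standard_normal((n * d, n * d))
    A = Z @ Z.T + sigma * (W + W.T) / np.sqrt(2)      # symmetric Gaussian noise
    X0 = spectral_init(A, n, d)
    U, _, Wt = np.linalg.svd(Z.T @ X0)                # best global O(d) alignment
    err = np.linalg.norm(X0 - Z @ (U @ Wt)) / np.sqrt(n)
    print(f"sigma = {sigma:.1f}   spectral-init error per block ~ {err:.3f}")
```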

Figures

Figures reproduced from arXiv: 2604.07372 by Deren Han, Haiyang Peng, Meng Huang, Xin Chen.

Figure 1: Histogram of the unsquared residuals of RTR, GPM, and NS-RGS for the Lucy dataset. The reconstruction results are illustrated in Figure 2. [PITH_FULL_IMAGE:figures/full_fig_p011_1.png] view at source ↗
Figure 2: Views of the reconstructions from the Lucy dataset. The left one is the ground truth. The last three are the reconstruction results of the RTR, GPM, and NS-RGS methods, respectively. [PITH_FULL_IMAGE:figures/full_fig_p012_2.png] view at source ↗
Figure 3: Difference heatmaps for the Lucy dataset. The entire model is approximately 930 × 530 × 1600 units. [PITH_FULL_IMAGE:figures/full_fig_p012_3.png] view at source ↗
read the original abstract

Group synchronization is a fundamental task involving the recovery of group elements from pairwise measurements. For orthogonal group synchronization, the most common approach reformulates the problem as a constrained nonconvex optimization and solves it using projection-based methods, such as the generalized power method. However, these methods rely on exact SVD or QR decompositions in each iteration, which are computationally expensive and become a bottleneck for large-scale problems. In this paper, we propose a Newton-Schulz-based Riemannian Gradient Scheme (NS-RGS) for orthogonal group synchronization that significantly reduces computational cost by replacing the SVD or QR step with the Newton-Schulz iteration. This approach leverages efficient matrix multiplications and aligns perfectly with modern GPU/TPU architectures. By employing a refined leave-one-out analysis, we overcome the challenge arising from statistical dependencies, and establish that NS-RGS with spectral initialization achieves linear convergence to the target solution up to near-optimal statistical noise levels. Experiments on synthetic data and real-world global alignment tasks demonstrate that NS-RGS attains accuracy comparable to state-of-the-art methods such as the generalized power method, while achieving nearly a 2$\times$ speedup.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The manuscript introduces NS-RGS, a Newton-Schulz-based Riemannian gradient scheme for orthogonal group synchronization. It replaces per-iteration SVD or QR factorizations with Newton-Schulz iterations to reduce computational cost and better suit GPU/TPU architectures. The central theoretical claim is that, with spectral initialization, NS-RGS achieves linear convergence to the target solution up to near-optimal statistical noise levels; this is established via a refined leave-one-out argument intended to handle statistical dependencies among pairwise measurements. Experiments on synthetic data and real-world global alignment tasks report accuracy comparable to the generalized power method together with an approximately 2x speedup.

Significance. If the convergence guarantee holds, the work supplies a practical, hardware-aligned algorithm for large-scale group synchronization with both theoretical support and empirical validation. The emphasis on matrix-multiplication-only iterations and the attempt to derive linear rates under realistic noise models are positive features that could influence subsequent algorithmic development in statistical estimation on manifolds.

major comments (2)
  1. [§4] §4 (Convergence Analysis): The linear convergence claim up to near-optimal noise rests entirely on the refined leave-one-out argument. The manuscript must demonstrate, with explicit bounds, that all cross terms induced by the dependence of each measurement on multiple gradient evaluations are controlled uniformly throughout the induction; otherwise the contraction factor may degrade and the noise tolerance may fall short of the stated near-optimal level. A concrete inequality isolating the held-out measurement from the current iterate would directly address this load-bearing step.
  2. [Theorem 4.1] Theorem 4.1 (or equivalent statement of the main convergence result): The proof sketch does not yet make clear how the spectral initialization error is propagated through the first few Newton-Schulz steps while preserving the linear rate; an explicit induction hypothesis that accounts for the approximation error of the Newton-Schulz iteration itself is required.
minor comments (3)
  1. The abstract states 'nearly a 2× speedup' without specifying the matrix dimensions or hardware on which the timing was measured; adding this information would improve reproducibility.
  2. Notation for the orthogonal group and the Riemannian metric is introduced without a dedicated preliminary section; a short paragraph collecting definitions would aid readers unfamiliar with the manifold setting.
  3. In the experimental section, the choice of stopping tolerance and the number of Newton-Schulz iterations per gradient step should be stated explicitly rather than left as 'default values.'

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive comments on the convergence analysis. The suggestions will help strengthen the presentation of the leave-one-out argument and the induction. We address each major comment below and will incorporate the requested clarifications into the revised manuscript.

read point-by-point responses
  1. Referee: [§4] §4 (Convergence Analysis): The linear convergence claim up to near-optimal noise rests entirely on the refined leave-one-out argument. The manuscript must demonstrate, with explicit bounds, that all cross terms induced by the dependence of each measurement on multiple gradient evaluations are controlled uniformly throughout the induction; otherwise the contraction factor may degrade and the noise tolerance may fall short of the stated near-optimal level. A concrete inequality isolating the held-out measurement from the current iterate would directly address this load-bearing step.

    Authors: We agree that explicit control of the cross terms is essential to confirm the contraction factor and near-optimal noise tolerance. In the revised version we will add a supporting lemma that isolates the held-out measurement from the current iterate and supplies uniform bounds on all cross terms throughout the induction. These bounds will be derived from the fact that each measurement participates in a bounded number of gradient evaluations and will be shown to contribute only lower-order terms that do not degrade the linear rate or the stated noise level. revision: yes

  2. Referee: [Theorem 4.1] Theorem 4.1 (or equivalent statement of the main convergence result): The proof sketch does not yet make clear how the spectral initialization error is propagated through the first few Newton-Schulz steps while preserving the linear rate; an explicit induction hypothesis that accounts for the approximation error of the Newton-Schulz iteration itself is required.

    Authors: We acknowledge that the propagation of the spectral initialization error through the initial Newton-Schulz steps needs to be stated more explicitly. The Newton-Schulz iteration converges quadratically to the orthogonal factor, so its approximation error becomes negligible after a fixed number of steps. In the revision we will strengthen the induction hypothesis of Theorem 4.1 to include an explicit additive term for the Newton-Schulz approximation error. We will prove that this term is absorbed into the linear contraction after the first few iterations without affecting the overall rate or the noise tolerance. revision: yes
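
For reference, the quadratic contraction the response appeals to is the standard property of the Newton-Schulz iteration and is easy to verify in isolation; the bound below concerns the classical iteration only, not the paper's specific error propagation. Writing $E_k = I - X_k^\top X_k$ for the orthogonality error of $X_{k+1} = \tfrac{1}{2} X_k (3I - X_k^\top X_k)$, a direct expansion gives

\[
E_{k+1} \;=\; \tfrac{3}{4}\, E_k^{2} \;+\; \tfrac{1}{4}\, E_k^{3},
\qquad\text{so}\qquad
\|E_{k+1}\| \;\le\; \|E_k\|^{2} \quad\text{whenever } \|E_k\| \le 1,
\]

which is why a fixed, small number of Newton-Schulz steps suffices to make the orthogonalization error negligible, as the authors state.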

Circularity Check

0 steps flagged

No significant circularity in the convergence derivation.

full rationale

The paper establishes linear convergence of NS-RGS with spectral initialization via a refined leave-one-out analysis that handles statistical dependencies in pairwise measurements. This is presented as an independent technical step to address a known challenge, not as a self-definition or a fitted quantity renamed as a prediction. No load-bearing self-citations, imported uniqueness theorems, or ansatzes smuggled via prior author work are indicated. The derivation combines standard Riemannian gradient steps with Newton-Schulz iteration for efficiency and appears self-contained against external benchmarks such as generalized power method comparisons.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract provides no explicit free parameters, new axioms, or invented entities; the method builds on standard Riemannian optimization and statistical assumptions for group synchronization.

pith-pipeline@v0.9.0 · 5511 in / 942 out tokens · 38901 ms · 2026-05-10T18:44:09.285738+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A second-order method landing on the Stiefel manifold via Newton–Schulz iteration

    math.OC · 2026-05 · unverdicted · novelty 7.0

    A second-order method achieves local quadratic convergence on the Stiefel manifold without retractions by combining a modified Newton tangent step with Newton-Schulz normal steps for constraint satisfaction.

  2. PolarAdamW: Disentangling Spectral Control and Schur Gauge-Equivariance in Matrix Optimisation

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    PolarAdamW disentangles spectral control from gauge-equivariance in matrix optimizers, with experiments demonstrating their distinct roles on standard versus symmetry-aware neural networks.

Reference graph

Works this paper leans on

48 extracted references · cited by 2 Pith papers

  [1] Emmanuel Abbe, Jianqing Fan, Kaizheng Wang, and Yiqiao Zhong. Entrywise eigenvector analysis of random matrices with low expected rank. Ann. Stat., 48(3):1452, 2020.
  [2] Mica Arie-Nachimson, Shahar Z Kovalsky, Ira Kemelmacher-Shlizerman, Amit Singer, and Ronen Basri. Global motion estimation from point matches. In 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, pages 81–88. IEEE, 2012.
  [3] Afonso S Bandeira. Random Laplacian matrices and convex relaxations. Found. Comput. Math., 18(2):345–379, 2018.
  [4] Afonso S Bandeira, Nicolas Boumal, and Amit Singer. Tightness of the maximum likelihood semidefinite relaxation for angular synchronization. Math. Program., 163(1):145–167, 2017.
  [5] Afonso S Bandeira, Amit Singer, and Daniel A Spielman. A Cheeger inequality for the graph connection Laplacian. SIAM J. Matrix Anal. Appl., 34(4):1611–1630, 2013.
  [6] Jeremy Bernstein and Laker Newhouse. Modular duality in deep learning. In Forty-second International Conference on Machine Learning, 2025.
  [7] Nicolas Boumal. A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints, Jun 2015.
  [8] Nicolas Boumal. Nonconvex phase synchronization. SIAM J. Optim., 26(4):2355–2377, 2016.
  [9] Nicolas Boumal. An Introduction to Optimization on Smooth Manifolds. Cambridge University Press, 2023.
  [10] Avishek Chatterjee and Venu Madhav Govindu. Robust relative rotation averaging. IEEE Trans. Pattern Anal. Mach. Intell., 40(4):958–972, 2017.
  [11] Sijin Chen, Xiwei Cheng, and Anthony Man-Cho So. Non-convex joint community detection and group synchronization via generalized power method. In International Conference on Artificial Intelligence and Statistics, pages 2899–2907. PMLR, 2024.
  [12] Xin Chen, Chunfeng Cui, Deren Han, and Liqun Qi. Non-convex pose graph optimization in SLAM via proximal linearized Riemannian ADMM. Journal of Optimization Theory and Applications, 206(3):78, 2025.
  [13] Yuxin Chen, Yuejie Chi, Jianqing Fan, and Cong Ma. Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval. Math. Program., 176:5–37, 2019.
  [14] Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, and Yuling Yan. Noisy matrix completion: Understanding statistical guarantees for convex relaxation via nonconvex optimization. SIAM J. Optim., 30(4):3098–3121, 2020.
  [15] Mihai Cucuringu, Yaron Lipman, and Amit Singer. Sensor network localization by eigenvector synchronization over the Euclidean group. ACM Trans. Sens. Netw., 8(3):1–42, 2012.
  [16] Chandler Davis and William Morton Kahan. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal., 7(1):1–46, 1970.
  [17] Damek Davis and Dmitriy Drusvyatskiy. When do spectral gradient updates help in deep learning?, Dec 2025.
  [18] Chao Gao and Anderson Y Zhang. Optimal orthogonal group synchronization and rotation group synchronization. Inf. Inference J. IMA, 12(2):591–632, 2023.
  [19] Meng Huang and Yang Wang. Linear convergence of randomized Kaczmarz method for solving complex-valued phaseless equations. SIAM J. Imag. Sci., 15(2):989–1016, 2022.
  [20] Qi-Xing Huang and Leonidas Guibas. Consistent shape maps via semidefinite programming. In Computer Graphics Forum, volume 32, pages 177–186. Wiley Online Library, 2013.
  [21] Keller Jordan, Yuchen Jin, Vlado Boza, You Jiacheng, Franz Cesista, Laker Newhouse, and Jeremy Bernstein. Muon: An optimizer for hidden layers in neural networks, 2024.
  [22] Charles Kenney and Alan J Laub. Rational iterative methods for the matrix sign function. SIAM J. Matrix Anal. Appl., 12(2):273–291, 1991.
  [23] Dmitry Kovalev. Understanding gradient orthogonalization for deep learning via non-Euclidean trust-region optimization, Mar 2025.
  [24] Jiaxiang Li and Mingyi Hong. A note on the convergence of Muon, Feb 2025.
  [25] Shuyang Ling. Improved performance guarantees for orthogonal group synchronization via generalized power method. SIAM J. Optim., 32(2):1018–1048, 2022.
  [26] Shuyang Ling. Solving orthogonal group synchronization via convex and low-rank optimization: tightness and landscape analysis. Math. Program., 200(1):589–628, 2023.
  [27] Shuyang Ling. Local geometry determines global landscape in low-rank factorization for synchronization. Found. Comput. Math., pages 1–33, 2025.
  [28] Huikang Liu, Xiao Li, and Anthony Man-Cho So. ReSync: Riemannian subgradient-based robust rotation synchronization. In Advances in Neural Information Processing Systems, volume 36, pages 5236–. Curran Associates, Inc., 2023.
  [30] Huikang Liu, Man-Chung Yue, and Anthony Man-Cho So. On the estimation performance and convergence rate of the generalized power method for phase synchronization. SIAM J. Optim., 27(4):2426–2446, 2017.
  [31] Huikang Liu, Man-Chung Yue, and Anthony Man-Cho So. A unified approach to synchronization problems over subgroups of the orthogonal group. Appl. Comput. Harmon. Anal., 66:320–372, 2023.
  [32] Cong Ma, Kaizheng Wang, Yuejie Chi, and Yuxin Chen. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution. Found. Comput. Math., 20(3):451–632, 2019.
  [33] Yuji Nakatsukasa and Nicholas J Higham. Backward stability of iterations for computing the polar decomposition. SIAM J. Matrix Anal. Appl., 33(2):460–479, 2012.
  [34] Nicholas J Higham. Functions of Matrices: Theory and Computation. SIAM, 2008.
  [35] Onur Özyeşil, Vladislav Voroninski, Ronen Basri, and Amit Singer. A survey of structure from motion. Acta Numer., 26:305–364, 2017.
  [36] Haiyang Peng, Deren Han, Linbin Li, and Meng Huang. Noisy phase retrieval from subgaussian measurements, Dec 2024.
  [37] David M Rosen, Luca Carlone, Afonso S Bandeira, and John J Leonard. SE-Sync: A certifiably correct algorithm for synchronization over the special Euclidean group. The International Journal of Robotics Research, volume 38. Sage Publications, 2019.
  [38] Wei Shen, Ruichuan Huang, Minhui Huang, Cong Shen, and Jiawei Zhang. Muon optimizes under spectral norm constraints, Jun 2025.
  [39] Wei Shen, Ruichuan Huang, Minhui Huang, Cong Shen, and Jiawei Zhang. On the convergence analysis of Muon, May 2025.
  [40] Yoel Shkolnisky and Amit Singer. Viewing direction estimation in cryo-EM using synchronization. SIAM J. Imag. Sci., 5(3):1088–1110, 2012.
  [41] Amit Singer. Angular synchronization by eigenvectors and semidefinite programming. Appl. Comput. Harmon. Anal., 30(1):20–36, 2011.
  [42] Amit Singer. Mathematics for cryo-electron microscopy. In Proceedings of the International Congress of Mathematicians: Rio de Janeiro 2018, pages 3995–4014. World Scientific, 2018.
  [43] Amit Singer, Zhizhen Zhao, Yoel Shkolnisky, and Ronny Hadani. Viewing angle classification of cryo-electron microscopy images using eigenvectors. SIAM J. Imag. Sci., 4(2):723–759, 2011.
  [44] Gilbert W Stewart and Ji-guang Sun. Matrix Perturbation Theory. Elsevier Science, 1990.
  [45] Lanhui Wang and Amit Singer. Exact and stable recovery of rotations for robust synchronization. Inf. Inference J. IMA, 2(2):145–193, 2013.
  [46] Yiqiao Zhong and Nicolas Boumal. Near-optimal bounds for phase synchronization. SIAM J. Optim., 28(2):989–1016, 2018.
  [47] Linglingzhi Zhu, Chong Li, and Anthony Man-Cho So. Rotation group synchronization via quotient manifold, Jun 2023.
  [48] Linglingzhi Zhu, Jinxin Wang, and Anthony Man-Cho So. Orthogonal group synchronization with incomplete measurements: Error bounds and linear convergence of the generalized power method, Dec 2021.