pith. machine review for the scientific record.

arxiv: 2605.13437 · v1 · submitted 2026-05-13 · 🧮 math.NA · cs.IT · cs.NA · math.IT

Recognition: unknown

Revisiting CUR Perturbation Analysis: A Local Tangent-Space Expansion

Longxiu Huang

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 18:24 UTC · model grok-4.3

classification 🧮 math.NA · cs.IT · cs.NA · math.IT
keywords CUR decomposition · perturbation analysis · Fréchet derivative · low-rank approximation · oblique projector · tangent space · sampling · matrix recovery

The pith

The Fréchet derivative of the rank-truncated CUR map is a sampling-induced oblique tangent-space projector that annihilates sampling-invisible perturbations to first order.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops a local perturbation expansion for the rank-truncated CUR decomposition with fixed row and column indices near a low-rank matrix. It establishes that the derivative of this map is an oblique projector onto the tangent space induced by the sampling. As a result, the first-order recovery error depends only on the component of the perturbation visible to the selected rows and columns. Perturbations in the kernel of this projector are invisible and do not contribute to the linear error term. This provides a finer local analysis than classical bounds based on the full perturbation norm and contrasts with the orthogonal projection behavior of SVD truncation.
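To fix notation for what follows, here is a compact sketch of the map under discussion. The index-set notation and the truncation convention (truncate the intersection block to rank r before taking the pseudoinverse) are assumptions consistent with standard rank-truncated CUR, not taken verbatim from the paper.

```latex
% Fixed row indices I and column indices J; [\,\cdot\,]_r = best rank-r truncation.
% With C = A_{:,J}, R = A_{I,:}, U = A_{I,J}, one common convention is
\[
  \mathrm{CUR}_r(A) \;=\; C\,[U]_r^{\dagger}\,R .
\]
% Near an admissible rank-r matrix A_0 (so that CUR_r(A_0) = A_0), the
% paper's expansion reads
\[
  \mathrm{CUR}_r(A_0 + E) \;=\; A_0 \;+\; \mathcal{P}_{I,J}(E) \;+\; O(\|E\|^2),
  \qquad \mathcal{P}_{I,J}^{\,2} = \mathcal{P}_{I,J},
\]
% with P_{I,J} the sampling-induced oblique projector onto the tangent space;
% any E with \mathcal{P}_{I,J}(E) = 0 contributes no first-order error.
```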

Core claim

The Fréchet derivative of the rank-truncated CUR map is a sampling-induced oblique tangent-space projector determined by the selected rows and columns. Consequently, the local recovery error for an underlying low-rank matrix is governed not by the full perturbation norm alone, but by the image of the perturbation under this sampling-induced tangent projector. In particular, perturbations that are invisible to the selected rows and columns are removed to first order. The paper compares this behavior with the classical local expansion of the rank-r SVD truncation.

What carries the argument

sampling-induced oblique tangent-space projector determined by the selected rows and columns

If this is right

  • Local recovery error is determined by the projected perturbation under the oblique projector rather than the full norm.
  • Perturbations in the kernel of the projector are removed to first order.
  • SVD truncation removes normal-space (orthogonal) perturbations to first order, while rank-truncated CUR removes perturbations in the kernel of the sampling-induced projector.
  • Numerical experiments confirm the predicted first- and second-order local convergence rates; a minimal finite-difference check is sketched below.
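If the derivative really is a projector, applying its linearization twice should match applying it once. Here is a minimal finite-difference check under the truncation convention sketched earlier; the sizes, seed, and index choices are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 5, 2
A0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r point
I, J = [0, 1, 2], [0, 1, 2]  # fixed row / column indices (|I|, |J| >= r)

def cur_r(A):
    """Fixed-index rank-truncated CUR: C @ pinv([U]_r) @ R (assumed convention)."""
    C, R, U = A[:, J], A[I, :], A[np.ix_(I, J)]
    u, s, vt = np.linalg.svd(U)
    Ur = (u[:, :r] * s[:r]) @ vt[:r, :]  # best rank-r truncation of U
    return C @ np.linalg.pinv(Ur) @ R

def dcur(E, h=1e-6):
    """Central finite-difference directional derivative of cur_r at A0."""
    return (cur_r(A0 + h * E) - cur_r(A0 - h * E)) / (2 * h)

E = rng.standard_normal((m, n))
PE = dcur(E)
# idempotency check: P(P(E)) should agree with P(E) up to finite-difference error
print(np.linalg.norm(dcur(PE) - PE) / np.linalg.norm(PE))
```

A small printed ratio (limited by the finite-difference step, not machine precision) is consistent with the derivative being an idempotent, i.e. oblique, projector.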

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This framework could guide the choice of row and column indices to minimize the dimension of the invisible perturbation space for improved robustness.
  • Similar tangent-space analyses might apply to other sampling-based low-rank methods beyond CUR.
  • The approach opens the door to second-order expansions and higher-order perturbation theory for CUR maps.

Load-bearing premise

The underlying matrix is close to an admissible rank-r matrix with the chosen indices fixed, making the rank-truncated CUR map Fréchet differentiable at that point.
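For concreteness, the admissibility premise can be restated as follows; the neighborhood-size heuristic at the end is an assumption, not a bound stated in the abstract.

```latex
% Admissibility at A_0 with fixed index sets I, J:
\[
  \operatorname{rank}(A_0) = r ,
  \qquad
  \sigma_r\!\bigl((A_0)_{I,J}\bigr) > 0 ,
\]
% i.e. the intersection block has full rank r. Since the rank-r truncation
% and pseudoinverse vary smoothly while the r-th singular value of the
% intersection stays bounded away from zero, one expects Fréchet
% differentiability on a neighborhood whose radius scales with
% \sigma_r\!\bigl((A_0)_{I,J}\bigr) (heuristic; the paper does not quantify it).
```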

What would settle it

Construct a perturbation matrix lying in the kernel of the derived oblique projector and verify that the first-order term in the CUR recovery error vanishes, while a generic perturbation produces a linear error matching the projected norm.
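A minimal numerical sketch of this experiment, under the same assumed fixed-index convention as above. The simplest kernel directions are perturbations supported off the selected rows and columns, which leave C, U, and R unchanged; all sizes and indices here are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 7, 2
A0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
I, J = [0, 1, 2], [0, 1, 2]  # fixed row / column indices

def cur_r(A):
    """Fixed-index rank-truncated CUR: C @ pinv([U]_r) @ R (assumed convention)."""
    C, R, U = A[:, J], A[I, :], A[np.ix_(I, J)]
    u, s, vt = np.linalg.svd(U)
    return C @ np.linalg.pinv((u[:, :r] * s[:r]) @ vt[:r, :]) @ R

# 1) sampling-invisible perturbation: zero on selected rows and columns,
#    hence in the kernel of the sampling-induced projector
E_ker = rng.standard_normal((m, n))
E_ker[I, :] = 0.0
E_ker[:, J] = 0.0

# 2) generic perturbation
E_gen = rng.standard_normal((m, n))

for t in [1e-2, 1e-3, 1e-4]:
    err_ker = np.linalg.norm(cur_r(A0 + t * E_ker) - A0)
    err_gen = np.linalg.norm(cur_r(A0 + t * E_gen) - A0)
    print(f"t={t:.0e}  kernel error={err_ker:.2e}  generic error={err_gen:.2e}")
```

The generic error should shrink linearly in t, while the kernel perturbation leaves C, U, and R untouched and so produces error at floating-point noise level; for this particular kernel direction the removal is exact, not merely first-order.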

Figures

Figures reproduced from arXiv: 2605.13437 by Longxiu Huang.

Figure 1. Generic perturbation. The observed CUR and SVD recovery errors are compared with … [image: figures/full_fig_p015_1.png]
Figure 2. Comparison between rank-truncated CUR and rank-… [image: figures/full_fig_p016_2.png]
Figure 3. Gradually visible perturbations for three perturbation sizes. The parameter … [image: figures/full_fig_p017_3.png]
Original abstract

CUR decompositions approximate a matrix using selected columns, rows, and their intersection. Classical CUR theory provides exactness results for low-rank matrices and perturbation bounds controlled by the size of the noise. In this work we develop a local perturbation expansion for a fixed-index rank-truncated CUR map near an admissible rank-r matrix. We show that the Fréchet derivative of the rank-truncated CUR map is a sampling-induced oblique tangent-space projector determined by the selected rows and columns. Consequently, the local recovery error for an underlying low-rank matrix is governed not by the full perturbation norm alone, but by the image of the perturbation under this sampling-induced tangent projector. In particular, perturbations that are invisible to the selected rows and columns are removed to first order. We compare this behavior with the classical local expansion of the rank-r SVD truncation. SVD removes orthogonal-normal perturbations to first order, whereas rank-truncated CUR removes perturbations in the kernel of the sampling-induced oblique tangent projector. Numerical experiments illustrate these regimes and confirm the predicted first- and second-order local rates.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper develops a local perturbation expansion for the fixed-index rank-truncated CUR map near an admissible rank-r matrix. It shows via Fréchet calculus that the derivative of this map is a sampling-induced oblique tangent-space projector determined by the selected rows and columns. Consequently, the first-order recovery error is governed by the image of the perturbation under this projector (rather than the full perturbation norm), so that perturbations in the projector's kernel are annihilated to first order. The analysis is compared to the classical orthogonal-projector expansion for rank-r SVD truncation, and numerical experiments are used to illustrate the predicted first- and second-order rates.

Significance. If the central derivation holds, the result supplies a geometrically precise local error characterization for CUR that is unavailable from classical norm-based bounds. The explicit identification of the oblique tangent projector (and its kernel) distinguishes CUR behavior from SVD and explains why certain sampling-invisible perturbations are removed at first order. The derivation is parameter-free and rests only on standard Fréchet differentiability at admissible points, which is a strength. This refined view may be useful in applications where CUR is chosen for column/row interpretability and where perturbation directions relative to the sampling are known a priori.

minor comments (3)
  1. [§2.2] Definition 2.3: the admissibility condition (invertibility of the intersection submatrix) is stated, but the radius of the neighborhood on which the map remains differentiable is not quantified; an explicit lower bound in terms of the smallest singular value of the intersection would strengthen the local claim.
  2. [Figure 3] The log-log plots of recovery error versus perturbation size would benefit from overlaid reference lines of slope 1 and 2 to make the predicted rates visually immediate (a plotting sketch follows this list).
  3. Notation: the oblique projector is denoted P_{C,R} in the main text but appears as Π in some displayed equations; consistent use of a single symbol would improve readability.
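A plotting sketch of what minor comment 2 asks for; the perturbation sizes and the stand-in error values here are hypothetical, and the paper's measured errors would replace `err`.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.logspace(-6, -1, 20)      # perturbation sizes (assumed grid)
err = 0.7 * t + 3.0 * t**2       # stand-in for measured recovery error

plt.loglog(t, err, "o-", label="recovery error")
# reference lines anchored at the first data point, slopes 1 and 2 in log-log
plt.loglog(t, err[0] * (t / t[0]) ** 1, "--", label="slope 1 reference")
plt.loglog(t, err[0] * (t / t[0]) ** 2, ":", label="slope 2 reference")
plt.xlabel("perturbation size")
plt.ylabel("error")
plt.legend()
plt.show()
```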

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive and accurate summary of our work on the local Fréchet perturbation analysis of fixed-index rank-truncated CUR. We appreciate the recognition that the derivative is a sampling-induced oblique tangent-space projector, which distinguishes CUR from SVD truncation by annihilating perturbations in the projector's kernel to first order. No specific major comments were raised.

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The derivation applies standard Fréchet calculus to the fixed-index rank-truncated CUR map at admissible rank-r points, identifying its derivative as the sampling-induced oblique projector onto the tangent space. This projector annihilates kernel perturbations to first order by the definition of the differential, yielding the local expansion directly from the chain rule and the map's geometry. The comparison to SVD truncation follows from the orthogonal versus oblique projectors without any reduction to fitted inputs, self-definitional loops, or load-bearing self-citations. The admissibility condition ensures differentiability in a neighborhood, making the entire argument self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Analysis rests on standard assumptions from matrix calculus and CUR theory; no free parameters, new entities, or ad-hoc axioms are introduced in the abstract.

axioms (1)
  • domain assumption: the rank-truncated CUR map is Fréchet differentiable near admissible rank-r matrices with fixed indices.
    Invoked to enable the local expansion and derivative calculation.

pith-pipeline@v0.9.0 · 5490 in / 1116 out tokens · 32940 ms · 2026-05-14T18:24:58.481060+00:00 · methodology

discussion (0)

