Revisiting CUR Perturbation Analysis: A Local Tangent-Space Expansion
Pith reviewed 2026-05-14 18:24 UTC · model grok-4.3
The pith
The Fréchet derivative of the rank-truncated CUR map is a sampling-induced oblique tangent-space projector that annihilates certain perturbations to first order.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The Fréchet derivative of the rank-truncated CUR map is a sampling-induced oblique tangent-space projector determined by the selected rows and columns. Consequently, the local recovery error for an underlying low-rank matrix is governed not by the full perturbation norm alone, but by the image of the perturbation under this sampling-induced tangent projector. In particular, perturbations that are invisible to the selected rows and columns are removed to first order. The paper compares this behavior with the classical local expansion of the rank-r SVD truncation.
What carries the argument
sampling-induced oblique tangent-space projector determined by the selected rows and columns
If this is right
- Local recovery error is determined by the projected perturbation under the oblique projector rather than the full norm.
- Perturbations in the kernel of the projector are removed to first order.
- SVD truncation removes normal-space (orthogonal) perturbations to first order, while CUR removes perturbations in the kernel of the sampling-induced oblique projector.
- Numerical experiments confirm the predicted first- and second-order local convergence rates.
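As a baseline for these claims, the classical fixed-index CUR map A ↦ CU⁺R (with C, R, U the selected columns, rows, and their intersection; this exact form is an assumption here, since the paper's rank-truncated variant is not reproduced on this page) recovers an admissible rank-r matrix exactly, which is the point the local expansion is anchored at. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
A0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # exact rank r

I = J = np.arange(r)  # fixed index sets; admissible iff A0[I, J] is invertible

def cur(A, I, J):
    """Classical fixed-index CUR map A -> C U^+ R."""
    C, R, U = A[:, J], A[I, :], A[np.ix_(I, J)]
    return C @ np.linalg.pinv(U) @ R

err = np.linalg.norm(cur(A0, I, J) - A0)
print(err)  # ~ machine precision: exactness on admissible rank-r matrices
```

With |I| = |J| = r and an invertible intersection, the pseudoinverse is a true inverse and no rank truncation is needed, so this toy case sidesteps the truncation step of the paper's map.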
Where Pith is reading between the lines
- This framework could guide the choice of row and column indices to minimize the dimension of the invisible perturbation space for improved robustness.
- Similar tangent-space analyses might apply to other sampling-based low-rank methods beyond CUR.
- The approach opens the door to second-order expansions and higher-order perturbation theory for CUR maps.
Load-bearing premise
The underlying matrix is close to an admissible rank-r matrix with the chosen indices fixed, making the rank-truncated CUR map Fréchet differentiable at that point.
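One plausible formalization of this premise, using standard CUR notation (the paper's own definitions are not shown on this page, so the display below is an assumption consistent with the abstract):

```latex
% Fixed index sets I (rows) and J (columns) with |I|, |J| >= r.
\[
  C = A_{:,J}, \qquad R = A_{I,:}, \qquad U = A_{I,J}, \qquad
  \mathrm{CUR}_r(A) = C \,(U_r)^{\dagger} R,
\]
% where U_r is the best rank-r approximation of U. Admissibility at A_0:
\[
  \operatorname{rank}\!\big((A_0)_{I,J}\big) = r
  \quad\Longleftrightarrow\quad
  \sigma_r\!\big((A_0)_{I,J}\big) > 0,
\]
% which keeps A \mapsto \mathrm{CUR}_r(A) Fréchet differentiable on a
% neighborhood of A_0 whose size is controlled by \sigma_r((A_0)_{I,J}).
```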
What would settle it
Construct a perturbation matrix lying in the kernel of the derived oblique projector and verify that the first-order term in the CUR recovery error vanishes, while a generic perturbation produces an error that grows linearly with the perturbation size, matching the projected norm.
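This settling experiment is easy to run in the simplest admissible setting, |I| = |J| = r, with the classical map A ↦ CU⁺R (a stand-in assumption for the paper's rank-truncated map; with a square invertible intersection no truncation is needed). A perturbation vanishing on the selected rows and columns lies in the projector's kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
A0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # exact rank r
I = J = np.arange(r)  # fixed indices; A0[I, J] is (generically) invertible

def cur(A):
    """Classical fixed-index CUR map A -> C U^+ R."""
    C, R, U = A[:, J], A[I, :], A[np.ix_(I, J)]
    return C @ np.linalg.pinv(U) @ R

E_gen = rng.standard_normal((n, n))  # generic perturbation
E_ker = E_gen.copy()                 # "invisible" perturbation:
E_ker[I, :] = 0.0                    #   zero on the selected rows
E_ker[:, J] = 0.0                    #   zero on the selected columns

errs_gen, errs_ker = [], []
for t in (1e-2, 1e-3):
    errs_gen.append(np.linalg.norm(cur(A0 + t * E_gen) - A0))
    errs_ker.append(np.linalg.norm(cur(A0 + t * E_ker) - A0))
    print(f"t={t:.0e}  generic: {errs_gen[-1]:.2e}  invisible: {errs_ker[-1]:.2e}")
```

In this toy case the invisible error is in fact identically zero, not merely second order: zeroing the sampled rows and columns leaves C, U, and R untouched, so the reconstruction does not move at all. With oversampled indices and genuine rank truncation, generic kernel directions would instead show the O(t²) residual the paper predicts.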
read the original abstract
CUR decompositions approximate a matrix using selected columns, rows, and their intersection. Classical CUR theory provides exactness results for low-rank matrices and perturbation bounds controlled by the size of the noise. In this work we develop a local perturbation expansion for a fixed-index rank-truncated CUR map near an admissible rank-r matrix. We show that the Fréchet derivative of the rank-truncated CUR map is a sampling-induced oblique tangent-space projector determined by the selected rows and columns. Consequently, the local recovery error for an underlying low-rank matrix is governed not by the full perturbation norm alone, but by the image of the perturbation under this sampling-induced tangent projector. In particular, perturbations that are invisible to the selected rows and columns are removed to first order. We compare this behavior with the classical local expansion of the rank-r SVD truncation. SVD removes orthogonal-normal perturbations to first order, whereas rank-truncated CUR removes perturbations in the kernel of the sampling-induced oblique tangent projector. Numerical experiments illustrate these regimes and confirm the predicted first- and second-order local rates.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper develops a local perturbation expansion for the fixed-index rank-truncated CUR map near an admissible rank-r matrix. It shows via Fréchet calculus that the derivative of this map is a sampling-induced oblique tangent-space projector determined by the selected rows and columns. Consequently, the first-order recovery error is governed by the image of the perturbation under this projector (rather than the full perturbation norm), so that perturbations in the projector's kernel are annihilated to first order. The analysis is compared to the classical orthogonal-projector expansion for rank-r SVD truncation, and numerical experiments are used to illustrate the predicted first- and second-order rates.
Significance. If the central derivation holds, the result supplies a geometrically precise local error characterization for CUR that is unavailable from classical norm-based bounds. The explicit identification of the oblique tangent projector (and its kernel) distinguishes CUR behavior from SVD and explains why certain sampling-invisible perturbations are removed at first order. The derivation is parameter-free and rests only on standard Fréchet differentiability at admissible points, which is a strength. This refined view may be useful in applications where CUR is chosen for column/row interpretability and where perturbation directions relative to the sampling are known a priori.
minor comments (3)
- [§2.2] §2.2, Definition 2.3: the admissibility condition (invertibility of the intersection submatrix) is stated but the radius of the neighborhood on which the map remains differentiable is not quantified; an explicit lower bound in terms of the smallest singular value of the intersection would strengthen the local claim.
- [Figure 3] Figure 3: the log-log plots of recovery error versus perturbation size would benefit from overlaid reference lines of slope 1 and 2 to make the predicted rates visually immediate.
- Notation: the oblique projector is denoted P_{C,R} in the main text but appears as Π in some displayed equations; consistent use of a single symbol would improve readability.
Simulated Author's Rebuttal
We thank the referee for the positive and accurate summary of our work on the local Fréchet perturbation analysis of fixed-index rank-truncated CUR. We appreciate the recognition that the derivative is a sampling-induced oblique tangent-space projector, which distinguishes CUR from SVD truncation by annihilating perturbations in the projector's kernel to first order. No specific major comments were raised.
Circularity Check
No significant circularity detected
full rationale
The derivation applies standard Fréchet calculus to the fixed-index rank-truncated CUR map at admissible rank-r points, identifying its derivative as the sampling-induced oblique projector onto the tangent space. This projector annihilates kernel perturbations to first order by the definition of the differential, yielding the local expansion directly from the chain rule and the map's geometry. The comparison to SVD truncation follows from the orthogonal versus oblique projectors without any reduction to fitted inputs, self-definitional loops, or load-bearing self-citations. The admissibility condition ensures differentiability in a neighborhood, making the entire argument self-contained against external benchmarks.
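In the square-intersection case (|I| = |J| = r, an assumption standing in for the paper's general map) the projector claim can be checked directly: applying the product rule to A ↦ CU⁻¹R at a rank-r point gives a closed-form first-order term, and that linear map is idempotent, hence a (generally oblique) projector. A minimal numerical confirmation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 3
A0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # exact rank r
I = J = np.arange(r)
C, R = A0[:, J], A0[I, :]
Uinv = np.linalg.inv(A0[np.ix_(I, J)])  # admissibility: invertible intersection

def dcur(E):
    """Product rule on A -> C U^{-1} R at A0: the candidate Frechet derivative."""
    return (E[:, J] @ Uinv @ R
            + C @ Uinv @ E[I, :]
            - C @ Uinv @ E[np.ix_(I, J)] @ Uinv @ R)

E = rng.standard_normal((n, n))
LE = dcur(E)
resid = np.linalg.norm(dcur(LE) - LE) / np.linalg.norm(LE)
print(resid)  # ~ 0: dcur is idempotent, i.e. a (generally oblique) projector
```

The same closed form exposes the kernel explicitly: dcur(E) = 0 exactly when E vanishes on the selected rows and columns, which is the "invisible perturbation" space described above.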
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption The rank-truncated CUR map is Fréchet differentiable near admissible rank-r matrices with fixed indices