pith. machine review for the scientific record.

arxiv: 2604.07158 · v1 · submitted 2026-04-08 · 🧮 math.NA · cs.NA

Recognition: unknown

Deterministic sketching for Krylov subspace methods

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:09 UTC · model grok-4.3

classification 🧮 math.NA cs.NA
keywords deterministic sketching · Krylov subspace methods · subspace embeddings · row subset selection · matrix functions · linear systems · eigenvalue problems

The pith

Row subset selection techniques provide deterministic sketching for Krylov subspace methods with subspace embeddings that hold with probability 1.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines the subspace embeddings resulting from arbitrary sketching matrices applied to Krylov subspace methods. From this examination, it develops a deterministic sketching strategy that uses row subset selection to achieve embeddings guaranteed to hold with probability 1. Deterministic versions of Krylov methods are proposed for three key tasks: approximating matrix functions, solving linear systems, and computing eigenvalues. In numerical tests, these deterministic methods perform similarly to the established randomly sketched methods. This offers a way to obtain the benefits of sketching without depending on random choices.
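The embedding property at stake can be made concrete in a few lines. The sketch below (illustrative only; the one-pass Gram-Schmidt setup and the unweighted leverage-score row selection are stand-ins, not the paper's construction) measures how far the singular values of a sketched orthonormal Krylov basis stray from 1, which is exactly the subspace-embedding distortion, for a random Gaussian sketch and for a deterministic row subset:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 500, 20, 80  # ambient dimension, Krylov dimension, sketch size

# Orthonormal basis of a Krylov subspace K_m(A, b) (illustrative setup,
# one Gram-Schmidt pass; not the paper's algorithm).
A = rng.standard_normal((n, n)) / np.sqrt(n)
v = rng.standard_normal(n)
V = np.zeros((n, m))
for j in range(m):
    v = v / np.linalg.norm(v)
    V[:, j] = v
    w = A @ v
    v = w - V[:, : j + 1] @ (V[:, : j + 1].T @ w)
Q, _ = np.linalg.qr(V)  # re-orthonormalize for a clean check

def distortion(S):
    # S is an eps-subspace embedding for range(Q) exactly when every
    # singular value of S @ Q lies in [1 - eps, 1 + eps].
    sv = np.linalg.svd(S @ Q, compute_uv=False)
    return max(sv.max() - 1.0, 1.0 - sv.min())

# Random Gaussian sketch: the embedding holds only with high probability.
G = rng.standard_normal((s, n)) / np.sqrt(s)

# Deterministic row subset: keep the s rows of Q with largest leverage
# scores (a simple stand-in rule; the paper's selection may differ, and
# practical schemes also reweight the kept rows).
rows = np.argsort(np.sum(Q**2, axis=1))[-s:]
P = np.eye(n)[rows]

print("Gaussian distortion:", distortion(G))
print("row-subset distortion:", distortion(P))
```

For the deterministic selection the distortion is a checkable, reproducible number rather than a probabilistic guarantee, which is the point of the paper's probability-1 claim.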

Core claim

Analyzing subspace embeddings obtained by arbitrary sketching matrices leads to a deterministic sketching approach using row subset selection that yields embeddings holding with probability 1. Deterministically sketched Krylov methods for matrix functions, linear systems, and eigenvalue problems are proposed that show similar performance to their randomly sketched counterparts.

What carries the argument

Row subset selection techniques for producing subspace embeddings in Krylov subspaces.

Load-bearing premise

Row subset selection techniques can be applied to produce the required subspace embeddings for the specific Krylov subspaces arising in the target applications.

What would settle it

A benchmark problem for linear systems or eigenvalue computations where the deterministically sketched Krylov method shows noticeably lower accuracy or slower convergence than the random sketching version.
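A minimal harness in that spirit might compare sketch-and-solve least-squares residuals against the unsketched optimum (hypothetical setup: `V` is a synthetic ill-conditioned tall matrix standing in for a Krylov basis, and the leverage-score selection is a stand-in for the paper's row subset rule):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 400, 15, 60

# Synthetic ill-conditioned tall matrix standing in for a Krylov basis,
# plus a right-hand side (hypothetical test problem, not the paper's).
V = np.linalg.qr(rng.standard_normal((n, m)))[0] @ np.diag(np.logspace(0, -6, m))
b = rng.standard_normal(n)

def ls_residual(S):
    # Sketch-and-solve least squares: minimize ||S (V y - b)||,
    # then evaluate the true (unsketched) residual of that y.
    y = np.linalg.lstsq(S @ V, S @ b, rcond=None)[0]
    return np.linalg.norm(V @ y - b)

# Random Gaussian sketch vs. deterministic row selection by leverage
# score (a stand-in selection rule; the paper's rule may differ).
G = rng.standard_normal((s, n)) / np.sqrt(s)
rows = np.argsort(np.sum(np.linalg.qr(V)[0] ** 2, axis=1))[-s:]
D = np.eye(n)[rows]

exact = np.linalg.norm(V @ np.linalg.lstsq(V, b, rcond=None)[0] - b)
print("unsketched:", exact)
print("Gaussian sketch:", ls_residual(G))
print("deterministic rows:", ls_residual(D))
```

A substantial, reproducible gap between the deterministic and random columns on a realistic problem would be the kind of counterevidence this criterion asks for.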

read the original abstract

Randomized sketching is currently introduced into every area of numerical linear algebra. In Krylov subspace methods, it allows runtime savings at the cost of small accuracy reductions. This work offers a different view on sketching in Krylov methods by analyzing what subspace embeddings are obtained by arbitrary sketching matrices. The analysis gives rise to a deterministic sketching approach leveraging row subset selection techniques that yield subspace embeddings holding with probability 1. We propose deterministically sketched Krylov methods for matrix functions, linear systems, and eigenvalue problems that show a similar performance to their randomly sketched counterparts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript analyzes subspace embeddings obtained from arbitrary sketching matrices and uses this to derive a deterministic sketching approach based on row subset selection techniques. These yield subspace embeddings that hold with probability 1. The authors then propose deterministically sketched Krylov methods for matrix functions, linear systems, and eigenvalue problems, reporting experimental performance comparable to randomly sketched counterparts.

Significance. A correct deterministic construction would be significant for numerical linear algebra, as it would replace random sketching with reproducible, probability-1 guarantees while retaining runtime savings. The reported experimental similarity to random sketching is encouraging, but the result's impact hinges on whether the deterministic selection can be realized iteratively without precomputing the full Krylov basis.

major comments (1)
  1. [Sections 3 and 4 (deterministic construction and proposed algorithms)] The central derivation (analysis of arbitrary sketching matrices leading to deterministic row-subset-selection embeddings) assumes a fixed, known tall matrix whose column space coincides with the target Krylov subspace. Standard leverage-score or volume sampling therefore requires the entire basis V_k to be available in advance. The manuscript does not supply an adaptive, online selection rule that operates only on vectors computed so far while still guaranteeing the embedding property at every iteration. Without such a rule, the deterministic methods cannot be applied to the iterative construction of Krylov subspaces as claimed.
minor comments (2)
  1. Notation for the sketching matrix S and the resulting embedding constant could be introduced earlier and used consistently when moving from the general analysis to the specific Krylov applications.
  2. [Numerical experiments] The experimental section would benefit from a brief statement of the stopping criteria and tolerance values used for the sketched and deterministic variants to allow direct comparison.
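The major comment can be illustrated directly: a leverage-score-style selection (used here as a stand-in for the paper's row subset selection) is a function of the whole basis, so the rows chosen for K_m need not survive when the subspace grows to K_{m+1}:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

def krylov_basis(m):
    # Orthonormal basis of K_m(A, b), one Gram-Schmidt pass (demo only).
    V = np.zeros((n, m))
    v = b.copy()
    for j in range(m):
        v = v / np.linalg.norm(v)
        V[:, j] = v
        w = A @ v
        v = w - V[:, : j + 1] @ (V[:, : j + 1].T @ w)
    return V

def selected_rows(V, s):
    # Leverage-score row selection (stand-in for the paper's rule):
    # it needs the *whole* basis V before any row can be chosen.
    Q = np.linalg.qr(V)[0]
    return set(np.argsort(np.sum(Q**2, axis=1))[-s:].tolist())

# Growing the subspace by one vector can reshuffle the selected rows,
# which is why an online deterministic rule is the open question here.
r10 = selected_rows(krylov_basis(10), 30)
r11 = selected_rows(krylov_basis(11), 30)
print(len(r10 & r11), "of 30 rows shared between K_10 and K_11")
```

Any adaptive rule would have to commit to rows before later basis vectors exist, while still certifying the embedding at every iteration.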

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for the careful reading and for noting the potential significance of a deterministic construction with probability-1 guarantees. We address the major comment point by point below.

read point-by-point responses
  1. Referee: [Sections 3 and 4 (deterministic construction and proposed algorithms)] The central derivation (analysis of arbitrary sketching matrices leading to deterministic row-subset-selection embeddings) assumes a fixed, known tall matrix whose column space coincides with the target Krylov subspace. Standard leverage-score or volume sampling therefore requires the entire basis V_k to be available in advance. The manuscript does not supply an adaptive, online selection rule that operates only on vectors computed so far while still guaranteeing the embedding property at every iteration. Without such a rule, the deterministic methods cannot be applied to the iterative construction of Krylov subspaces as claimed.

    Authors: We thank the referee for highlighting this important practical consideration. The analysis in Sections 3 and 4 indeed starts from an arbitrary sketching matrix applied to a fixed tall matrix V_k whose columns span the target Krylov subspace; the deterministic row-subset-selection embeddings (via leverage-score or volume sampling) are then obtained by operating on this full V_k. Consequently, the selection step requires the entire basis to be known in advance, and the manuscript does not develop or present an adaptive, online rule that selects rows incrementally from vectors generated so far while preserving the embedding property at each iteration. Our proposed deterministically sketched Krylov methods and the accompanying experiments therefore assume that the full basis can be formed first, after which row selection is performed and the sketched iterations proceed; this setup is what enables the reported performance comparable to randomized sketching for the matrix-function, linear-system, and eigenvalue examples. We agree that this does not yet yield a fully online deterministic procedure for standard iterative Krylov constructions. In the revised manuscript we will add an explicit discussion of this scope and limitation (both in the algorithmic sections and in the conclusions), clarifying that the deterministic approach as derived applies when the subspace basis is available explicitly, while noting the open question of incremental deterministic selection as a direction for future work. This clarification does not alter the core theoretical analysis of arbitrary sketching matrices.

    revision: partial

Circularity Check

0 steps flagged

No circularity: derivation builds on external row-subset-selection results without self-referential reduction

full rationale

The paper analyzes arbitrary sketching matrices to obtain subspace embeddings, then invokes existing row subset selection techniques to produce deterministic embeddings that hold with probability 1. No quoted equations or steps reduce a claimed prediction or first-principles result to a fitted input, self-definition, or load-bearing self-citation chain. The central proposal (deterministically sketched Krylov methods) rests on independent literature for the embedding guarantees rather than re-deriving them from the target application data. The skeptic concern about iterative basis construction is a potential correctness or applicability issue, not a circularity in the derivation itself.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review yields no explicit free parameters, axioms, or invented entities; the work builds on standard row-subset-selection techniques from the literature.

pith-pipeline@v0.9.0 · 5370 in / 1009 out tokens · 57740 ms · 2026-05-10T17:09:53.588777+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

80 extracted references · 8 canonical work pages · 1 internal anchor

  1. [1]

    Deterministic sketching for Krylov subspace methods

    Introduction. Krylov subspace methods are a cornerstone of numerical linear algebra. Over the last decades, a wealth of efficient iterative methods has been developed and analyzed for fundamental linear algebra problems including linear systems, eigenvalue problems, matrix functions, and matrix equations [63, 52, 64, 32, 48, 67]. A recent trend in num...

  2. [2]

    Randomized sketching in Krylov subspace methods. A wide range of established methods in numerical linear algebra builds on the Krylov subspace. For a matrix A ∈ C^{n×n}, a vector b ∈ C^n, and a subspace dimension m ∈ N, it is defined as (2.1) K_m(A, b) = span{b, Ab, …, A^{m−1}b}. Approximations to fundamental linear algebra problems including linear systems, ...

  3. [3]

    achieves this by performing full orthogonalization of every new basis vector with respect to this S-inner product. In contrast, the k-truncated Arnoldi method that has recently been used in [53, 34] performs incomplete orthogonalization only against the k ∈ N previous basis vectors with respect to the standard n-dimensional Euclidean inner product, cf. Al...

  4. [4]

    General subspace embeddings. The exposition in Section 2 aims to reflect the angle from which sketching is typically introduced in the numerical linear algebra context. It suggests a certain historically grounded intertwinedness of the subspace embedding property (2.2) and the choice of randomized sketching transform. In particular, sparse maps and SR...

  5. [5]

    "surprisingly simple and effective"

    Deterministic sketching via row subset selection. This section proposes an alternative approach to sketching in numerical linear algebra. Instead of drawing sketching matrices from random matrix distributions that satisfy subspace embeddings with high probability, we propose the use of deterministic sketching matrices satisfying the same property with ...

  6. [6]

    Algorithms. This section proposes the deterministically sketched Krylov subspace methods dsFOM for approximating the action of a matrix function on a vector, dsGMRES for approximately solving a non-Hermitian linear system, and dsRR for approximating a subset of the eigenvalues and -vectors of a matrix. These are obtained as versions of the randomly sketch...

  7. [7]

    Numerical experiments. In this section, dsFOM (Algorithm 5.1), dsGMRES (Algorithm 5.2), and dsRR (Algorithm 5.3) are each tested on one example problem and compared against their unsketched and randomly sketched counterparts. Throughout the experiments, randomized methods use discrete cosine transforms (DCT) as sketching matrices, MPE uses DEIM and GappyP...

  8. [8]

    Conclusion and outlook. This manuscript proposes a new class of sketched Krylov subspace methods for matrix functions, linear systems, and eigenvalue problems. Leveraging new theoretical insights from the analysis of subspace embeddings obtained by arbitrary sketching matrices, they utilize deterministic row subset selection matrices instead of random...

  9. [9] A. H. Al-Mohy and N. J. Higham, Computing the action of the matrix exponential, with an application to exponential integrators, SIAM J. Sci. Comput., 33 (2011), pp. 488–511

  10. [10] N. Amsel, Y. Baumann, P. Beckman, P. Bürgisser, C. Camaño, T. Chen, E. Chow, A. Damle, M. Derezinski, M. Embree, et al., Linear Systems and Eigenvalue Problems: Open Questions from a Simons Workshop, arXiv preprint arXiv:2602.05394, (2026)

  11. [11] W. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math., 9 (1951), pp. 17–29

  12. [12] O. Balabanov and L. Grigori, Randomized Gram–Schmidt process with application to GMRES, SIAM J. Sci. Comput., 44 (2022), pp. A1450–A1474

  13. [13] B. Beckermann, The condition number of real Vandermonde, Krylov and positive definite Hankel matrices, Numer. Math., 85 (2000), pp. 553–577

  14. [14] C. Bekas, E. Kokiopoulou, and Y. Saad, An estimator for the diagonal of a matrix, Appl. Numer. Math., 57 (2007), pp. 1214–1229

  15. [15] P. Benner, S. Gugercin, and K. Willcox, A survey of projection-based model reduction methods for parametric dynamical systems, SIAM Rev., 57 (2015), pp. 483–531

  16. [16] M. Benzi and P. Boito, Matrix functions in network analysis, GAMM-Mitt., 43 (2020), p. e202000012

  17. [17] M. Benzi, P. Boito, and N. Razouk, Decay properties of spectral projectors with applications to electronic structure, SIAM Rev., 55 (2013), pp. 3–64

  18. [18] M. Benzi and G. H. Golub, Bounds for the entries of matrix functions with applications to preconditioning, BIT, 39 (1999), pp. 417–438

  19. [19] M. Benzi, N. Razouk, et al., Decay bounds and O(n) algorithms for approximating functions of sparse matrices, Electron. Trans. Numer. Anal., 28 (2007), p. 08

  20. [20] K. Bergermann and M. Stoll, Fast computation of matrix function-based centrality measures for layer-coupled multiplex networks, Phys. Rev. E, 105 (2022), p. 034305

  21. [21] ———, Adaptive rational Krylov methods for exponential Runge–Kutta integrators, SIAM J. Matrix Anal. Appl., 45 (2024), pp. 744–770

  22. [22] ———, Gradient flow-based modularity maximization for community detection in multiplex networks, Philos. Trans. A, 383 (2025), p. 20240244

  23. [23] P. Bonacich, Power and centrality: A family of measures, American Journal of Sociology, 92 (1987), pp. 1170–1182

  24. [24] A. Bucci, D. Palitta, and L. Robol, Randomized sketched TT-GMRES for linear systems with tensor structure, SIAM J. Sci. Comput., 47 (2025), pp. A2801–A2827

  25. [25] L. Burke and S. Güttel, Krylov subspace recycling with randomized sketching for matrix functions, SIAM J. Matrix Anal. Appl., 45 (2024), pp. 2243–2262

  26. [26] L. Burke, S. Güttel, and K. M. Soodhalter, GMRES with randomized sketching and deflated restarting, SIAM J. Matrix Anal. Appl., 46 (2025), pp. 702–725

  27. [27] S. Chaturantabut and D. C. Sorensen, Nonlinear model reduction via discrete empirical interpolation, SIAM J. Sci. Comput., 32 (2010), pp. 2737–2764

  28. [28] J. Chung and S. Gazzola, Randomized Krylov methods for inverse problems, arXiv preprint arXiv:2508.20269, (2025)

  29. [29] A. Cortinovis, D. Kressner, and Y. Nakatsukasa, Speeding up Krylov subspace methods for computing f(A)b via randomization, SIAM J. Matrix Anal. Appl., 45 (2024), pp. 619–633

  30. [30] J.-G. De Damas and L. Grigori, Randomized implicitly restarted Arnoldi method for the non-symmetric eigenvalue problem, SIAM J. Matrix Anal. Appl., 46 (2025), pp. 2395–2422

  31. [31] J.-G. de Damas and L. Grigori, Randomized Krylov-Schur eigensolver with deflation, arXiv preprint arXiv:2508.05400, (2025)

  32. [32] J.-G. de Damas, L. Grigori, I. Simunec, and E. Timsit, Randomized orthogonalization and Krylov subspace methods: Principles and algorithms, arXiv preprint arXiv:2512.15455, (2025)

  33. [33] Z. Drmac and S. Gugercin, A new selection operator for the discrete empirical interpolation method—improved a priori error bound and extensions, SIAM J. Sci. Comput., 38 (2016), pp. A631–A648

  34. [34] E. Estrada, Characterization of 3D molecular structure, Chemical Physics Letters, 319 (2000), pp. 713–718

  35. [35] A. Frommer, C. Schimmel, and M. Schweitzer, Analysis of probing techniques for sparse approximation and trace estimation of decaying matrix functions, SIAM J. Matrix Anal. Appl., 42 (2021), pp. 1290–1318

  36. [36] S. Gaudreault, G. Rainwater, and M. Tokman, KIOPS: A fast adaptive Krylov subspace solver for exponential integrators, J. Comput. Phys., 372 (2018), pp. 236–255

  37. [37] W. Gautschi, The condition of polynomials in power form, Math. Comp., 33 (1979), pp. 343–352

  38. [38] M. Ghashami, E. Liberty, J. M. Phillips, and D. P. Woodruff, Frequent directions: Simple and deterministic matrix sketching, SIAM J. Comput., 45 (2016), pp. 1762–1792

  39. [39] A. Girard, A fast 'Monte-Carlo cross-validation' procedure for large least squares problems with noisy data, Numer. Math., 56 (1989), pp. 1–23

  40. [40] G. H. Golub and C. F. Van Loan, Matrix Computations, JHU Press, 2013

  41. [41] N. L. Guidotti, P.-G. Martinsson, J. A. Acebrón, and J. Monteiro, Accelerating a restarted Krylov method for matrix functions with randomization, arXiv preprint arXiv:2503.22631, (2025)

  42. [42] S. Güttel and M. Schweitzer, Randomized sketching for Krylov approximations of large-scale matrix functions, SIAM J. Matrix Anal. Appl., 44 (2023), pp. 1073–1095

  43. [43] S. Güttel and I. Simunec, A sketch-and-select Arnoldi process, SIAM J. Sci. Comput., 46 (2024), pp. A2774–A2797

  44. [44] N. Halko, P.-G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53 (2011), pp. 217–288

  45. [45] J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods, Springer Science & Business Media, 1964

  46. [46] N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, 2008

  47. [47] M. Hochbruck and A. Ostermann, Exponential integrators, Acta Numer., 19 (2010), pp. 209–286

  48. [48] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 2012

  49. [49] M. F. Hutchinson, A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines, Comm. Statist. Simulation Comput., 18 (1989), pp. 1059–1076

  50. [50] Y. Jang, L. Grigori, E. Martin, and C. Content, Randomized flexible GMRES with deflated restarting, Numer. Algorithms, 98 (2025), pp. 431–465

  51. [51] W. B. Johnson and J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space, Contemp. Math., 26 (1984), p. 1

  52. [52] E. Krieger and M. Schweitzer, A general framework for Krylov ODE residuals with applications to randomized Krylov methods, arXiv preprint arXiv:2510.17538, (2025)

  53. [53] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, United States Governm. Press Office Los Angeles, CA, 1950

  54. [54] J. Leskovec, K. J. Lang, A. Dasgupta, and M. W. Mahoney, Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters, Internet Math., 6 (2009), pp. 29–123

  55. [55] E. Liberty, Simple and deterministic matrix sketching, in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, 2013, pp. 581–588

  56. [56] J. Liesen and Z. Strakos, Krylov Subspace Methods: Principles and Analysis, Oxford University Press, 2013

  57. [57] P.-G. Martinsson and J. A. Tropp, Randomized numerical linear algebra: Foundations and algorithms, Acta Numer., 29 (2020), pp. 403–572

  58. [58] X. Meng and M. W. Mahoney, Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression, in Proceedings of the forty-fifth annual ACM symposium on Theory of computing, 2013, pp. 91–100

  59. [59] N. Metropolis and S. Ulam, The Monte Carlo method, J. Amer. Statist. Assoc., 44 (1949), pp. 335–341

  60. [60] C. Moler and C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Rev., 45 (2003), pp. 3–49

  61. [61] Y. Nakatsukasa and J. A. Tropp, Fast and accurate randomized algorithms for linear systems and eigenvalue problems, SIAM J. Matrix Anal. Appl., 45 (2024), pp. 1183–1214

  62. [62] J. Nelson and H. L. Nguyên, OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings, in 2013 IEEE 54th annual symposium on foundations of computer science, IEEE, 2013, pp. 117–126

  63. [63] D. Palitta, M. Schweitzer, and V. Simoncini, Sketched and truncated polynomial Krylov methods: Evaluation of matrix functions, Numer. Linear Algebra Appl., 32 (2025), p. e2596

  64. [64] ———, Sketched and truncated polynomial Krylov subspace methods: Matrix Sylvester equations, Math. Comp., 94 (2025), pp. 1761–1792

  65. [65] B. Peherstorfer, Z. Drmac, and S. Gugercin, Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points, SIAM J. Sci. Comput., 42 (2020), pp. A2837–A2864

  66. [66] P. Pons and M. Latapy, Computing communities in large networks using random walks, in International Symposium on Computer and Information Sciences, Springer, 2005, pp. 284–293

  67. [67] V. Rokhlin, A. Szlam, and M. Tygert, A randomized algorithm for principal component analysis, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 1100–1124

  68. [68] V. Rokhlin and M. Tygert, A fast randomized algorithm for overdetermined linear least-squares regression, Proc. Natl. Acad. Sci. USA, 105 (2008), pp. 13212–13217

  69. [69] R. Y. Rubinstein and D. P. Kroese, Simulation and the Monte Carlo method, John Wiley & Sons, 2016

  70. [70] Y. Saad, Analysis of some Krylov subspace approximations to the matrix exponential operator, SIAM J. Numer. Anal., 29 (1992), pp. 209–228

  71. [71] ———, Iterative Methods for Sparse Linear Systems, SIAM, 2003

  72. [72] ———, Numerical Methods for Large Eigenvalue Problems: Revised Edition, SIAM, 2011

  73. [73] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 7 (1986), pp. 856–869

  74. [74] T. Sarlos, Improved approximation algorithms for large matrices via random projections, in 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), IEEE, 2006, pp. 143–152

  75. [75] V. Simoncini, Computational methods for linear matrix equations, SIAM Rev., 58 (2016), pp. 377–441

  76. [76] D. C. Sorensen and M. Embree, A DEIM induced CUR factorization, SIAM J. Sci. Comput., 38 (2016), pp. A1454–A1482

  77. [77] J. M. Tang and Y. Saad, A probing method for computing the diagonal of a matrix inverse, Numer. Linear Algebra Appl., 19 (2012), pp. 485–501

  78. [78] U. Von Luxburg, A tutorial on spectral clustering, Stat. Comput., 17 (2007), pp. 395–416

  79. [79] D. P. Woodruff, Sketching as a tool for numerical linear algebra, arXiv preprint arXiv:1411.4357, (2014)

  80. [80] R. Zimmermann and K. Willcox, An accelerated greedy missing point estimation procedure, SIAM J. Sci. Comput., 38 (2016), pp. A2827–A2850