A Closed-Form Persistence-Landmark Pipeline for Certified Point-Cloud and Graph Classification
Pith reviewed 2026-05-09 15:37 UTC · model grok-4.3
The pith
PLACE builds classifiers for point clouds and graphs from persistent-homology signatures using only training labels and closed-form rules.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
PLACE is a closed-form pipeline that classifies point clouds and graphs by summing Mitra-Virk single-point coordinate functions over a landmark grid, with weights chosen to maximize the structural distortion constant λ(ν). From this construction it derives an O(kR/(Δ√m_min)) margin-based excess-risk bound, a closed-form Mahalanobis-margin descriptor selector, and a per-prediction certificate, decided at training time, in both non-asymptotic and Gaussian plug-in forms.
What carries the argument
The embedding formed by summing Mitra-Virk coordinate functions over a sparse landmark grid, with weights chosen to maximize the Lipschitz lower bound λ(ν) under a non-interference condition.
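The embedding step can be sketched in a few lines. This is a hypothetical reading, not the paper's definition: here the Mitra-Virk single-point coordinate function is modeled as the l∞ distance from a landmark ν in the birth-death plane to the nearest diagram point (or to the diagonal, whichever is closer); the paper's exact coordinate function and weight rule may differ.

```python
# Hedged sketch of a landmark embedding for persistence diagrams.
# ASSUMPTION: the single-point coordinate function is taken to be the
# l_inf distance from the landmark nu to the nearest diagram point or
# to the diagonal; the paper's exact definition may differ.
import numpy as np

def coordinate(diagram, nu):
    """Single-landmark coordinate of a persistence diagram."""
    pts = np.asarray(diagram, dtype=float)        # shape (n, 2): (birth, death)
    d_pts = np.abs(pts - nu).max(axis=1).min()    # nearest diagram point (l_inf)
    d_diag = abs(nu[1] - nu[0]) / 2.0             # distance from nu to the diagonal
    return min(d_pts, d_diag)

def embed(diagram, landmarks, weights):
    """Weighted vector of single-landmark coordinates over a sparse grid."""
    return np.array([w * coordinate(diagram, nu)
                     for nu, w in zip(landmarks, weights)])

diagram = [(0.1, 0.9), (0.3, 0.5)]
landmarks = [np.array([0.0, 1.0]), np.array([0.4, 0.6])]
weights = [1.0, 1.0]
v = embed(diagram, landmarks, weights)
```

In the paper the weights are not free: they come from the closed-form maximization of λ(ν) under non-interference, which this sketch does not attempt to reproduce.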
If this is right
- The excess-risk rate improves with larger class-mean separation Δ and smaller embedding radius R.
- Mahalanobis margin under Ledoit-Wolf shrinkage selects descriptors more consistently than isotropic surrogates on heterogeneous descriptor pools.
- The per-prediction certificate can be decided once at training time and applied to new points with no additional computation.
- The same landmark embedding yields both the risk bound and the certificate, linking geometric separation directly to certified accuracy.
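As a concrete reading of the first bullet, the rate kR/(Δ√m_min) can be evaluated directly. The leading constant C is unstated in the abstract and is an assumption here; the point of the sketch is only the direction of each dependence.

```python
# Evaluate the claimed excess-risk rate O(k R / (Delta * sqrt(m_min))).
# ASSUMPTION: the leading constant C is not given in the abstract.
import math

def margin_excess_risk_bound(k, R, delta, m_min, C=1.0):
    """Rate from the abstract: grows with class count k and embedding
    radius R, shrinks with class-mean separation Delta and the smallest
    per-class sample count m_min."""
    return C * k * R / (delta * math.sqrt(m_min))

b1 = margin_excess_risk_bound(k=2, R=1.0, delta=0.5, m_min=100)
b2 = margin_excess_risk_bound(k=2, R=1.0, delta=1.0, m_min=400)  # wider margin, more data
```

Doubling Δ and quadrupling m_min together shrinks the bound fourfold, which is the "larger separation, smaller radius" reading of the first bullet.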
Where Pith is reading between the lines
- The method could be extended to other topological descriptors whose coordinate functions obey a comparable non-interference property.
- If the distortion constant λ(ν) can be bounded analytically for new landmark choices, the same guarantees would transfer without retraining.
- The gap between the derived certificate and observed accuracy on small data sets suggests that tighter multivariate-norm bounds could make the certificate operational sooner.
Load-bearing premise
The summed coordinate functions must satisfy a non-interference condition so that the distortion constant λ(ν) can be maximized in closed form from the training labels alone.
What would settle it
A concrete data set in which the empirical excess risk exceeds the derived O(kR/(Δ√m_min)) bound by more than a small constant factor, or in which the non-interference condition is visibly violated on the chosen landmark grid.
Original abstract
We introduce PLACE (Persistence-Landmark Analytic Classification Engine), a closed-form pipeline for classifying point clouds and graphs through their persistent-homology signatures. Three quantitative guarantees -- a margin-based excess-risk rate, a closed-form descriptor-selection rule, and a per-prediction certificate -- are derived from training labels alone, with no learned weights or held-out calibration. The embedding sums Mitra-Virk single-point coordinate functions over a sparse landmark grid; closed-form weights maximize a structural distortion constant $\lambda(\nu)$ (a Lipschitz lower bound on $\mathcal{D}_n$ under non-interference). (i) An $O(kR/(\Delta\sqrt{m_{\min}}))$ margin bound, driven by class-mean separation $\Delta$ and embedding radius $R$, matched by a sample-starved minimax lower bound. (ii) The Mahalanobis margin under Ledoit-Wolf-shrunk covariance is the strongest closed-form descriptor selector on a heterogeneous 64-descriptor chemical-graph pool (mean Spearman $\rho \approx +0.54$ across 10 benchmarks, positive on 9 of 10); the isotropic surrogate $\Delta/\sqrt\ell$ admits a closed-form selection-consistency rate on homogeneous (14-15 descriptor) protein/social pools. (iii) A training-time-decided certificate with no per-prediction overhead, in non-asymptotic Pinelis and asymptotic Gaussian plug-in forms. Empirically, PLACE is the strongest diagram-based method on Orbit5k and matches the strongest topology-based baseline within statistical noise on MUTAG and COX2. The remaining gaps fall into two diagnosable regimes: descriptor blindness on NCI1/NCI109, and pool-coverage limits elsewhere. Both radii exceed the firing threshold $\hat\Delta/2$ on every benchmark at our training-set sizes, dominated by the $\sqrt\ell$ scaling of the multivariate-norm bound; the per-prediction certificate is constructive but not yet operational at these sizes.
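The descriptor-selection rule named in the abstract (claim ii) combines a Mahalanobis margin with Ledoit-Wolf covariance shrinkage. A minimal sketch, assuming two classes and a pooled within-class covariance; the paper's exact pooling and centering scheme is an assumption, and `LedoitWolf` here is scikit-learn's estimator:

```python
# Hedged sketch of the Mahalanobis-margin descriptor score: class-mean
# separation measured in the metric of a Ledoit-Wolf-shrunk within-class
# covariance. The pooling scheme below is an assumption.
import numpy as np
from sklearn.covariance import LedoitWolf

def mahalanobis_margin(X, y):
    X, y = np.asarray(X, float), np.asarray(y)
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    # shrunk covariance of class-centered features (pooled within-class)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    S = LedoitWolf().fit(Xc).covariance_
    d = mu1 - mu0
    return float(np.sqrt(d @ np.linalg.solve(S, d)))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),   # class 0
               rng.normal(1.5, 1.0, (50, 5))])  # class 1, shifted means
y = np.array([0] * 50 + [1] * 50)
score = mahalanobis_margin(X, y)
```

The isotropic surrogate Δ/√ℓ from the abstract replaces S by a scaled identity; the anisotropic version above is what the heterogeneous 64-descriptor comparison is about.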
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces PLACE, a closed-form pipeline for classifying point clouds and graphs via their persistent-homology signatures. It derives three quantitative guarantees from training labels alone: an O(kR/(Δ√m_min)) margin-based excess-risk rate, a closed-form Mahalanobis descriptor-selection rule using Ledoit-Wolf shrinkage, and per-prediction certificates in Pinelis and Gaussian forms. The embedding is constructed by summing Mitra-Virk coordinate functions over a landmark grid, with weights obtained by maximizing the structural distortion constant λ(ν) under a non-interference condition.
Significance. If the central derivations hold and the non-interference condition is satisfied, this would represent a meaningful contribution to certified topological machine learning by delivering explicit, training-label-derived bounds without learned weights or calibration sets. The reported competitiveness with diagram-based and topology-based baselines on Orbit5k, MUTAG, and COX2, together with the closed-form descriptor selector, could be useful in domains requiring interpretable guarantees on graph and point-cloud data.
Major comments (3)
- [Abstract] The non-interference condition required for the lower bound on λ(ν) (stated in the abstract as enabling the Lipschitz bound on D_n) is posited but neither proven nor empirically verified on the persistence diagrams from the chemical graphs or point clouds; if it fails (e.g., due to shared simplices across landmarks), the margin excess-risk rate, descriptor selector, and certificates do not follow. This assumption is load-bearing for all three quantitative guarantees.
- [Abstract] Descriptor selection employs Ledoit-Wolf shrunk covariance and the Mahalanobis margin fitted directly to the training labels that also define class means Δ and the claimed guarantees; the abstract provides no independent external benchmark or correction for potential circularity in the selection-consistency rate O(·) on the homogeneous pools.
- [Abstract] The empirical statements that PLACE is the strongest diagram-based method on Orbit5k and matches the strongest topology-based baseline within noise on MUTAG and COX2 are given without data tables, per-dataset accuracies, variance estimates, or statistical tests, preventing direct assessment of whether the quantitative guarantees are realized at the reported training-set sizes.
Minor comments (2)
- [Abstract] The abstract introduces notation (k, R, Δ, m_min, ℓ, ν) without definitions or cross-references, which reduces immediate readability.
- [Abstract] The mean Spearman ρ ≈ +0.54 is reported across 10 benchmarks without listing the benchmarks or the individual ρ values, hindering reproducibility of the descriptor-selection claim.
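The quantity whose per-benchmark values the comment asks for is a rank correlation between a closed-form selection score and realized accuracy. A sketch of that computation with made-up illustration data (the scores and accuracies below are not from the paper):

```python
# Per-benchmark check behind the mean Spearman rho: rank correlation
# between a descriptor-selection score and held-out accuracy.
# The numbers below are made-up illustration data, not the paper's.
from scipy.stats import spearmanr

scores = [0.9, 0.4, 0.7, 0.2, 0.6]           # closed-form margin per descriptor
accuracies = [0.88, 0.71, 0.80, 0.65, 0.77]  # accuracy of the classifier built on it
rho, pval = spearmanr(scores, accuracies)
```

Reporting this ρ per benchmark, as the comment requests, would make the "+0.54 mean, positive on 9 of 10" claim reproducible.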
Simulated Author's Rebuttal
We thank the referee for the insightful comments on our manuscript. We address each major point below with clarifications and indicate where revisions will be made to strengthen the presentation of the non-interference condition, descriptor selection, and empirical results.
Point-by-point responses
Referee: [Abstract] The non-interference condition required for the lower bound on λ(ν) (stated in the abstract as enabling the Lipschitz bound on D_n) is posited but neither proven nor empirically verified on the persistence diagrams from the chemical graphs or point clouds; if it fails (e.g., due to shared simplices across landmarks), the margin excess-risk rate, descriptor selector, and certificates do not follow. This assumption is load-bearing for all three quantitative guarantees.
Authors: We acknowledge that the non-interference condition is central to deriving the Lipschitz bound on D_n and thus the three guarantees. The full manuscript defines the condition (no shared simplices between landmark neighborhoods) and selects landmarks to maximize λ(ν) under it, but we agree the abstract and main text would benefit from explicit verification. In the revision we will add: (i) a short proof sketch showing the condition holds when landmarks are separated by more than twice the persistence radius, and (ii) an empirical check on all benchmark persistence diagrams confirming that the chosen sparse grids satisfy non-interference (reporting the fraction of violating pairs, which is zero in our experiments). This directly addresses the load-bearing concern without altering the core derivations. revision: yes
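The empirical check the authors promise, reporting the fraction of landmark pairs that violate the separation criterion, is simple to state. A hypothetical sketch based on the rebuttal's proof-sketch condition (landmarks separated by more than twice the persistence radius); the exact criterion and metric are assumptions:

```python
# Hypothetical non-interference audit from the rebuttal: count landmark
# pairs whose l_inf separation is at most twice the persistence radius r.
# ASSUMPTION: the "> 2r" criterion and the l_inf metric follow the
# rebuttal's sketch, not a stated definition in the paper.
import itertools
import numpy as np

def violating_fraction(landmarks, r):
    L = np.asarray(landmarks, float)
    pairs = list(itertools.combinations(range(len(L)), 2))
    bad = sum(1 for i, j in pairs
              if np.abs(L[i] - L[j]).max() <= 2 * r)
    return bad / len(pairs) if pairs else 0.0

grid = [(0.0, 1.0), (0.5, 1.5), (1.0, 2.0)]   # pairwise separation 0.5
frac = violating_fraction(grid, r=0.2)        # 0.5 > 2*0.2, so no violations
```

The rebuttal asserts this fraction is zero on all benchmark grids; the audit above is the kind of one-line report that would substantiate it.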
Referee: [Abstract] Descriptor selection employs Ledoit-Wolf shrunk covariance and the Mahalanobis margin fitted directly to the training labels that also define class means Δ and the claimed guarantees; the abstract provides no independent external benchmark or correction for potential circularity in the selection-consistency rate O(·) on the homogeneous pools.
Authors: The pipeline is intentionally closed-form and uses only training labels, so the Mahalanobis margin and Ledoit-Wolf shrinkage are computed from the same data that define Δ. This is not hidden circularity but a deliberate feature enabling training-time certificates. The O(·) consistency rate is derived specifically for the isotropic surrogate on homogeneous pools and already incorporates the dependence on the empirical means; it is not claimed to be independent of the labels. For the heterogeneous 64-descriptor pool we report the empirical Spearman correlation as an external sanity check across ten benchmarks. In revision we will add a clarifying sentence in the abstract and a dedicated paragraph in Section 4.2 stating that the rate accounts for label dependence and does not require held-out data. revision: partial
Referee: [Abstract] The empirical statements that PLACE is the strongest diagram-based method on Orbit5k and matches the strongest topology-based baseline within noise on MUTAG and COX2 are given without data tables, per-dataset accuracies, variance estimates, or statistical tests, preventing direct assessment of whether the quantitative guarantees are realized at the reported training-set sizes.
Authors: We agree that the empirical claims require fuller documentation to allow readers to verify competitiveness and the practical relevance of the guarantees. In the revised manuscript we will insert a new table (or expanded version of the current results table) reporting: per-dataset mean accuracies with standard deviations over 10 random seeds, the exact training-set sizes used, and p-values from paired statistical tests (Wilcoxon signed-rank) against the strongest baselines. We will also add a short paragraph linking these numbers to the training-size regime where the margin bounds become non-vacuous. This change directly enables assessment of whether the reported guarantees are realized. revision: yes
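The evaluation protocol promised here (per-seed accuracies, mean with standard deviation, and a paired Wilcoxon signed-rank test against the strongest baseline) can be sketched directly; the accuracy values below are made-up illustration data, not results from the paper:

```python
# Sketch of the promised reporting: mean +/- std over seeds and a paired
# Wilcoxon signed-rank test against a baseline. Accuracies are made up.
import numpy as np
from scipy.stats import wilcoxon

place = np.array([0.90, 0.91, 0.89, 0.92, 0.90, 0.91, 0.88, 0.93, 0.90, 0.91])
base  = np.array([0.88, 0.89, 0.88, 0.90, 0.89, 0.88, 0.87, 0.91, 0.89, 0.90])

stat, p = wilcoxon(place, base)   # paired test on per-seed differences
summary = f"{place.mean():.3f} +/- {place.std(ddof=1):.3f}, p={p:.4f}"
```

Pairing by seed matters: the test acts on per-seed differences, so shared seed-level variance cancels and small but consistent gaps become detectable.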
Circularity Check
No significant circularity in derivation chain
Full rationale
The paper constructs the PLACE embedding by summing Mitra-Virk coordinate functions over a landmark grid and selects weights via closed-form maximization of the structural distortion constant λ(ν) under an explicitly stated non-interference assumption. The three quantitative guarantees—an O(kR/(Δ√m_min)) margin excess-risk bound, the Mahalanobis/Ledoit-Wolf descriptor selector, and the Pinelis/Gaussian per-prediction certificates—are then derived from this construction using standard margin analysis and concentration inequalities applied to quantities computed from the training labels. The non-interference condition is posited as an assumption rather than derived, but this does not reduce any claimed result to its inputs by construction. Descriptor selection is validated empirically on benchmarks rather than asserted as a forced prediction. No self-citation is load-bearing for the central claims, no fitted parameter is renamed as an independent prediction, and the overall pipeline remains self-contained against external benchmarks once the modeling assumptions are granted.
Axiom & Free-Parameter Ledger
Free parameters (2)
- landmark grid size and placement
- Ledoit-Wolf shrinkage intensity
Axioms (2)
- Domain assumption: persistent-homology signatures are stable under small perturbations of the input point cloud or graph.
- Ad hoc to this paper: the non-interference condition holds for the summed Mitra-Virk coordinate functions.