Recognition: 1 theorem link
TopoGeoScore: A Self-Supervised Source-Only Geometric Framework for OOD Checkpoint Selection
Pith reviewed 2026-05-12 01:03 UTC · model grok-4.3
The pith
Source embeddings encode global, local, and topological signals that identify which checkpoints will remain accurate under distribution shift.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Given a trained checkpoint, class-conditional mutual k-NN graphs constructed from its source embeddings yield three complementary signals: a torsion-inspired reduced Laplacian log-determinant that quantifies global class-manifold complexity, Ollivier-Ricci curvature that quantifies local neighborhood regularity, and persistent-homology summaries that capture fragmented connectivity, loops, and global-local inconsistency. These signals are assembled into an interpretable non-negative linear score whose coefficients are learned by a self-supervised objective enforcing invariance to approximately geometry-preserving embedding views and separation from structure-breaking views. The resulting TopoGeoScore remains interpretable and uses no target-domain samples or labels.
What carries the argument
TopoGeoScore, a learned non-negative linear combination of global manifold complexity, local curvature, and higher-order topological invariants extracted from class-conditional k-NN graphs on source embeddings.
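The first two stages of this pipeline can be sketched concretely. This is a minimal sketch, assuming Euclidean embeddings, a brute-force distance matrix, and a matrix-tree reading of the torsion-inspired quantity; the paper's exact construction may differ, and the toy data below is fabricated for illustration.

```python
import numpy as np

def mutual_knn_adjacency(emb, k):
    """Adjacency keeping edge (i, j) only when i and j are mutually
    among each other's k nearest neighbours (Euclidean)."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-neighbours
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest per node
    knn = np.zeros(d.shape, dtype=bool)
    knn[np.repeat(np.arange(len(emb)), k), nn.ravel()] = True
    return (knn & knn.T).astype(float)          # symmetric by construction

def reduced_laplacian_logdet(adj):
    """Log-determinant of the Laplacian with one row/column removed.
    By the matrix-tree theorem this counts spanning trees, so it grows
    with the connectivity (complexity) of the class graph."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, logdet = np.linalg.slogdet(lap[1:, 1:])
    return logdet                               # -inf if disconnected

rng = np.random.default_rng(0)
emb = rng.normal(size=(30, 8))                  # toy single-class embeddings
adj = mutual_knn_adjacency(emb, k=5)
complexity = reduced_laplacian_logdet(adj)
```

Roughly, adding mutual-neighbour edges cannot decrease the spanning-tree count, so denser, more entangled class graphs receive a larger log-determinant; this is the sense in which the quantity tracks manifold complexity.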
If this is right
- Checkpoints can be ranked and selected for deployment using only source-domain representations and no target samples or labels.
- Checkpoints selected by the score achieve higher accuracy under distribution shift on CIFAR corruption suites, ImageNet-C, MNLI-to-HANS transfer, and OGBN-Arxiv.
- Global manifold complexity, local curvature, and topological inconsistency together supply measurable evidence of robustness inside source embeddings.
- The scoring procedure remains fully interpretable because each component of the linear combination corresponds to a distinct geometric or topological property.
Where Pith is reading between the lines
- If source geometry reliably signals robustness, then monitoring these same invariants during training could serve as an early-stopping criterion for robustness.
- The same graph-construction and feature-extraction pipeline might be applied to other representation spaces such as language-model hidden states or graph-neural-network embeddings.
- Explicit regularization of the three topological quantities inside the training loss could directly encourage robustness rather than merely detecting it after training.
- The approach suggests that robustness under shift is partly a property of the embedding manifold's intrinsic geometry rather than solely of the decision boundary.
Load-bearing premise
The self-supervised objective that rewards invariance under geometry-preserving embedding views actually selects for genuine OOD robustness rather than some other incidental property of the source embeddings.
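One way such an objective could be optimized is projected subgradient descent on a squared invariance penalty plus a hinge separation term. The sketch below uses hypothetical synthetic feature vectors (the paper's feature extraction and loss details are not reproduced here); only the non-negativity constraint and the invariance-versus-separation structure are taken from the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 3-d feature rows (log-det, curvature, topology) per view;
# real features would come from the graph pipeline, not random draws.
f_orig = rng.normal(size=(50, 3))
f_geo = f_orig + 0.05 * rng.normal(size=(50, 3))   # geometry-preserving views
f_brk = f_orig + rng.normal(size=(50, 3))          # structure-breaking views
dg, db = f_geo - f_orig, f_brk - f_orig

w = np.ones(3)                  # non-negative linear coefficients
lr, margin = 0.1, 1.0
for _ in range(300):
    inv = dg @ w                # invariance residuals, driven towards 0
    sep = db @ w                # separation scores, pushed past the margin
    grad = 2.0 * (inv[:, None] * dg).mean(axis=0)
    hinge = (np.abs(sep) < margin).astype(float)    # active hinge terms
    grad -= ((hinge * np.sign(sep))[:, None] * db).mean(axis=0)
    w = np.maximum(w - lr * grad, 0.0)              # project onto w >= 0
```

The projection step is what keeps the learned score a non-negative combination, preserving the component-wise interpretability the paper emphasizes.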
What would settle it
A controlled experiment in which TopoGeoScore ranks a set of checkpoints from the same training run yet the highest-scoring checkpoints achieve lower accuracy on multiple held-out corruption and shift benchmarks than lower-scoring ones.
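That falsification test reduces to a simple check on per-checkpoint data. A sketch, with scores and OOD accuracies fabricated for illustration: compute a rank correlation and a top-versus-bottom accuracy gap, where a consistently non-positive gap (and near-zero correlation) across benchmarks would refute the score.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(2)
scores = rng.uniform(size=20)                   # per-checkpoint TopoGeoScore
ood_acc = 0.6 + 0.3 * scores + 0.02 * rng.normal(size=20)  # synthetic accuracy

rho = spearman(scores, ood_acc)
order = np.argsort(scores)
gap = ood_acc[order[-5:]].mean() - ood_acc[order[:5]].mean()
# The claim fails if gap <= 0 (and rho near 0) across several benchmarks.
```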
Original abstract
Out-of-distribution (OOD) robustness is difficult to diagnose when target-domain labels are unavailable. We consider a more restrictive source-only variant of unsupervised accuracy estimation: selecting robust checkpoints using only source-domain representations, with no target samples or target labels. We propose \textbf{TopoGeoScore}, a source-only geometric scorer for label-free OOD checkpoint selection. Given a trained checkpoint, we construct class-conditional mutual $k$-nearest-neighbour graphs from source embeddings and extract three interpretable signals: a torsion-inspired reduced Laplacian log-determinant for global class-manifold complexity, Ollivier--Ricci curvature for local neighbourhood regularity, and higher-order topological summaries for fragmented connectivity, loops, and global--local inconsistency. Instead of fixing their weights by hand, TopoGeoScore learns a non-negative linear score through a self-supervised objective that enforces invariance under approximately geometry-preserving embedding views and separation from structure-breaking views. The score remains interpretable and uses no target-domain samples or labels. Results across CIFAR-based corruption and distribution-shift benchmarks, ImageNet-C, MNLI$\to$HANS transfer, and OGBN-Arxiv suggest that source representations contain measurable global--local--topological evidence of robustness, supporting practical checkpoint selection before deployment under distribution shift.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes TopoGeoScore, a source-only geometric framework for selecting OOD-robust model checkpoints without target samples or labels. It constructs class-conditional mutual kNN graphs from source embeddings, extracts three signals (torsion-inspired reduced Laplacian log-determinant for global manifold complexity, Ollivier-Ricci curvature for local regularity, and higher-order topological summaries for connectivity and loops), and learns non-negative linear weights via a self-supervised objective that enforces invariance under approximately geometry-preserving embedding views while separating from structure-breaking views. Experiments are claimed on CIFAR corruption/shift benchmarks, ImageNet-C, MNLI to HANS, and OGBN-Arxiv.
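For intuition on the curvature signal the summary names, here is a minimal Ollivier-Ricci sketch. It assumes uniform neighbour measures with no lazy mass at the node itself (common variants differ), and exploits the fact that for two uniform measures of equal support size the Wasserstein-1 distance is an optimal assignment, brute-forced below for tiny graphs.

```python
import numpy as np
from itertools import permutations

def shortest_paths(adj):
    """All-pairs hop distances via Floyd-Warshall."""
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(len(adj)):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def ollivier_ricci(adj, x, y):
    """kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), with m_x uniform over
    the neighbours of x. Equal-degree case only: W1 between two uniform
    measures of the same support size is an optimal assignment."""
    d = shortest_paths(adj)
    nx, ny = np.flatnonzero(adj[x]), np.flatnonzero(adj[y])
    assert len(nx) == len(ny), "sketch assumes equal degrees"
    w1 = min(sum(d[a, b] for a, b in zip(nx, p))
             for p in permutations(ny)) / len(nx)
    return 1.0 - w1 / d[x, y]

cycle4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]], dtype=float)
kappa_cycle = ollivier_ricci(cycle4, 0, 1)   # flat: 0.0
complete4 = np.ones((4, 4)) - np.eye(4)
kappa_k4 = ollivier_ricci(complete4, 0, 1)   # positively curved: 2/3
```

The cycle edge is flat while the clique edge is positively curved, matching the reading of curvature as local neighbourhood regularity: tightly interlinked neighbourhoods score high, chain-like ones do not.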
Significance. If the central claim holds, the work offers a practical, interpretable tool for pre-deployment checkpoint selection under distribution shift using only source data. Strengths include the combination of global-local-topological features and the self-supervised weight learning that avoids hand-tuning or target supervision. This could complement existing OOD methods if the geometric invariants prove predictive of robustness rather than incidental source stability.
major comments (2)
- [Abstract and §3] Abstract and method description: The self-supervised objective enforces invariance only under source-internal, approximately geometry-preserving embedding views. Nothing in the construction ensures these invariants align with the specific manifold distortions induced by the target shifts (CIFAR corruptions, ImageNet-C, MNLI→HANS, OGBN-Arxiv). This is load-bearing for the claim that the score selects for actual OOD robustness; an explicit correlation analysis or ablation linking the learned score to measured OOD accuracy (rather than just selection success) is required.
- [§4] §4 (Experiments): The abstract states that results 'suggest that source representations contain measurable global--local--topological evidence of robustness' across benchmarks, but supplies no quantitative metrics, baselines, error bars, or ablation details on the contribution of each geometric signal. Without these, it is impossible to verify whether the topological summaries are load-bearing or whether the method outperforms simpler alternatives.
minor comments (2)
- [§3.2] Clarify the precise construction of 'approximately geometry-preserving' vs. 'structure-breaking' views in the self-supervised loss (including any hyperparameters such as k in the mutual kNN graph).
- [Figures in §4] Ensure all figures showing graph-based features include axis labels, legends, and statistical significance markers for the reported trends.
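On the first minor comment, one plausible instantiation of the two view families (hypothetical; the paper's §3.2 construction is not reproduced here): a random rotation plus small isotropic noise approximately preserves k-NN geometry, while shuffling each coordinate independently across samples breaks it. A neighbourhood-overlap statistic makes the contrast measurable.

```python
import numpy as np

rng = np.random.default_rng(3)
emb = rng.normal(size=(40, 8))              # toy source embeddings

# Geometry-preserving view: random rotation plus small isotropic noise
# (pairwise distances change only at the noise scale).
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
view_geo = emb @ q + 0.01 * rng.normal(size=emb.shape)

# Structure-breaking view: shuffle each coordinate independently across
# samples, keeping marginals but destroying joint neighbourhood structure.
view_brk = np.stack([rng.permutation(emb[:, j]) for j in range(8)], axis=1)

def knn_overlap(a, b, k=5):
    """Mean fraction of shared k-nearest-neighbour sets between two views."""
    def knn(e):
        d = np.linalg.norm(e[:, None] - e[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]
    na, nb = knn(a), knn(b)
    return float(np.mean([len(set(na[i]) & set(nb[i])) / k
                          for i in range(len(a))]))
```

Under this instantiation the geometry-preserving view keeps nearly all neighbour sets intact, while the structure-breaking view drives the overlap towards the chance level of roughly k/(n-1).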
Simulated Author's Rebuttal
We thank the referee for their thoughtful and constructive comments on our manuscript. We address each major comment below and will incorporate revisions to strengthen the presentation and empirical support for our claims.
read point-by-point responses
Referee: [Abstract and §3] Abstract and method description: The self-supervised objective enforces invariance only under source-internal, approximately geometry-preserving embedding views. Nothing in the construction ensures these invariants align with the specific manifold distortions induced by the target shifts (CIFAR corruptions, ImageNet-C, MNLI→HANS, OGBN-Arxiv). This is load-bearing for the claim that the score selects for actual OOD robustness; an explicit correlation analysis or ablation linking the learned score to measured OOD accuracy (rather than just selection success) is required.
Authors: We agree that an explicit demonstration of alignment between the learned geometric invariants and OOD robustness is important for supporting the central claim. The self-supervised objective is constructed to identify weights that preserve geometric properties under views that approximate plausible shifts, but we acknowledge that this does not automatically guarantee correspondence to the specific distortions in the target benchmarks. In the revised version, we will add to §4 an explicit correlation analysis (e.g., Pearson or Spearman coefficients and scatter plots) between TopoGeoScore values and measured OOD accuracy across checkpoints on each benchmark, together with component-wise ablations that quantify how each geometric signal contributes to the observed selection performance. These additions will directly address whether the score captures robustness-relevant structure rather than source-only stability.
Revision: yes
Referee: [§4] §4 (Experiments): The abstract states that results 'suggest that source representations contain measurable global--local--topological evidence of robustness' across benchmarks, but supplies no quantitative metrics, baselines, error bars, or ablation details on the contribution of each geometric signal. Without these, it is impossible to verify whether the topological summaries are load-bearing or whether the method outperforms simpler alternatives.
Authors: We accept this criticism and agree that the experimental section would benefit from greater quantitative detail and transparency. While the manuscript reports selection performance on the listed benchmarks, we will revise §4 to include full tables of quantitative metrics (selection accuracy, mean OOD accuracy of selected checkpoints), comparisons against explicit baselines (e.g., embedding-norm scoring, single-signal geometric scores, and random selection), error bars obtained from multiple independent runs or seeds, and systematic ablation tables that isolate the contribution of the torsion-inspired Laplacian log-determinant, Ollivier-Ricci curvature, and higher-order topological summaries. These revisions will allow readers to assess whether the topological components are load-bearing and whether TopoGeoScore improves upon simpler alternatives.
Revision: yes
Circularity Check
No significant circularity; self-supervised weights learned on source data with empirical OOD validation
full rationale
The paper defines TopoGeoScore as a non-negative linear combination of three geometric measures (Laplacian log-det, Ollivier-Ricci curvature, topological summaries) extracted from source embeddings. Weights are obtained via a self-supervised objective that penalizes deviation under source-internal geometry-preserving views. This construction uses only source data and contains no target robustness labels or OOD samples by design. The central claim—that the resulting score selects robust checkpoints—is presented as an empirical hypothesis tested on external benchmarks (CIFAR corruptions, ImageNet-C, MNLI→HANS, OGBN-Arxiv). No step reduces the claimed correlation to a definitional equivalence, fitted input renamed as prediction, or load-bearing self-citation chain. The method is self-contained against external benchmarks and does not invoke uniqueness theorems or ansatzes from prior author work.
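For the "fragmented connectivity" ingredient specifically, 0-dimensional persistence admits a compact characterization that a sketch can exploit (a simplification: the paper's higher-order summaries also cover loops, which this does not compute). In a Vietoris-Rips filtration every H0 class is born at scale 0 and dies when its component merges, so the death times are exactly the minimum-spanning-tree edge lengths.

```python
import numpy as np

def h0_persistence(emb):
    """Death times of 0-dimensional homology classes in a Vietoris-Rips
    filtration: every component is born at scale 0 and dies when it
    merges, so the deaths are the minimum-spanning-tree edge lengths
    (Kruskal's algorithm with union-find)."""
    n = len(emb)
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)                # one H0 class dies at scale w
    return np.array(deaths)                 # n - 1 finite death times

rng = np.random.default_rng(4)
cloud = np.vstack([rng.normal(0.0, 0.1, size=(5, 2)),
                   rng.normal(5.0, 0.1, size=(5, 2))])   # two tight clusters
deaths = np.sort(h0_persistence(cloud))
```

On the two-cluster toy cloud, one death time dwarfs the rest: it marks the scale at which the clusters merge, which is how long-lived H0 classes quantify fragmented connectivity.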
Axiom & Free-Parameter Ledger
free parameters (2)
- k in mutual k-nearest-neighbour graph
- non-negative linear weights
axioms (2)
- Domain assumption: source-domain class-conditional embeddings contain global-local-topological signals that are predictive of robustness under distribution shift.
- Domain assumption: approximately geometry-preserving embedding views can be generated without target data.
Lean theorems connected to this paper
- Files: IndisputableMonolith/Foundation/AlexanderDuality.lean, IndisputableMonolith/Cost/FunctionalEquation.lean
  Theorems: alexander_duality_circle_linking, washburn_uniqueness_aczel, reality_from_one_distinction
  Tag: unclear. Relation between the paper passage and the cited Recognition theorem.
  Paper passage: "construct class-conditional mutual k-nearest-neighbour graphs ... torsion-inspired reduced Laplacian log-determinant ... Ollivier–Ricci curvature ... higher-order topological summaries ... self-supervised objective that enforces invariance under approximately geometry-preserving embedding views"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.