A Spectral Framework for Multi-Scale Nonlinear Dimensionality Reduction
Pith reviewed 2026-05-13 21:14 UTC · model grok-4.3
The pith
A spectral framework pairs linear decomposition with cross-entropy optimization to create multi-scale nonlinear dimensionality reduction embeddings that preserve both global and local structure.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper claims that high-dimensional data can be embedded via a spectral basis derived from linear decomposition, then adjusted through cross-entropy optimization, to produce representations that simultaneously maintain global manifold geometry and local neighborhood separation. The same linear basis supports post-hoc examination of the embedding by decomposing it into contributions from different graph-frequency modes. Glyph-based augmentations on the resulting scatterplots further allow visual tracing of those mode influences.
What carries the argument
The spectral basis, obtained from linear spectral decomposition of the input graph, supplies an explicit multi-scale foundation that cross-entropy optimization then refines while preserving analytical access to frequency-mode contributions.
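The two-stage recipe (spectral basis first, cross-entropy refinement second) can be sketched end to end. This is an illustrative reconstruction, not the paper's implementation: the binary kNN graph, the unnormalized Laplacian, and the heavy-tailed attraction-repulsion update are all assumptions standing in for the paper's Eq. (4) basis and §4.1 loss.

```python
import numpy as np

def knn_graph(X, k=6):
    """Symmetric k-nearest-neighbor adjacency with binary weights."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]          # skip self (distance 0)
    W = np.zeros_like(D)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)                        # symmetrize

def spectral_init(W, d=2):
    """Laplacian-eigenmap initialization: smallest nontrivial eigenvectors."""
    L = np.diag(W.sum(axis=1)) - W                   # unnormalized Laplacian
    _, vecs = np.linalg.eigh(L)                      # eigenvalues ascending
    return vecs[:, 1:d + 1]                          # drop the constant mode

def refine(Y, W, lr=0.1, iters=200):
    """Toy cross-entropy-style refinement: attract graph neighbors,
    repel non-neighbors (a stand-in for the paper's actual loss)."""
    n = len(Y)
    for _ in range(iters):
        diff = Y[:, None, :] - Y[None, :, :]
        dist2 = (diff ** 2).sum(-1) + 1e-9
        q = 1.0 / (1.0 + dist2)                      # heavy-tailed affinity
        grad = ((W - (1 - W) * q) * q)[:, :, None] * diff
        Y = Y - lr * grad.sum(axis=1) / n
    return Y

X = np.random.default_rng(0).normal(size=(40, 5))
W = knn_graph(X, k=6)
Y = refine(spectral_init(W, d=2), W)
```

Because the refinement starts from the spectral coordinates, global shape is anchored before local forces act, which is the mechanism the review credits for manifold continuity.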
If this is right
- The embedding process becomes inspectable by isolating the contribution of each spectral mode to the final coordinates.
- Scatterplot visualizations can be augmented with glyphs that directly encode those mode contributions for qualitative checks.
- Manifold continuity improves because the spectral starting point anchors global shape before local refinement occurs.
- The same pipeline supports quantitative comparisons that track how changes in spectral truncation affect both scale levels.
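The inspectability claim in the first bullet reduces to a graph-Fourier decomposition of the embedding: with orthonormal Laplacian eigenvectors U, any coordinate matrix Y splits exactly into per-mode contributions. A minimal numpy sketch (the ring graph and random embedding are placeholders, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative graph: a ring of n nodes (any symmetric adjacency works).
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
eigvals, U = np.linalg.eigh(L)          # orthonormal frequency basis

Y = rng.normal(size=(n, 2))             # stand-in for an optimized embedding
coeffs = U.T @ Y                        # graph-Fourier coefficients of Y
energy = (coeffs ** 2).sum(axis=1)      # contribution of each mode
```

Orthonormality makes the decomposition lossless, so per-mode energies can be read off after optimization and mapped onto glyphs in the scatterplot.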
Where Pith is reading between the lines
- The framework might be tested on time-varying or streaming data by updating the spectral basis incrementally rather than recomputing from scratch.
- One could replace cross-entropy with other contrastive losses and measure whether the graph-frequency analysis remains equally informative.
- Integration with existing visualization software could let users interactively suppress or amplify specific frequency bands to explore embedding sensitivity.
Load-bearing premise
That a linear spectral decomposition can be merged with cross-entropy optimization to improve both global and local fidelity without creating new uncontrolled distortions in the embedding.
What would settle it
Apply the framework to a standard test manifold such as the Swiss roll or a hierarchical cluster dataset, then measure global structure error against Laplacian Eigenmaps and local neighborhood error against t-SNE; if the new embeddings score worse on both metrics than the specialized baselines, the central claim is falsified.
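This protocol is straightforward to operationalize. Below is a sketch of the evaluation harness, assuming a synthetic Swiss roll whose intrinsic (t, h) coordinates act as a reference embedding, with pairwise-distance correlation standing in for global error and k-NN overlap for local error; the metric choices are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def swiss_roll(n=300):
    t = 1.5 * np.pi * (1 + 2 * rng.random(n))        # roll parameter
    h = 20 * rng.random(n)                           # height
    X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
    return X, np.column_stack([t, h])                # ambient, intrinsic

def pdist(X):
    return np.linalg.norm(X[:, None] - X[None, :], axis=-1)

def global_error(X, Y):
    """1 - Pearson correlation of pairwise distances (lower is better)."""
    dx, dy = pdist(X).ravel(), pdist(Y).ravel()
    return 1 - np.corrcoef(dx, dy)[0, 1]

def local_error(X, Y, k=10):
    """Fraction of k-NN in X not preserved in Y (lower is better)."""
    nx = np.argsort(pdist(X), axis=1)[:, 1:k + 1]
    ny = np.argsort(pdist(Y), axis=1)[:, 1:k + 1]
    overlap = np.mean([len(set(a) & set(b)) for a, b in zip(nx, ny)]) / k
    return 1 - overlap

X, intrinsic = swiss_roll()
good, bad = intrinsic, rng.normal(size=intrinsic.shape)
g_good, g_bad = global_error(X, good), global_error(X, bad)
l_good, l_bad = local_error(X, good), local_error(X, bad)
```

Running the paper's method, Laplacian Eigenmaps, and t-SNE through `global_error` and `local_error` would give exactly the two-sided comparison this falsification test calls for.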
Original abstract
Dimensionality reduction (DR) is characterized by two longstanding trade-offs. First, there is a global-local preservation tension: methods such as t-SNE and UMAP prioritize local neighborhood preservation, yet may distort global manifold structure, while methods such as Laplacian Eigenmaps preserve global geometry but often yield limited local separation. Second, there is a gap between expressiveness and analytical transparency: many nonlinear DR methods produce embeddings without an explicit connection to the underlying high-dimensional structure, limiting insight into the embedding process. In this paper, we introduce a spectral framework for nonlinear DR that addresses these challenges. Our approach embeds high-dimensional data using a spectral basis combined with cross-entropy optimization, enabling multi-scale representations that bridge global and local structure. Leveraging linear spectral decomposition, the framework further supports analysis of embeddings through a graph-frequency perspective, enabling examination of how spectral modes influence the resulting embedding. We complement this analysis with glyph-based scatterplot augmentations for visual exploration. Quantitative evaluations and case studies demonstrate that our framework improves manifold continuity while enabling deeper analysis of embedding structure through spectral mode contributions.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces a spectral framework for nonlinear dimensionality reduction that combines linear spectral decomposition (via graph Laplacian eigendecomposition) with cross-entropy optimization to produce multi-scale embeddings bridging global and local manifold structure, while enabling graph-frequency analysis of mode contributions and glyph-based visual augmentations.
Significance. If the central claim holds, the framework would meaningfully address the global-local tension in DR methods and the expressiveness-transparency gap by retaining analytical access to spectral modes after nonlinear optimization, offering a principled alternative to purely heuristic nonlinear techniques like t-SNE or UMAP.
Major comments (1)
- [§3.2, Eq. (4); §4.1] The spectral basis is obtained from eigendecomposition of the graph Laplacian, yet the cross-entropy loss minimized in §4.1 operates directly on pairwise distances without any regularization or projection term that would preserve stable contributions from the original eigenmodes. No derivation is provided showing that post-optimization coordinates remain interpretable in the graph-frequency basis, which directly threatens the multi-scale bridging and frequency-analysis claims.
Minor comments (1)
- [Abstract] The abstract states that quantitative evaluations demonstrate improved manifold continuity, but no specific metrics, baselines, or ablation results are previewed; adding a brief table reference would improve clarity.
Simulated Author's Rebuttal
We thank the referee for their constructive comments. The major comment identifies a need for explicit derivation regarding post-optimization spectral interpretability. We address this point directly below and outline the planned revision.
Point-by-point responses
Referee: [§3.2, Eq. (4); §4.1] The spectral basis is obtained from eigendecomposition of the graph Laplacian, yet the cross-entropy loss minimized in §4.1 operates directly on pairwise distances without any regularization or projection term that would preserve stable contributions from the original eigenmodes. No derivation is provided showing that post-optimization coordinates remain interpretable in the graph-frequency basis, which directly threatens the multi-scale bridging and frequency-analysis claims.
Authors: We appreciate the referee highlighting this aspect. The embedding is initialized via the spectral basis from the graph Laplacian eigendecomposition in Eq. (4) of §3.2, after which the cross-entropy loss in §4.1 refines the coordinates for improved local preservation. While the optimization lacks an explicit regularization term, the coordinates remain linear combinations of the original eigenmodes by construction. We acknowledge, however, that the manuscript does not supply a formal derivation of the stability of these frequency contributions after optimization. We will revise the paper by inserting a new subsection after §4.1 that derives the re-projection of optimized coordinates onto the graph-frequency basis and shows that mode contributions remain interpretable under standard manifold smoothness assumptions. The revision will also include a brief empirical check confirming that frequency signatures are preserved. This change directly addresses the concern and strengthens the multi-scale and frequency-analysis claims.
Revision: yes
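For reference, the promised re-projection is a one-line identity once the retained eigenvectors are orthonormal; sketched here in the Y = U_S · P_S notation used elsewhere on this page (the residual-term reading is ours, not the authors'):

```latex
U_S^\top U_S = I
\;\Longrightarrow\;
P_S = U_S^\top Y,
\qquad
Y = U_S P_S + \underbrace{\bigl(I - U_S U_S^\top\bigr) Y}_{\text{leakage outside } \operatorname{span}(U_S)} .
```

The leakage term is exactly what the referee's concern measures: it vanishes only if optimization keeps the coordinates in the retained spectral subspace, so the promised empirical check amounts to reporting its norm before and after refinement.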
Circularity Check
No circularity: the derivation combines a standard spectral basis with cross-entropy optimization, without self-referential reduction.
full rationale
The paper's core construction starts from eigendecomposition of a graph Laplacian (standard linear spectral step) and then applies cross-entropy optimization to produce the final embedding coordinates. No equation in the provided abstract or described sections shows the optimized coordinates being redefined back into the spectral basis or the optimization loss being fitted to recover the same eigenmodes by construction. The multi-scale bridging claim rests on the explicit combination of these two independent components rather than on any parameter being renamed as a prediction or on a self-citation chain that forbids alternatives. External benchmarks (quantitative evaluations and case studies) are invoked to support performance, keeping the derivation self-contained.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · match: unclear · embedding expressed as Y = U_S · P_S; optimize cross-entropy on coefficients of Laplacian spectral modes
- IndisputableMonolith/Foundation/AlexanderDuality.lean · alexander_duality_circle_linking · match: unclear · progressive coarse-to-fine via expanding spectral subspaces S_1 ⊂ S_2 ⊂ … ⊂ S_T
Reference graph
Works this paper leans on
- [1] R. Balestriero and Y. LeCun. Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods. Adv. Neural Inf. Process. Syst., 35:26671–26685, 2022. https://proceedings.neurips.cc/paper_files/paper/2022/file/aa56c74513a5e35768a11f4e82dd7ffb-Paper-Conference.pdf
- [2] E. Becht, L. McInnes, J. Healy, C.-A. Dutertre, I. W. Kwok, L. G. Ng et al. Dimensionality reduction for visualizing single-cell data using UMAP. Nat. Biotechnol., 37(1):38–44, 2019. doi: 10.1038/nbt.4314
- [3] M. Belkin and P. Niyogi. Laplacian Eigenmaps for dimensionality reduction and data representation. Neural Comput., 15(6):1373–1396, 2003. doi: 10.1162/089976603321780317
- [4] D. Benson-Putnins, M. Bonfardin, M. E. Magnoni, and D. Martin. Spectral clustering and visualization: A novel clustering of Fisher's iris data set. SIAM Undergraduate Research Online, 4:1–15, 2011. https://www.siam.org/media/s12ln4i2/spectral_clustering_and_visualization.pdf
- [5]
- [6] doi: 10.1016/j.neucom.2023.01.073
- [7] A. Bibal, R. Marion, R. von Sachs, and B. Frénay. BIOT: Explaining multidimensional nonlinear MDS embeddings using the Best Interpretable Orthogonal Transformation. Neurocomputing, 453:109–118, 2021. doi: 10.1016/j.neucom.2021.04.088
- [8] J. N. Böhm, P. Berens, and D. Kobak. Unsupervised visualization of image datasets using contrastive learning. arXiv:2210.09879, 2022. doi: 10.48550/arXiv.2210.09879
- [9] J. N. Böhm, P. Berens, and D. Kobak. Attraction-repulsion spectrum in neighbor embeddings. J. Mach. Learn. Res., 23(95):1–32, 2022. https://jmlr.org/papers/volume23/21-0055/21-0055.pdf
- [10] T. T. Cai and R. Ma. Theoretical foundations of t-SNE for visualizing high-dimensional clustered data. J. Mach. Learn. Res., 23(301):1–54, 2022. https://jmlr.org/papers/volume23/21-0524/21-0524.pdf
- [11]
- [12] R. R. Coifman and S. Lafon. Diffusion maps. Appl. Comput. Harmon. Anal., 21(1):5–30, 2006. doi: 10.1016/j.acha.2006.04.006
- [13] D. B. Coimbra, R. M. Martins, T. T. Neves, A. C. Telea, and F. V. Paulovich. Explaining three-dimensional dimensionality reduction plots. Inf. Vis., 15(2):154–172, 2016. doi: 10.1177/1473871615600010
- [14] E. Couplet, P. Lambert, M. Verleysen, D. Mulders, J. A. Lee, and C. De Bodt. Natively interpretable t-SNE. In Proc. ECML PKDD, pp. 107–123. Springer, 2023. doi: 10.1007/978-3-031-74630-7_8
- [15] J. P. Cunningham and Z. Ghahramani. Linear dimensionality reduction: Survey, insights, and generalizations. J. Mach. Learn. Res., 16(89):2859–2900, 2015. https://jmlr.org/papers/volume16/cunningham15a/cunningham15a.pdf
- [16] S. Damrich, N. Böhm, F. A. Hamprecht, and D. Kobak. From t-SNE to UMAP with contrastive learning. In Proc. ICLR, 2023. 44 pages. arXiv:2206.01816
- [17] S. Damrich and F. A. Hamprecht. On UMAP's true loss function. Adv. Neural Inf. Process. Syst., 34:5798–5809, 2021. https://proceedings.nips.cc/paper/2021/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf
- [18] C. de Bodt, A. Diaz-Papkovich, M. Bleher, K. Bunte, C. Coupette, S. Damrich et al. Low-dimensional embeddings of high-dimensional data, Aug 2025. doi: 10.48550/arXiv.2508.15929
- [19]
- [20] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Adv. Neural Inf. Process. Syst., 29, 2016. https://proceedings.neurips.cc/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf
- [21] L. Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process. Mag., 29(6):141–142, 2012. doi: 10.1109/MSP.2012.2211477
- [22] X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst. Learning Laplacian matrix in smooth graph signal representations. IEEE Trans. Signal Process., 64(23):6160–6173, 2016. doi: 10.1109/TSP.2016.2602809
- [23]
- [24] R. Faust, D. Glickenstein, and C. Scheidegger. DimReader: Axis lines that explain non-linear projections. IEEE Trans. Vis. Comput. Graph., 25(01):481–490, 2019. doi: 10.1109/TVCG.2018.2865194
- [25] J. J. Forman, P. A. Clemons, S. L. Schreiber, and S. J. Haggarty. SpectralNET – an application for spectral graph analysis and visualization. BMC Bioinformatics, 6(1), 2005. doi: 10.1186/1471-2105-6-260
- [26] T. Fujiwara, O.-H. Kwon, and K.-L. Ma. Supporting analysis of dimensionality reduction results with contrastive learning. IEEE Trans. Vis. Comput. Graph., 26(01):45–55, 2020. doi: 10.1109/TVCG.2019.2934251
- [27] T. Fujiwara, X. Wei, J. Zhao, and K.-L. Ma. Interactive dimensionality reduction for comparative analysis. IEEE Trans. Vis. Comput. Graph., 28(1):758–768, 2022. doi: 10.1109/TVCG.2021.3114807
- [28]
- [29] A. Gisbrecht, A. Schulz, and B. Hammer. Parametric nonlinear dimensionality reduction using kernel t-SNE. Neurocomputing, 147:71–82, 2015. doi: 10.1016/j.neucom.2013.11.045
- [30] X. Guo, W. Xu, W. Zhang, C. Pan, A. E. Thalacker-Mercer, H. Zheng et al. High-frequency and functional mitochondrial DNA mutations at the single-cell level. Proc. Natl. Acad. Sci. U.S.A., 120(1):e2201518120, 2023. doi: 10.1073/pnas.2201518120
- [31] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal., 30(2):129–150, 2011. doi: 10.1016/j.acha.2010.04.005
- [32]
- [33] N. Heulot, M. Aupetit, and J.-D. Fekete. ProxiLens: Interactive exploration of high-dimensional data using projections. In Proc. VAMP, 5 pages. The Eurographics Association, 2013. doi: 10.2312/PE.VAMP.VAMP2013.011-015
- [34] T. Höllt, N. Pezzotti, V. van Unen, F. Koning, E. Eisemann, B. Lelieveldt et al. Cytosplore: Interactive immune cell phenotyping for large single-cell datasets. Comput. Graph. Forum, 35(3):171–180, 2016. doi: 10.1111/cgf.12893
- [36] G. Huguet, A. Tong, E. De Brouwer, Y. Zhang, G. Wolf, I. Adelstein et al. A heat diffusion perspective on geodesic preserving dimensionality reduction. Adv. Neural Inf. Process. Syst., 36:6986–7016, 2023. https://proceedings.neurips.cc/paper_files/paper/2023/file/16063a1c0f0cddd4894585cf44cebb2c-Paper-Conference.pdf
- [37] H. Jeon, A. Cho, J. Jang, S. Lee, J. Hyun, H.-K. Ko et al. ZADU: A Python library for evaluating the reliability of dimensionality reduction embeddings. In Proc. VIS, pp. 196–200, 2023. doi: 10.1109/VIS54172.2023.00048
- [38] H. Jeon, Y.-H. Kuo, M. Aupetit, K.-L. Ma, and J. Seo. Classes are not clusters: Improving label-based evaluation of dimensionality reduction. IEEE Trans. Vis. Comput. Graph., 30(1):781–791, 2024. doi: 10.1109/TVCG.2023.3327187
- [39] H. Jeon, H. Lee, Y.-H. Kuo, T. Yang, D. Archambault, S. Ko et al. Unveiling high-dimensional backstage: A survey for reliable visual analytics with dimensionality reduction. In Proc. CHI, art. no. 394, 24 pages. ACM. doi: 10.1145/3706598.3713551
- [40]
- [41] M. Jung, T. Fujiwara, and J. Jo. GhostUMAP2: Measuring and analyzing (r,d)-stability of UMAP. IEEE Trans. Vis. Comput. Graph., 32(1):353–362, 2026. doi: 10.1109/TVCG.2025.3633894
- [42] D. Kobak and G. C. Linderman. Initialization is critical for preserving global data structure in both t-SNE and UMAP. Nat. Biotechnol., 39(2):156–157, 2021. doi: 10.1038/s41587-020-00809-z
- [43] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964. doi: 10.1007/BF02289565
- [44] J. A. Lee and M. Verleysen. Quality assessment of dimensionality reduction: Rank-based criteria. Neurocomputing, 72(7–9):1431–1443, 2009. doi: 10.1016/j.neucom.2008.12.017
- [45] R. B. Lehoucq, D. C. Sorensen, and C. Yang. ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. SIAM, 1998. doi: 10.1137/1.9780898719628
- [46] N. Li, V. van Unen, T. Höllt, A. Thompson, J. van Bergen, N. Pezzotti et al. Mass cytometry reveals innate lymphoid cell differentiation pathways in the human fetal intestine. J. Exp. Med., 215(5):1383–1396, 2018. doi: 10.1084/jem.20171934
- [47] D. Liao, C. Liu, B. W. Christensen, A. Tong, G. Huguet, G. Wolf et al. Assessing neural network representations during training using noise-resilient diffusion spectral entropy. In Proc. CISS, pp. 1–6. IEEE, 2024. doi: 10.1109/CISS59072.2024.10480166
- [48] Q. Liu, Y. Ren, Z. Zhu, D. Li, X. Ma, and Q. Li. RankAxis: Towards a systematic combination of projection and ranking in multi-attribute data exploration. IEEE Trans. Vis. Comput. Graph., 29(1):701–711, 2023. doi: 10.1109/TVCG.2022.3209463
- [49] A. Machado, M. Behrisch, and A. Telea. Necessary but not sufficient: Limitations of projection quality metrics. Comput. Graph. Forum, 44(3):e70101, 2025. doi: 10.1111/cgf.70101
- [50] A. Maćkiewicz and W. Ratajczak. Principal components analysis (PCA). Comput. Geosci., 19(3):303–342, 1993.
- [51] T. Manz, F. Lekschas, E. Greene, G. Finak, and N. Gehlenborg. A general framework for comparing embedding visualizations across class-label hierarchies. IEEE Trans. Vis. Comput. Graph., 31(1):283–293, 2025. doi: 10.1109/TVCG.2024.3456370
- [52] L. McInnes, J. Healy, and J. Melville. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv:1802.03426, 2018. doi: 10.48550/arXiv.1802.03426
- [53] B. Montambault, G. Appleby, J. Rogers, C. D. Brumar, M. Li, and R. Chang. DimBridge: Interactive explanation of visual patterns in dimensionality reductions with predicate logic. IEEE Trans. Vis. Comput. Graph., 31(1):207–217, 2025. doi: 10.1109/TVCG.2024.3456391
- [54] K. R. Moon, D. van Dijk, Z. Wang, S. Gigante, D. B. Burkhardt, W. S. Chen et al. Visualizing structure and transitions in high-dimensional biological data. Nat. Biotechnol., 37(12):1482–1492, Dec. 2019. doi: 10.1038/s41587-019-0336-3
- [55] N. Okami, K. Miyake, N. Sakamoto, J. Nonaka, and T. Fujiwara. Visual analytics using tensor unified linear comparative analysis. IEEE Trans. Vis. Comput. Graph., 32(1):79–89, 2026. doi: 10.1109/TVCG.2025.3633912
- [56] J. S. Packer, Q. Zhu, C. Huynh, P. Sivaramakrishnan, E. Preston, H. Dueck et al. A lineage-resolved molecular atlas of C. elegans embryogenesis at single-cell resolution. Science, 365(6459):eaax1971, 2019. doi: 10.1126/science.aax1971
- [57] P. Rosen, M. Hajij, and B. Wang. Homology-preserving multi-scale graph skeletonization using mapper on graphs. In Proc. TopoInVis, pp. 10–20. IEEE, 2023. doi: 10.1109/TopoInVis60193.2023.00008
- [58] T. L. Saidi, A. Hickok, and A. J. Blumberg. Recovering manifold structure using Ollivier Ricci curvature. In Proc. ICLR, 2025. doi: 10.48550/arXiv.2410.01149
- [59] T. Sainburg, L. McInnes, and T. Q. Gentner. Parametric UMAP embeddings for representation and semisupervised learning. Neural Comput., 33(11):2881–2907, 2021. doi: 10.1162/neco_a_01434
- [60] P. Salmanian, A. Chatzimparmpas, A. C. Karaca, and R. M. Martins. DimVis: Interpreting visual clusters in dimensionality reduction with Explainable Boosting Machine. In Proc. MLVis. The Eurographics Association, 2024. doi: 10.2312/mlvis.20241125
- [61] A. Sarikaya and M. Gleicher. Scatterplots: Tasks, data, and designs. IEEE Trans. Vis. Comput. Graph., 24(1):402–412, 2018. doi: 10.1109/TVCG.2017.2744184
- [62] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag., 30(3):83–98, 2013.
- [63] S. Siegel. Nonparametric statistics for the behavioral sciences. J. Nerv. Ment. Dis., 125(3), 1957.
- [64] P. Sivaramakrishnan, C. Watkins, and J. I. Murray. Transcript accumulation rates in the early Caenorhabditis elegans embryo. Sci. Adv., 9(34):eadi1270, 2023. doi: 10.1126/sciadv.adi1270
- [65] D. Spielman. Spectral graph theory. In U. Naumann and O. Schenk, eds., Combinatorial Scientific Computing, vol. 18. CRC Press, Boca Raton, Florida, 2012. doi: 10.1201/b11644
- [66]
- [67] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000. doi: 10.1126/science.290.5500.2319
- [68] V. Thibeault, A. Allard, and P. Desrosiers. The low-rank hypothesis of complex systems. Nat. Phys., 20(2):294–302, 2024. doi: 10.1038/s41567-023-02303-0
- [69] Z. Tian, X. Zhai, D. van Driel, G. van Steenpaal, M. Espadoto, and A. Telea. Using multiple attribute-based explanations of multidimensional projections to explore high-dimensional data. Comput. Graph., 98:93–104, 2021. doi: 10.1016/j.cag.2021.04.034
- [70]
- [71]
- [72] C. Trapnell, D. Cacchiarelli, J. Grimsby, P. Pokharel, S. Li, M. Morse et al. The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells. Nat. Biotechnol., 32(4):381–386, 2014. doi: 10.1038/nbt.2859
- [73]
- [74] A. Tsitsulin, D. Mottin, P. Karras, A. Bronstein, and E. Müller. NetLSD: Hearing the shape of a graph. In Proc. KDD, pp. 2347–2356, 2018. doi: 10.1145/3219819.3219991
- [75] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. J. Mach. Learn. Res., 9(11), 2008. https://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf
- [76] L. van der Maaten, E. Postma, and J. van den Herik. Dimensionality reduction: A comparative review. Technical Report TiCC-TR 2009-005, Tilburg University, 2009. 36 pages. https://lvdmaaten.github.io/publications/papers/TR_Dimensionality_Reduction_Review_2009.pdf
- [77] V. van Unen, T. Höllt, N. Pezzotti, N. Li, M. J. T. Reinders, E. Eisemann et al. Visual analysis of mass cytometry data by hierarchical stochastic neighbour embedding reveals rare cell types. Nat. Commun., 8(1), 2017. doi: 10.1038/s41467-017-01689-9
- [78] J. Venna and S. Kaski. Local multidimensional scaling. Neural Netw., 19(6–7):889–899, 2006. doi: 10.1016/j.neunet.2006.05.014
- [79] D. E. Wagner and A. M. Klein. Lineage tracing meets single-cell omics: opportunities and challenges. Nat. Rev. Genet., 21(7):410–427, 2020. doi: 10.1038/s41576-020-0223-2
- [80] X. Wang and M. Zhang. How powerful are spectral graph neural networks. In Proc. ICML, vol. 162, pp. 23341–23362. PMLR, 2022. https://proceedings.mlr.press/v162/wang22am/wang22am.pdf
- [81] Y. Wang, Y. Sun, H. Huang, and C. Rudin. Dimension reduction with locally adjusted graphs. In Proc. AAAI, vol. 39, pp. 21357–21365, 2025. doi: 10.1609/aaai.v39i20.35436