pith. machine review for the scientific record. sign in

arxiv: 2604.02535 · v1 · submitted 2026-04-02 · 💻 cs.LG · cs.HC

Recognition: 2 theorem links

· Lean Theorem

A Spectral Framework for Multi-Scale Nonlinear Dimensionality Reduction

Angelos Chatzimparmpas, Takanori Fujiwara, Thomas H\"ollt, Zeyang Huang

Pith reviewed 2026-05-13 21:14 UTC · model grok-4.3

classification 💻 cs.LG cs.HC
keywords dimensionality reductionspectral methodsmulti-scale embeddingsgraph frequency analysisnonlinear DRcross-entropy optimizationmanifold continuityvisual analytics
0
0 comments X

The pith

A spectral framework pairs linear decomposition with cross-entropy optimization to create multi-scale nonlinear dimensionality reduction embeddings that preserve both global and local structure.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a framework that starts from a linear spectral decomposition of the data graph to capture global geometry and then refines the embedding with cross-entropy optimization to add local neighborhood separation. This combination is meant to resolve the longstanding tension where local-focused methods distort overall manifold shape and global-focused methods lose fine detail. Because the starting point remains linear and explicit, the approach also lets users trace how individual spectral modes shape the final layout through a graph-frequency lens, and it adds glyph overlays on scatterplots for visual inspection. If the method works as described, it would give practitioners embeddings that are both more faithful across scales and more open to direct analysis than current nonlinear techniques.

Core claim

The paper claims that high-dimensional data can be embedded via a spectral basis derived from linear decomposition, then adjusted through cross-entropy optimization, to produce representations that simultaneously maintain global manifold geometry and local neighborhood separation. The same linear basis supports post-hoc examination of the embedding by decomposing it into contributions from different graph-frequency modes. Glyph-based augmentations on the resulting scatterplots further allow visual tracing of those mode influences.

What carries the argument

The spectral basis obtained from linear spectral decomposition of the input graph, which supplies an explicit multi-scale foundation that cross-entropy optimization then refines while preserving analytical access to frequency-mode contributions.

If this is right

  • The embedding process becomes inspectable by isolating the contribution of each spectral mode to the final coordinates.
  • Scatterplot visualizations can be augmented with glyphs that directly encode those mode contributions for qualitative checks.
  • Manifold continuity improves because the spectral starting point anchors global shape before local refinement occurs.
  • The same pipeline supports quantitative comparisons that track how changes in spectral truncation affect both scale levels.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The framework might be tested on time-varying or streaming data by updating the spectral basis incrementally rather than recomputing from scratch.
  • One could replace cross-entropy with other contrastive losses and measure whether the graph-frequency analysis remains equally informative.
  • Integration with existing visualization software could let users interactively suppress or amplify specific frequency bands to explore embedding sensitivity.

Load-bearing premise

That a linear spectral decomposition can be merged with cross-entropy optimization to improve both global and local fidelity without creating new uncontrolled distortions in the embedding.

What would settle it

Apply the framework to a standard test manifold such as the Swiss roll or a hierarchical cluster dataset, then measure global structure error against Laplacian Eigenmaps and local neighborhood error against t-SNE; if the new embeddings score worse on both metrics than the specialized baselines, the central claim is falsified.

Figures

Figures reproduced from arXiv: 2604.02535 by Angelos Chatzimparmpas, Takanori Fujiwara, Thomas H\"ollt, Zeyang Huang.

Figure 1
Figure 1. Figure 1: Comparison of data structure, UMAP [47], and our spectral decomposition snapshots on three datasets. As the number of used spectral modes S increases, low-frequency snapshots recover coarse organization first and then add finer detail toward the full-spectrum embedding. analyzed through contrastive feature analysis [23] or supervised models such as decision trees [5] and explainable boosting machines [55].… view at source ↗
Figure 2
Figure 2. Figure 2: Petal glyph. The outline length of each petal rep￾resents |un,s|. The filled length repre￾sents ∥un,s ·ps∥. Comparing these two lengths reveals how cross-entropy op￾timization amplifies or suppresses in￾dividual spectral modes. We further encode this change using color: red indicates that a mode is emphasized, while blue indicates that it is suppressed. Color intensity reflects the magnitude of this change… view at source ↗
Figure 3
Figure 3. Figure 3: Reconstruction error across datasets as subspace [PITH_FULL_IMAGE:figures/full_fig_p006_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Qualitative comparison of embeddings across three representative datasets as [PITH_FULL_IMAGE:figures/full_fig_p007_4.png] view at source ↗
Figure 6
Figure 6. Figure 6: Spectral response at the final stage with full spectral modes for [PITH_FULL_IMAGE:figures/full_fig_p008_6.png] view at source ↗
Figure 5
Figure 5. Figure 5: (a), already at S = 8, amphid sensory classes such as AFD, ASH, [PITH_FULL_IMAGE:figures/full_fig_p008_5.png] view at source ↗
Figure 7
Figure 7. Figure 7: Local spectral explanation of the embedding for MNIST handwrit [PITH_FULL_IMAGE:figures/full_fig_p009_7.png] view at source ↗
read the original abstract

Dimensionality reduction (DR) is characterized by two longstanding trade-offs. First, there is a global-local preservation tension: methods such as t-SNE and UMAP prioritize local neighborhood preservation, yet may distort global manifold structure, while methods such as Laplacian Eigenmaps preserve global geometry but often yield limited local separation. Second, there is a gap between expressiveness and analytical transparency: many nonlinear DR methods produce embeddings without an explicit connection to the underlying high-dimensional structure, limiting insight into the embedding process. In this paper, we introduce a spectral framework for nonlinear DR that addresses these challenges. Our approach embeds high-dimensional data using a spectral basis combined with cross-entropy optimization, enabling multi-scale representations that bridge global and local structure. Leveraging linear spectral decomposition, the framework further supports analysis of embeddings through a graph-frequency perspective, enabling examination of how spectral modes influence the resulting embedding. We complement this analysis with glyph-based scatterplot augmentations for visual exploration. Quantitative evaluations and case studies demonstrate that our framework improves manifold continuity while enabling deeper analysis of embedding structure through spectral mode contributions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper introduces a spectral framework for nonlinear dimensionality reduction that combines linear spectral decomposition (via graph Laplacian eigendecomposition) with cross-entropy optimization to produce multi-scale embeddings bridging global and local manifold structure, while enabling graph-frequency analysis of mode contributions and glyph-based visual augmentations.

Significance. If the central claim holds, the framework would meaningfully address the global-local tension in DR methods and the expressiveness-transparency gap by retaining analytical access to spectral modes after nonlinear optimization, offering a principled alternative to purely heuristic nonlinear techniques like t-SNE or UMAP.

major comments (1)
  1. [§3.2 and §4.1] §3.2, Eq. (4) and §4.1: the spectral basis is obtained from eigendecomposition of the graph Laplacian, yet the cross-entropy loss minimized in §4.1 operates directly on pairwise distances without any regularization or projection term that would preserve stable contributions from the original eigenmodes. No derivation is provided showing that post-optimization coordinates remain interpretable in the graph-frequency basis, which directly threatens the multi-scale bridging and frequency-analysis claims.
minor comments (1)
  1. [Abstract] The abstract states that quantitative evaluations demonstrate improved manifold continuity, but no specific metrics, baselines, or ablation results are previewed; adding a brief table reference would improve clarity.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for their constructive comments. The major comment identifies a need for explicit derivation regarding post-optimization spectral interpretability. We address this point directly below and outline the planned revision.

read point-by-point responses
  1. Referee: [§3.2 and §4.1] §3.2, Eq. (4) and §4.1: the spectral basis is obtained from eigendecomposition of the graph Laplacian, yet the cross-entropy loss minimized in §4.1 operates directly on pairwise distances without any regularization or projection term that would preserve stable contributions from the original eigenmodes. No derivation is provided showing that post-optimization coordinates remain interpretable in the graph-frequency basis, which directly threatens the multi-scale bridging and frequency-analysis claims.

    Authors: We appreciate the referee highlighting this aspect. The embedding is initialized via the spectral basis from the graph Laplacian eigendecomposition in Eq. (4) of §3.2, after which the cross-entropy loss in §4.1 refines the coordinates for improved local preservation. While the optimization lacks an explicit regularization term, the coordinates remain linear combinations of the original eigenmodes by construction. We acknowledge, however, that the manuscript does not supply a formal derivation of the stability of these frequency contributions after optimization. We will revise the paper by inserting a new subsection after §4.1 that derives the re-projection of optimized coordinates onto the graph-frequency basis and shows that mode contributions remain interpretable under standard manifold smoothness assumptions. The revision will also include a brief empirical check confirming that frequency signatures are preserved. This change directly addresses the concern and strengthens the multi-scale and frequency-analysis claims. revision: yes

Circularity Check

0 steps flagged

No circularity: derivation combines standard spectral basis with cross-entropy optimization without self-referential reduction

full rationale

The paper's core construction starts from eigendecomposition of a graph Laplacian (standard linear spectral step) and then applies cross-entropy optimization to produce the final embedding coordinates. No equation in the provided abstract or described sections shows the optimized coordinates being redefined back into the spectral basis or the optimization loss being fitted to recover the same eigenmodes by construction. The multi-scale bridging claim rests on the explicit combination of these two independent components rather than on any parameter being renamed as a prediction or on a self-citation chain that forbids alternatives. External benchmarks (quantitative evaluations and case studies) are invoked to support performance, keeping the derivation self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Review based on abstract only; no explicit free parameters, axioms, or invented entities are detailed in the provided text. The approach appears to rely on standard linear spectral decomposition and cross-entropy loss without introducing new postulated entities.

pith-pipeline@v0.9.0 · 5497 in / 1114 out tokens · 38619 ms · 2026-05-13T21:14:22.295712+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

84 extracted references · 84 canonical work pages · 4 internal anchors

  1. [1]

    Balestriero and Y

    R. Balestriero and Y . LeCun. Contrastive and non-contrastive self-supervised learning recover global and local spectral em- bedding methods.Adv. Neural Inf. Process. Syst., 35:26671– 26685, 2022. https://proceedings.neurips.cc/paper_files/paper/2022/file/ aa56c74513a5e35768a11f4e82dd7ffb-Paper-Conference.pdf. 1, 3, 9

  2. [2]

    Becht, L

    E. Becht, L. McInnes, J. Healy, C.-A. Dutertre, I. W. Kwok, L. G. Ng et al. Dimensionality reduction for visualizing single-cell data using UMAP. Nat. Biotechnol., 37(1):38–44, 2019. doi: 10.1038/nbt.4314 1, 2, 8

  3. [3]

    Belkin and P

    M. Belkin and P. Niyogi. Laplacian Eigenmaps for dimensionality reduc- tion and data representation.Neural Comput., 15(6):1373–1396, 2003. doi: 10.1162/089976603321780317 1, 2, 3, 6, 7

  4. [4]

    Benson-Putnins, M

    D. Benson-Putnins, M. Bonfardin, M. E. Magnoni, and D. Martin. Spectral clustering and visualization: A novel clustering of Fisher’s iris data set. SIAM Undergraduate Research Online, 4:1–15, 2011. https://www.siam. org/media/s12ln4i2/spectral_clustering_and_visualization.pdf. 3

  5. [5]

    Bibal, V

    A. Bibal, V . Delchevalerie, and B. Frénay. DT-SNE: t-SNE discrete visualizations as decision tree structures.Neurocomputing, 529:101–112,

  6. [6]

    doi: 10.1016/j.neucom.2023.01.073 3

  7. [7]

    Bibal, R

    A. Bibal, R. Marion, R. von Sachs, and B. Frénay. BIOT: Explaining multidimensional nonlinear MDS embeddings using the Best Interpretable Orthogonal Transformation.Neurocomputing, 453:109–118, 2021. doi: 10.1016/j.neucom.2021.04.088. 1, 3

  8. [8]

    J. N. Böhm, P. Berens, and D. Kobak. Unsupervised visualization of image datasets using contrastive learning.arXiv:2210.09879, 2022. doi: 10.48550/arXiv.2210.09879 1

  9. [9]

    J. N. Böhm, P. Berens, and D. Kobak. Attraction-repulsion spectrum in neighbor embeddings.J. Mach. Learn. Res., 23(95):1–32, 2022. https: //jmlr.org/papers/volume23/21-0055/21-0055.pdf. 1, 2

  10. [10]

    T. T. Cai and R. Ma. Theoretical foundations of t-SNE for visualizing high-dimensional clustered data.J. Mach. Learn. Res., 23(301):1–54,

  11. [11]

    https://jmlr.org/papers/volume23/21-0524/21-0524.pdf. 3, 8

  12. [12]

    R. R. Coifman and S. Lafon. Diffusion maps.Appl. Comput. Harmon. Anal., 21(1):5–30, 2006. doi: 10.1016/j.acha.2006.04.006 3

  13. [13]

    D. B. Coimbra, R. M. Martins, T. T. Neves, A. C. Telea, and F. V . Paulovich. Explaining three-dimensional dimensionality reduction plots.Inf. Vis., 15(2):154–172, 2016. doi: 10.1177/1473871615600010 2

  14. [14]

    Couplet, P

    E. Couplet, P. Lambert, M. Verleysen, D. Mulders, J. A. Lee, and C. De Bodt. Natively interpretable t-SNE. InProc. ECML PKDD, pp. 107–123. Springer, 2023. doi: 10.1007/978-3-031-74630-7_8 1, 3

  15. [15]

    J. P. Cunningham and Z. Ghahramani. Linear dimensionality re- duction: Survey, insights, and generalizations.J. Mach. Learn. Res., 16(89):2859–2900, 2015. https://jmlr.org/papers/volume16/ cunningham15a/cunningham15a.pdf. 2

  16. [16]

    Damrich, N

    S. Damrich, N. Böhm, F. A. Hamprecht, and D. Kobak. From t-SNE to UMAP with contrastive learning. InProc. ICLR, 2023. 44 pages. doi: abs/2206.01816 1

  17. [17]

    Damrich and F

    S. Damrich and F. A. Hamprecht. On UMAP’s true loss function.Adv. Neural Inf. Process. Syst., 34:5798–5809, 2021. https://proceedings.nips. cc/paper/2021/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf. 1, 4, 9

  18. [18]

    de Bodt, A

    C. de Bodt, A. Diaz-Papkovich, M. Bleher, K. Bunte, C. Coupette, S. Dam- rich et al. Low-dimensional embeddings of high-dimensional data, Aug

  19. [19]

    doi: 10.48550/arXiv.2508.15929 2, 3

  20. [20]

    Defferrard, X

    M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering.Adv. Neural Inf. Process. Syst., 29, 2016. https://proceedings.neurips.cc/paper/2016/file/ 04df4d434d481c5bb723be1b6df1ee65-Paper.pdf. 1, 3, 9

  21. [21]

    L. Deng. The MNIST database of handwritten digit images for machine learning research.IEEE Signal Process. Mag., 29(6):141–142, 2012. doi: 10.1109/MSP.2012.2211477 4, 5

  22. [22]

    X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst. Learning Lapla- cian matrix in smooth graph signal representations.IEEE Trans. Signal Process., 64(23):6160–6173, 2016. doi: 10.1109/TSP.2016.2602809 6

  23. [23]

    Y . Dong, P. Soga, Y . He, S. Wang, and J. Li. Graph neural networks are more than filters: Revisiting and benchmarking from a spectral perspective. InProc. ICLR, 2026. doi: abs/2412.07188 3

  24. [24]

    Faust, D

    R. Faust, D. Glickenstein, and C. Scheidegger. DimReader: Axis lines that explain non-linear projections .IEEE Trans. Vis. Comput. Graph., 25(01):481–490, 2019. doi: 10.1109/TVCG.2018.2865194 2

  25. [25]

    J. J. Forman, P. A. Clemons, S. L. Schreiber, and S. J. Haggarty. Spectral- NET – an application for spectral graph analysis and visualization.BMC Bioinformatics, 6(1), 2005. doi: 10.1186/1471-2105-6-260 1, 3

  26. [26]

    Fujiwara, O.-H

    T. Fujiwara, O.-H. Kwon, and K.-L. Ma. Supporting analysis of dimen- sionality reduction results with contrastive learning.IEEE Trans. Vis. Comput. Graph., 26(01):45–55, 2020. doi: 10.1109/TVCG.2019.2934251 3

  27. [27]

    Fujiwara, X

    T. Fujiwara, X. Wei, J. Zhao, and K.-L. Ma. Interactive dimensionality reduction for comparative analysis.IEEE Trans. Vis. Comput. Graph., 28(1):758–768, 2022. doi: 10.1109/TVCG.2021.3114807 3

  28. [28]

    W. Fung, L. Wexler, and M. G. Heiman. Cell-type-specific promoters forC. elegansglia.J. Neurogenet., 34(3-4):335–346, 2020. doi: 10. 1080/01677063.2020.1781851 8

  29. [29]

    Gisbrecht, A

    A. Gisbrecht, A. Schulz, and B. Hammer. Parametric nonlinear dimension- ality reduction using kernel t-SNE.Neurocomputing, 147:71–82, 2015. doi: 10.1016/j.neucom.2013.11.045 1

  30. [30]

    X. Guo, W. Xu, W. Zhang, C. Pan, A. E. Thalacker-Mercer, H. Zheng et al. High-frequency and functional mitochondrial DNA mutations at the single-cell level.Proc. Natl. Acad. Sci. U.S.A., 120(1):e2201518120, 2023. doi: 10.1073/pnas.2201518120 6

  31. [31]

    D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory.Appl. Comput. Harmon. Anal., 30(2):129–150,

  32. [32]

    doi: 10.1016/j.acha.2010.04.005 3

  33. [33]

    Heulot, M

    N. Heulot, M. Aupetit, and J.-D. Fekete. ProxiLens: Interactive Ex- ploration of High-Dimensional Data using Projections. InProc. VAMP. The Eurographics Association, 2013. 5 pages. doi: 10.2312/PE.V AMP. V AMP2013.011-015 2

  34. [34]

    Höllt, N

    T. Höllt, N. Pezzotti, V . van Unen, F. Koning, E. Eisemann, B. Lelieveldt et al. Cytosplore: Interactive immune cell phenotyping for large single-cell datasets.Comput. Graph. Forum, 35(3):171–180, 2016. doi: 10.1111/cgf. 12893 8

  35. [36]

    Huguet, A

    G. Huguet, A. Tong, E. De Brouwer, Y . Zhang, G. Wolf, I. Adel- stein et al. A heat diffusion perspective on geodesic preserving di- mensionality reduction.Adv. Neural Inf. Process. Syst., 36:6986– 7016, 2023. https://proceedings.neurips.cc/paper_files/paper/2023/file/ 16063a1c0f0cddd4894585cf44cebb2c-Paper-Conference.pdf. 1, 3, 8, 9

  36. [37]

    H. Jeon, A. Cho, J. Jang, S. Lee, J. Hyun, H.-K. Ko et al. ZADU: A python library for evaluating the reliability of dimensionality reduction embeddings. InProc. VIS, pp. 196–200, 2023. doi: 10.1109/VIS54172. 2023.00048 7

  37. [38]

    Jeon, Y .-H

    H. Jeon, Y .-H. Kuo, M. Aupetit, K.-L. Ma, and J. Seo. Classes are not clusters: Improving label-based evaluation of dimensionality reduction. IEEE Trans. Vis. Comput. Graph., 30(1):781–791, 2024. doi: 10.1109/ TVCG.2023.3327187 2, 7

  38. [39]

    H. Jeon, H. Lee, Y .-H. Kuo, T. Yang, D. Archambault, S. Ko et al. Un- veiling high-dimensional backstage: A survey for reliable visual analytics with dimensionality reduction. InProc. CHI, art. no. 394, 24 pages. ACM,

  39. [40]

    doi: 10.1145/3706598.3713551 1, 2, 7

  40. [41]

    M. Jung, T. Fujiwara, and J. Jo. GhostUMAP2: Measuring and analyzing (r,d)-stability of UMAP.IEEE Trans. Vis. Comput. Graph., 32(1):353– 362, 2026. doi: 10.1109/TVCG.2025.3633894 2, 7

  41. [42]

    Kobak and G

    D. Kobak and G. C. Linderman. Initialization is critical for preserving global data structure in both t-SNE and UMAP.Nat Biotechnol, 39(2):156– 157, 2021. doi: 10.1038/s41587-020-00809-z 1, 3, 6, 8

  42. [43]

    J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis.Psychometrika, 29(1):1–27, 1964. doi: 10. 1007/BF02289565 7

  43. [44]

    J. A. Lee and M. Verleysen. Quality assessment of dimensionality reduc- tion: Rank-based criteria.Neurocomputing, 72(7–9):1431–1443, 2009. doi: 10.1016/j.neucom.2008.12.017 7

  44. [45]

    R. B. Lehoucq, D. C. Sorensen, and C. Yang.ARPACK Users’ Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. SIAM, 1998. doi: 10.1137/1.9780898719628 6

  45. [46]

    N. Li, V . van Unen, T. Höllt, A. Thompson, J. van Bergen, N. Pezzotti et al. Mass cytometry reveals innate lymphoid cell differentiation pathways in the human fetal intestine.J. Exp. Med., 215(5):1383–1396, 2018. doi: 10.1084/jem.20171934 8

  46. [47]

    D. Liao, C. Liu, B. W. Christensen, A. Tong, G. Huguet, G. Wolf et al. Assessing neural network representations during training using noise- resilient diffusion spectral entropy. InProc. CISS, pp. 1–6. IEEE, 2024. doi: 10.1109/CISS59072.2024.10480166 3

  47. [48]

    Q. Liu, Y . Ren, Z. Zhu, D. Li, X. Ma, and Q. Li. RankAxis: Towards a systematic combination of projection and ranking in multi-attribute data exploration.IEEE Trans. Vis. Comput. Graph., 29(1):701–711, 2023. doi: 10.1109/TVCG.2022.3209463 3

  48. [49]

    Machado, M

    A. Machado, M. Behrisch, and A. Telea. Necessary but not suffi- cient: Limitations of projection quality metrics.Comput. Graph. Forum, 44(3):e70101, 2025. doi: 10.1111/cgf.70101 2, 7

  49. [50]

    Ma´ckiewicz and W

    A. Ma´ckiewicz and W. Ratajczak. Principal components analysis (PCA). Comput. Geosci., 19(3):303–342, 1993. 6

  50. [51]

    T. Manz, F. Lekschas, E. Greene, G. Finak, and N. Gehlenborg. A general framework for comparing embedding visualizations across class-label hierarchies.IEEE Trans. Vis. Comput. Graph., 31(1):283–293, 2025. doi: 10.1109/TVCG.2024.3456370 3

  51. [52]

    UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction

    L. McInnes, J. Healy, and J. Melville. UMAP: Uniform manifold approxi- mation and projection for dimension reduction.arXiv:1802.03426, 2018. doi: 10.48550/arXiv.1802.03426 1, 2, 3, 4, 6, 7, 9

  52. [53]

    Montambault, G

    B. Montambault, G. Appleby, J. Rogers, C. D. Brumar, M. Li, and R. Chang. DimBridge: Interactive explanation of visual patterns in di- mensionality reductions with predicate logic.IEEE Trans. Vis. Comput. Graph., 31(1):207–217, 2025. doi: 10.1109/TVCG.2024.3456391 2

  53. [54]

    K. R. Moon, D. van Dijk, Z. Wang, S. Gigante, D. B. Burkhardt, W. S. Chen et al. Visualizing structure and transitions in high-dimensional biological data.Nat. Biotechnol., 37(12):1482–1492, Dec. 2019. doi: 10. 1038/s41587-019-0336-3 3, 5, 7, 8, 9

  54. [55]

    Okami, K

    N. Okami, K. Miyake, N. Sakamoto, J. Nonaka, and T. Fujiwara. Visual analytics using tensor unified linear comparative analysis.IEEE Trans. Vis. Comput. Graph., 32(1):79–89, 2026. doi: 10.1109/TVCG.2025.3633912 3

  55. [56]

    J. S. Packer, Q. Zhu, C. Huynh, P. Sivaramakrishnan, E. Preston, H. Dueck et al. A lineage-resolved molecular atlas ofC. elegansembryogenesis at single-cell resolution.Science, 365(6459):eaax1971, 2019. doi: 10. 1126/science.aax1971 5, 8

  56. [57]

    Rosen, M

    P. Rosen, M. Hajij, and B. Wang. Homology-preserving multi-scale graph skeletonization using mapper on graphs. InProc. TopoInVis, pp. 10–20. IEEE, 2023. doi: 10.1109/TopoInVis60193.2023.00008 3

  57. [58]

    T. L. Saidi, A. Hickok, and A. J. Blumberg. Recovering manifold structure using Ollivier Ricci curvature. InProc. ICLR, 2025. doi: 10.48550/arXiv. 2410.01149 2, 9

  58. [59]

    Sainburg, L

    T. Sainburg, L. McInnes, and T. Q. Gentner. Parametric UMAP embed- dings for representation and semisupervised learning.Neural Comput., 33(11):2881–2907, 10 2021. doi: 10.1162/neco_a_01434 1

  59. [60]

    Salmanian, A

    P. Salmanian, A. Chatzimparmpas, A. C. Karaca, and R. M. Martins. DimVis: Interpreting Visual Clusters in Dimensionality Reduction With Explainable Boosting Machine. InProc. MLVis. The Eurographics Asso- ciation, 2024. doi: 10.2312/mlvis.20241125 3

  60. [61]

    Sarikaya and M

    A. Sarikaya and M. Gleicher. Scatterplots: Tasks, data, and designs.IEEE Trans. Vis. Comput. Graph., 24(1):402–412, 2018. doi: 10.1109/TVCG. 2017.2744184 5

  61. [62]

    D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high- dimensional data analysis to networks and other irregular domains.IEEE Signal Process. Mag., 30(3):83–98, 2013. 1, 2, 3, 4, 6, 9

  62. [63]

    S. Sidney. Nonparametric statistics for the behavioral sciences.J. Nerv. Ment. Dis., 125(3), 1957. 7

  63. [64]

    Sivaramakrishnan, C

    P. Sivaramakrishnan, C. Watkins, and J. I. Murray. Transcript accu- mulation rates in the earlyCaenorhabditis elegansembryo.Sci. Adv., 9(34):eadi1270, 2023. doi: 10.1126/sciadv.adi1270 8

  64. [65]

    Spielman

    D. Spielman. Spectral graph theory. In U. Naumann and O. Schenk, eds.,Combinatorial Scientific Computing, vol. 18. CRC Press Boca Raton, Florida, 2012. doi: 10.1201/b11644 3

  65. [66]

    J. Tang, J. Liu, M. Zhang, and Q. Mei. Visualizing large-scale and high- dimensional data. InProc. WWW, pp. 287–297, 2016. doi: 10.1145/ 2872427.2883041 2

  66. [67]

    J. B. Tenenbaum, V . d. Silva, and J. C. Langford. A global geometric frame- work for nonlinear dimensionality reduction.Science, 290(5500):2319– 2323, 2000. doi: 10.1126/science.290.5500.2319 3, 5

  67. [68]

    Thibeault, A

    V . Thibeault, A. Allard, and P. Desrosiers. The low-rank hypothesis of complex systems.Nat. Phys., 20(2):294–302, 2024. doi: 10.1038/s41567 -023-02303-0 3, 6

  68. [69]

    Z. Tian, X. Zhai, D. van Driel, G. van Steenpaal, M. Espadoto, and A. Telea. Using multiple attribute-based explanations of multidimensional projections to explore high-dimensional data.Comput. Graph., 98:93–104,

  69. [70]

    doi: 10.1016/j.cag.2021.04.034 2

  70. [71]

    Trapnell

    C. Trapnell. Constructing single-cell trajectories. https://cole-trapnell-lab. github.io/monocle3/docs/trajectories/, 2022. Accessed: 2025-09-07. 5, 8

  71. [72]

    Trapnell, D

    C. Trapnell, D. Cacchiarelli, J. Grimsby, P. Pokharel, S. Li, M. Morse et al. The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells.Nat. Biotechnol., 32(4):381–386,

  72. [73]

    doi: 10.1038/nbt.2859 6

  73. [74]

    Vandal, E

    A. Tsitsulin, D. Mottin, P. Karras, A. Bronstein, and E. Müller. NetLSD: Hearing the shape of a graph. InProc. KDD, pp. 2347–2356, 2018. doi: 10.1145/3219819.3219991 3

  74. [75]

    van der Maaten and G

    L. van der Maaten and G. Hinton. Visualizing data using t-SNE.J. Mach. Learn. Res., 9(11), 2008. https://www.jmlr.org/papers/volume9/ vandermaaten08a/vandermaaten08a.pdf. 1, 2

  75. [76]

    van der Maaten, E

    L. van der Maaten, E. Postma, and J. van den Herik. Dimensionality reduc- tion: A comparative review. Technical Report TiCC-TR 2009-005, Tilburg University Technical Report, 2009. 36 pages, https://lvdmaaten.github.io/ publications/papers/TR_Dimensionality_Reduction_Review_2009.pdf. 2

  76. [77]

    van Unen, T

    V . van Unen, T. Höllt, N. Pezzotti, N. Li, M. J. T. Reinders, E. Eisemann et al. Visual analysis of mass cytometry data by hierarchical stochastic neighbour embedding reveals rare cell types.Nat. Commun., 8(1), 2017. doi: 10.1038/s41467-017-01689-9 8

  77. [78]

    Venna and S

    J. Venna and S. Kaski. Local multidimensional scaling.Neural Netw., 19(6-7):889–899, 2006. doi: 10.1016/j.neunet.2006.05.014 7

  78. [79]

    D. E. Wagner and A. M. Klein. Lineage tracing meets single-cell omics: opportunities and challenges.Nat. Rev. Genet., 21(7):410–427, 2020. doi: 10.1038/s41576-020-0223-2 8

  79. [80]

    Wang and M

    X. Wang and M. Zhang. How powerful are spectral graph neural networks. InProc. ICML, vol. 162, pp. 23341–23362. PMLR, 2022. https://proceedings.mlr.press/v162/wang22am/wang22am.pdf. 3, 9

  80. [81]

    Y . Wang, Y . Sun, H. Huang, and C. Rudin. Dimension reduction with locally adjusted graphs. InProc. AAAI, vol. 39, pp. 21357–21365, 2025. doi: 10.1609/aaai.v39i20.35436 7

Showing first 80 references.