Pith · machine review for the scientific record

arxiv: 1802.03426 · v3 · submitted 2018-02-09 · 📊 stat.ML · cs.CG · cs.LG

Recognition: 2 Lean theorem links

UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:06 UTC · model grok-4.3

classification 📊 stat.ML · cs.CG · cs.LG
keywords dimension reduction · manifold learning · data visualization · UMAP · t-SNE · machine learning · topological methods

The pith

UMAP matches t-SNE visualization quality with faster runtime and better global structure preservation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces UMAP as a new manifold learning technique for dimension reduction. It derives the method from a framework in Riemannian geometry and algebraic topology to produce a practical and scalable algorithm for real data. A sympathetic reader would care because the approach promises effective visualization of complex datasets along with use in general machine learning tasks where output dimension is not restricted.

Core claim

UMAP is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.

What carries the argument

The UMAP algorithm, which constructs a topological model of the data manifold from local geometric information for projection into lower dimensions.
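As a rough illustration of that construction, here is a minimal NumPy sketch of the local fuzzy-graph step: distances to each point's k nearest neighbors are rescaled by the nearest-neighbor distance and a per-point bandwidth, then the directed strengths are symmetrized. This is a brute-force toy, not the paper's implementation (which uses approximate nearest neighbors), and the helper name is ours.

```python
import numpy as np

def fuzzy_memberships(X, k=5):
    # Pairwise Euclidean distances (brute force; UMAP proper uses approximate kNN).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(X)
    mu = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 1]   # k nearest neighbors, excluding self
        rho = d[i, idx[0]]                # distance to the nearest neighbor
        # Binary search for sigma_i so the smoothed degree equals log2(k).
        lo, hi, target = 1e-8, 1e3, np.log2(k)
        for _ in range(64):
            sigma = (lo + hi) / 2
            s = np.exp(-np.maximum(d[i, idx] - rho, 0) / sigma).sum()
            lo, hi = (sigma, hi) if s < target else (lo, sigma)
        mu[i, idx] = np.exp(-np.maximum(d[i, idx] - rho, 0) / sigma)
    # Probabilistic t-conorm mu + mu.T - mu*mu.T yields a symmetric fuzzy graph.
    return mu + mu.T - mu * mu.T
```

The returned matrix is the weighted graph that the embedding stage then matches in low dimensions; all entries lie in [0, 1] and the graph is symmetric by construction.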

If this is right

  • It can replace t-SNE for visualization tasks on large datasets while running faster.
  • It supports dimension reduction to any number of dimensions without added computational cost.
  • It serves as a general preprocessing step in machine learning pipelines for high-dimensional data.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Fields handling very large datasets such as single-cell biology could gain new exploratory capabilities.
  • The method might combine with supervised learning models to improve feature extraction.
  • Tests on streaming data could show whether the approach extends beyond static datasets.

Load-bearing premise

The theoretical framework based in Riemannian geometry and algebraic topology can be translated into a practical scalable algorithm that achieves the claimed performance advantages over existing methods like t-SNE.

What would settle it

Benchmark runs on standard high-dimensional datasets would settle it: the claim fails if UMAP produces visualizations with worse cluster separation than t-SNE or requires more computation time.

Original abstract

UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 3 minor

Summary. The manuscript introduces UMAP, a dimension-reduction algorithm derived from a Riemannian-geometry and algebraic-topology framework. Local manifold structure is approximated by k-nearest-neighbor graphs that are converted into fuzzy simplicial sets; a cross-entropy objective is then minimized to obtain a low-dimensional embedding. The authors claim that the resulting method matches t-SNE visualization quality, preserves global structure more faithfully, runs faster, and admits arbitrary embedding dimensions, thereby serving as a general-purpose ML preprocessing tool.

Significance. If the performance claims are substantiated, UMAP supplies a theoretically grounded, scalable alternative to t-SNE that is immediately useful for visualization of large data sets and for dimension reduction prior to downstream learning tasks. The explicit construction of the fuzzy simplicial set and the provision of both the derivation (Section 2) and the implementable algorithm (Section 3) constitute a clear strength.

minor comments (3)
  1. [Section 4.1] The quantitative comparison tables would benefit from reporting both mean and standard deviation over multiple random seeds rather than single-run results.
  2. [Figure 3] The caption should state the precise UMAP hyperparameter values (n_neighbors, min_dist, etc.) used for each panel.
  3. [Section 2.2] The notation for the fuzzy simplicial set membership strengths could be introduced with a short reminder of the exponential kernel definition to aid readers unfamiliar with the topological construction.
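For readers who want the reminder requested in minor comment 3, the membership strength in UMAP's fuzzy simplicial set is usually written as (notation ours, following the common presentation rather than quoting the paper directly):

```latex
\mu_{j \mid i} = \exp\!\left( - \frac{\max\bigl(0,\; d(x_i, x_j) - \rho_i\bigr)}{\sigma_i} \right)
```

where $\rho_i$ is the distance from $x_i$ to its nearest neighbor and $\sigma_i$ is calibrated so that $\sum_j \mu_{j \mid i} = \log_2 k$; the directed strengths are then symmetrized via the probabilistic t-conorm $\mu_{ij} = \mu_{j\mid i} + \mu_{i\mid j} - \mu_{j\mid i}\,\mu_{i\mid j}$.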

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive summary, assessment of significance, and recommendation to accept the manuscript.

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The UMAP construction begins from an explicit Riemannian manifold approximation via local k-NN distance estimates converted to fuzzy simplicial sets (Section 2), followed by a cross-entropy minimization objective in the target embedding space (Section 3). These steps are derived from algebraic topology and geometry without reducing to fitted parameters renamed as predictions or to self-citations that carry the central claim. Empirical comparisons in Section 4 are presented as validation rather than as the source of the algorithm itself. No load-bearing step equates the output to the input by construction, satisfying the criteria for a self-contained derivation.
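The cross-entropy objective referenced above is, in the form usually quoted for UMAP (symbols ours): with $\mu_{ij}$ the high-dimensional membership strengths and $\nu_{ij}$ their low-dimensional counterparts,

```latex
C = \sum_{i \neq j} \left[ \mu_{ij} \log \frac{\mu_{ij}}{\nu_{ij}} + (1 - \mu_{ij}) \log \frac{1 - \mu_{ij}}{1 - \nu_{ij}} \right]
```

which the algorithm minimizes over the embedding coordinates by stochastic gradient descent with negative sampling.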

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Based solely on the abstract; full details of any parameters or assumptions not available.

axioms (1)
  • domain assumption: A theoretical framework based in Riemannian geometry and algebraic topology can be used to construct a practical dimension reduction algorithm.
    Directly stated in the abstract as the construction basis for UMAP.

pith-pipeline@v0.9.0 · 5380 in / 1188 out tokens · 88976 ms · 2026-05-10T17:06:36.445113+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Forward citations

Cited by 60 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. GPT-Image-2 in the Wild: A Twitter Dataset of Self-Reported AI-Generated Images from the First Week of Deployment

    cs.CV 2026-04 unverdicted novelty 8.0

    The first public dataset of 10,217 GPT-Image-2 generated images sourced from Twitter in the week after release, with CLIP taxonomy, OCR, face detection, clustering analyses, and a finding that C2PA provenance data is ...

  2. On the continuum limit of t-SNE for data visualization

    stat.ML 2026-04 unverdicted novelty 8.0

    t-SNE converges in the large-data limit to a non-convex variational energy with attraction and repulsion terms that admits a unique smooth minimizer but infinitely many discontinuous ones in one dimension.

  3. Making MLLMs Blind: Adversarial Smuggling Attacks in MLLM Content Moderation

    cs.CV 2026-04 unverdicted novelty 8.0

    Adversarial smuggling attacks encode harmful content into human-readable visuals that evade MLLM detection, achieving over 90% attack success rates on models like GPT-5 and Qwen3-VL via the new SmuggleBench benchmark.

  4. Discovering Language Model Behaviors with Model-Written Evaluations

    cs.CL 2022-12 unverdicted novelty 8.0

    Language models can automatically generate high-quality evaluation datasets that reveal new cases of inverse scaling, sycophancy, and concerning goal-seeking behaviors, including some worsened by RLHF.

  5. Determining star formation histories and age-metallicity relations with convolutional neural networks

    astro-ph.GA 2026-05 unverdicted novelty 7.0

    A CNN with attention and shared latent space recovers SFHs and metallicities from spectro-photometric data with ~0.12 dex age and ~0.03 dex metallicity dispersion while running thousands of times faster than full spec...

  6. PRISM-X: Experiments on Personalised Fine-Tuning with Human and Simulated Users

    cs.CL 2026-05 unverdicted novelty 7.0

    Preference fine-tuning outperforms prompting for personalisation but amplifies sycophancy and relationship-seeking, while simulated users recover aggregate rankings yet show far lower self-consistency and different to...

  7. scShapeBench: Discovering geometry from high dimensional scRNAseq data

    cs.LG 2026-05 unverdicted novelty 7.0

    scShapeBench supplies synthetic and real annotated single-cell datasets across four shape categories, with scReebTower outperforming PAGA and Mapper on topology-aware metrics.

  8. Much of Geospatial Web Search Is Beyond Traditional GIS

    cs.IR 2026-05 unverdicted novelty 7.0

    Analysis of 1.01 million unfiltered Bing queries identifies 18% as geospatial, dominated by transactional categories like costs (15.3%) that exceed traditional GIS scope.

  9. Quantifying the Reconstructability of Astrophysical Methods with Large Language Models and Information Theory: A Case Study in Spectral Reconstruction

    astro-ph.IM 2026-05 unverdicted novelty 7.0

    LLMs prompted with increasing levels of text on TNO spectral reconstruction from photometry reveal an entropy floor where implementation variance persists, showing text alone cannot capture all tacit expert knowledge ...

  10. An Experimental Method to Study Opinion Diffusion in Human-AI Hybrid Societies

    cs.SI 2026-05 unverdicted novelty 7.0

    Hybrid human-AI networks in 5x5 grids reached lower final polarization than human-only networks after eight rounds of opinion revision on polarizing topics.

  11. Privacy-Aware Video Anomaly Detection through Orthogonal Subspace Projection

    cs.CV 2026-05 unverdicted novelty 7.0

    A new orthogonal projection module for video anomaly detection suppresses facial attributes via weak face-presence signals and cosine alignment while preserving anomaly-relevant features like pose and motion.

  12. eXplaining to Learn (eX2L): Regularization Using Contrastive Visual Explanation Pairs for Distribution Shifts

    cs.CV 2026-05 unverdicted novelty 7.0

    eX2L improves robustness to distribution shifts by penalizing similarity between Grad-CAM maps of a label classifier and a confounder classifier, reaching new SOTA average and worst-group accuracy on the Spawrious benchmark.

  13. Knowing when to trust machine-learned interatomic potentials

    cs.LG 2026-05 unverdicted novelty 7.0

    PROBE recasts MLIP uncertainty quantification as selective classification by training a compact discriminative classifier on frozen per-atom backbone embeddings, yielding a reliability probability that tracks actual e...

  14. Sparsity as a Key: Unlocking New Insights from Latent Structures for Out-of-Distribution Detection

    cs.CV 2026-04 unverdicted novelty 7.0

    Sparse autoencoders on ViT class tokens reveal stable Class Activation Profiles for in-distribution data, enabling OOD detection via divergence from core energy profiles.

  15. From Chatbots to Confidants: A Cross-Cultural Study of LLM Adoption for Emotional Support

    cs.CL 2026-04 unverdicted novelty 7.0

    A cross-cultural survey finds LLM emotional support adoption ranges from 20% to 59% by country, with positive perceptions strongest among higher-SES, religious, married adults aged 25-44 and in English-speaking nations.

  16. GPT-Image-2 in the Wild: A Twitter Dataset of Self-Reported AI-Generated Images from the First Week of Deployment

    cs.CV 2026-04 accept novelty 7.0

    The first public dataset of 10,217 GPT-image-2 AI-generated images from Twitter, with CLIP taxonomy, OCR, face detection, and clustering analyses, plus the finding that C2PA credentials are stripped by the platform.

  17. The Platform Is Mostly Not a Platform: Token Economies and Agent Discourse on Moltbook

    cs.CY 2026-04 unverdicted novelty 7.0

    Moltbook operates as two largely separate layers: a dominant transactional token economy using protocols like MBC-20 and a thinner discursive conversation layer with only 3.6% agent overlap.

  18. Participatory provenance as representational auditing for AI-mediated public consultation

    cs.AI 2026-04 unverdicted novelty 7.0

    Participatory provenance auditing of Canada's AI strategy consultation shows official AI summaries exclude 15-17% of participants more than random baselines, with 33-88% exclusion for dissent clusters.

  19. Comparison Drives Preference: Reference-Aware Modeling for AI-Generated Video Quality Assessment

    cs.CV 2026-04 unverdicted novelty 7.0

    RefVQA uses a query-centered reference graph and graph-guided difference aggregation to improve AI-generated video quality assessment by incorporating inter-video comparisons.

  20. Neighbor Embedding for High-Dimensional Sparse Poisson Data

    stat.ML 2026-04 unverdicted novelty 7.0

    p-SNE embeds sparse Poisson count data into low dimensions by using KL divergence between Poisson distributions to measure pairwise dissimilarity and Hellinger distance to optimize the layout.

  21. Physics-informed, Generative Adversarial Design of Funicular Shells

    cs.CE 2026-04 unverdicted novelty 7.0

    A modified DCGAN with an auxiliary discriminator using the membrane factor generates stable, previously unseen funicular shells optimized for pure compression in three dimensions.

  22. MADE: A Living Benchmark for Multi-Label Text Classification with Uncertainty Quantification of Medical Device Adverse Events

    cs.CL 2026-04 unverdicted novelty 7.0

    MADE creates a contamination-resistant living benchmark for multi-label classification of medical device adverse events, with evaluations revealing model-specific trade-offs in accuracy and uncertainty quantification.

  23. Computational Lesions in Multilingual Language Models Separate Shared and Language-specific Brain Alignment

    cs.CL 2026-04 unverdicted novelty 7.0

    Lesioning a shared core in multilingual LLMs drops whole-brain fMRI encoding correlation by 60.32%, while language-specific lesions selectively weaken predictions only for the matched native language.

  24. L-fuzzy simplicial homology

    math.AT 2026-04 unverdicted novelty 7.0

    L-fuzzy simplicial homology generalizes simplicial homology to L-fuzzy subcomplexes by assigning values from a completely distributive lattice L to simplices and deriving associated homology modules.

  25. Emotion Concepts and their Function in a Large Language Model

    cs.AI 2026-04 unverdicted novelty 7.0

    Claude Sonnet 4.5 exhibits functional emotions via abstract internal representations of emotion concepts that causally influence its preferences and misaligned behaviors without implying subjective experience.

  26. Dynamic Context Evolution for Scalable Synthetic Data Generation

    cs.CL 2026-04 conditional novelty 7.0

    Dynamic Context Evolution prevents cross-batch mode collapse in LLMs by combining model self-assessment for idea filtering, embedding-based deduplication, and evolving prompts, yielding zero collapse and consistently ...

  27. Are We Recognizing the Jaguar or Its Background? A Diagnostic Framework for Jaguar Re-Identification

    cs.CV 2026-04 unverdicted novelty 7.0

    A new diagnostic framework using inpainted context ratios and laterality checks on a Pantanal jaguar benchmark reveals whether re-ID models depend on coat patterns or spurious background evidence.

  28. Beyond Corner Patches: Semantics-Aware Backdoor Attack in Federated Learning

    cs.CR 2026-03 unverdicted novelty 7.0

    SABLE shows that semantics-aware natural triggers enable effective backdoor attacks in federated learning against multiple aggregation rules while preserving benign accuracy.

  29. A Large-Scale Comparative Analysis of Imputation Methods for Single-Cell RNA Sequencing Data

    q-bio.GN 2026-03 unverdicted novelty 7.0

    A large benchmark finds traditional imputation methods for scRNA-seq data generally outperform deep learning ones, but numerical recovery does not reliably improve biological downstream analyses and no method wins acr...

  30. Scaling and evaluating sparse autoencoders

    cs.LG 2024-06 unverdicted novelty 7.0

    K-sparse autoencoders with dead-latent fixes produce clean scaling laws and better feature quality metrics that improve with size, shown by training a 16-million-latent model on GPT-4 activations.

  31. Stories in Space: In-Context Learning Trajectories in Conceptual Belief Space

    cs.CL 2026-05 unverdicted novelty 6.0

    LLMs perform in-context learning as trajectories through a structured low-dimensional conceptual belief space, with the structure visible in both behavior and internal representations and causally manipulable via inte...

  32. Set-Aggregated Genome Embeddings for Microbiome Abundance Prediction

    q-bio.GN 2026-05 unverdicted novelty 6.0

    Set-aggregated genome embeddings from genomic language models predict microbiome abundance profiles with improved generalization to novel genomes over classical bioinformatics methods.

  33. Probing Non-Equilibrium Grain Boundary Dynamics with XPCS and Domain-Adaptive Machine Learning

    cond-mat.mtrl-sci 2026-05 unverdicted novelty 6.0

    XPCS fluctuation maps analyzed via domain-adaptive ML trained on continuum simulations yield bulk diffusivity, GB stiffness, and effective GB concentration, demonstrating persistent non-equilibrium GB relaxation in na...

  34. BoolXLLM: LLM-Assisted Explainability for Boolean Models

    cs.AI 2026-05 unverdicted novelty 6.0

    BoolXLLM augments an existing Boolean rule learner with LLMs for feature selection, discretization thresholds, and natural-language rule translation to improve interpretability while preserving accuracy.

  35. Toward Modeling Player-Specific Chess Behaviors

    cs.AI 2026-05 unverdicted novelty 6.0

    Champion-specific embeddings and limited MCTS in Maia-2 reduce average Jensen-Shannon divergence to 16 historical chess champions' move distributions in a new latent-space metric, even as standard move accuracy falls.

  36. Behavioral Integrity Verification for AI Agent Skills

    cs.CR 2026-05 unverdicted novelty 6.0

    BIV audits AI agent skills at scale, finding 80% deviate from declared behavior on 49,943 skills and achieving 0.946 F1 for malicious skill detection.

  37. FastUMAP: Scalable Dimensionality Reduction via Bipartite Landmark Sampling

    cs.LG 2026-05 unverdicted novelty 6.0

    FastUMAP speeds up UMAP by 15x on 70k-point datasets via bipartite landmark sampling and Nystrom initialization while retaining 96% of the kNN accuracy of stronger baselines.

  38. SOMA: Efficient Multi-turn LLM Serving via Small Language Model

    cs.CL 2026-05 unverdicted novelty 6.0

    SOMA estimates a local response manifold from early turns and adapts a small surrogate model via divergence-maximizing prompts and localized LoRA fine-tuning for efficient multi-turn serving.

  39. Biosignal Fingerprinting: A Cross-Modal PPG-ECG Foundation Model

    cs.LG 2026-05 unverdicted novelty 6.0

    A cross-modal masked autoencoder creates reusable biosignal fingerprints that match or exceed specialist models on seven cardiovascular tasks using only single-modality input.

  40. In-Context Black-Box Optimization with Unreliable Feedback

    cs.LG 2026-05 unverdicted novelty 6.0

    FICBO pretrains a feedback-aware transformer with a structured prior on feedback distortion to adaptively exploit or ignore unreliable auxiliary signals during in-context black-box optimization.

  41. DexSynRefine: Synthesizing and Refining Human-Object Interaction Motion for Physically Feasible Dexterous Robot Actions

    cs.RO 2026-05 unverdicted novelty 6.0

    DexSynRefine synthesizes HOI motions with an extended manifold method, refines them via task-space residual RL, and adapts for sim-to-real transfer, outperforming kinematic retargeting by 50-70 percentage points on fi...

  42. Practical validation of synthetic pre-crash scenarios

    cs.RO 2026-05 unverdicted novelty 6.0

    A binning-based Bayesian ROPE equivalence testing method is introduced to quantitatively assess practical equivalence between synthetic and real pre-crash scenario datasets for driving automation safety impact evaluation.

  43. Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models

    cs.AI 2026-05 unverdicted novelty 6.0

    MoR lets clients train local reward models on private preferences and uses a learned Mixture-of-Rewards with GRPO on the server to align a shared base VLM without exchanging parameters, architectures, or raw data.

  44. OGPO: Sample Efficient Full-Finetuning of Generative Control Policies

    cs.LG 2026-05 unverdicted novelty 6.0

    OGPO is a sample-efficient off-policy method for full finetuning of generative control policies that reaches SOTA on robotic manipulation tasks and can recover from poor behavior-cloning initializations without expert data.

  45. DR-SNE: Density-Regularized Stochastic Neighbor Embedding

    cs.LG 2026-05 unverdicted novelty 6.0

    DR-SNE augments the SNE objective with a density regularization term from normalized log-density estimates to preserve relative densities while retaining neighborhood structure.

  46. Retrieval with Multiple Query Vectors through Anomalous Pattern Detection

    cs.LG 2026-05 unverdicted novelty 6.0

    A retrieval approach identifies anomalous dimensions in a set of query vectors and retrieves database vectors that are anomalous across those dimensions, with performance improving as query set size grows to around 8.

  47. LLM-Augmented Semantic Steering of Text Embedding Projection Spaces

    cs.HC 2026-05 unverdicted novelty 6.0

    LLM-augmented semantic steering lets analysts reshape text embedding projections by providing semantic groupings that an LLM externalizes and extends to improve alignment with intended structures using minimal interaction.

  48. Robust Conditional Conformal Prediction via Branched Normalizing Flow

    cs.LG 2026-05 unverdicted novelty 6.0

    Branched Normalizing Flow improves conditional coverage robustness of conformal prediction under distribution shift by normalizing test inputs to the calibration distribution and mapping prediction sets back.

  49. Disentangled Anatomy-Disease Diffusion (DADD) for Controllable Ulcerative Colitis Progression Synthesis

    cs.CV 2026-05 unverdicted novelty 6.0

    DADD disentangles anatomy and disease in a latent diffusion model using a Feature Purifier, ordinal disease embeddings, and Delta Steering to synthesize controllable ulcerative colitis progression images.

  50. Controlled Paraphrase Geometry in Sentence Embedding Space: Local Manifold Modeling and Latent Probing

    cs.CL 2026-05 unverdicted novelty 6.0

    Nonlinear polynomial models fit local paraphrase embedding clouds more accurately than linear ones and support geometrically consistent synthetic point generation, yet this geometric fidelity does not improve classifi...

  51. Class Angular Distortion Index for Dimensionality Reduction

    cs.LG 2026-05 unverdicted novelty 6.0

    CADI quantifies the preservation of relative cluster angles in low-dimensional projections using internal angles from point triples.

  52. Is Textual Similarity Invariant under Machine Translation? Evidence Based on the Political Manifesto Corpus

    cs.CL 2026-05 unverdicted novelty 6.0

    Machine translation preserves embedding similarity structure for ten languages but distorts it for four in the Manifesto Corpus, via a new non-inferiority testing framework.

  53. Diverse Image Priors for Black-box Data-free Knowledge Distillation

    cs.LG 2026-04 unverdicted novelty 6.0

    DIP-KD achieves state-of-the-art results in black-box data-free knowledge distillation across 12 benchmarks by synthesizing diverse image priors, applying contrastive learning, and using a primer student for soft-prob...

  54. DiRe-RAPIDS: Topology-faithful dimensionality reduction at scale

    cs.LG 2026-04 unverdicted novelty 6.0

    DiRe recovers exact first Betti numbers on noisy manifold stress tests, matches or beats GPU UMAP on classification, and preserves 3-4 times more topological structure than UMAP on 723K arXiv embeddings at similar speed.

  55. Diffusion-Guided Feature Selection via Nishimori Temperature: Noise-Based Spectral Embedding

    cs.LG 2026-04 unverdicted novelty 6.0

    NBSE identifies the Nishimori temperature where the Bethe Hessian singularizes to embed features via degree-corrected diffusion and selects one representative per redundant group, preserving accuracy at 30% retention ...

  56. StarCLR: Contrastive Learning Representation for Astronomical Light Curves

    astro-ph.SR 2026-04 conditional novelty 6.0

    StarCLR pretrains on TESS light curves via contrastive learning on overlapping subsequences and improves variable star classification F1 scores over scratch-trained models when fine-tuned on TESS, ZTF, and Gaia.

  57. Explainable AI in Speaker Recognition -- Making Latent Representations Understandable

    eess.AS 2026-04 unverdicted novelty 6.0

    Speaker recognition networks form hierarchical clusters in latent space that can be matched to semantic classes using new HCCM algorithm and quantified by Liebig's score.

  58. A Machine Learning Approach to Meteor Classification

    astro-ph.EP 2026-04 unverdicted novelty 6.0

    Machine learning clustering of meteor observations produces a new hardness classification H_class that refines traditional Kb models using more parameters and reveals compositional structure in meteoroid populations.

  59. Large language model-enabled automated data extraction for concrete materials informatics

    cond-mat.mtrl-sci 2026-04 unverdicted novelty 6.0

    LLM pipeline extracts nearly 9,000 high-quality blended-cement concrete records from over 27,000 publications with F1 scores up to 0.97 and enables ML analyses showing benefits of large diverse datasets.

  60. LatentGandr: Visual Exploration of Generative AI Latent Space via Local Embeddings

    cs.HC 2026-04 unverdicted novelty 6.0

    LatentGandr computes local principal components from neighborhood embeddings in generative model latent spaces and visualizes them as interactive grids to improve exploration over global slider methods.

Reference graph

Works this paper leans on

65 extracted references · 65 canonical work pages · cited by 122 Pith papers · 1 internal anchor

  1. [1]

    Pen-based recognition of handwritten digits data set

    E. Alpaydin and Fevzi Alimoglu. Pen-based recognition of handwritten digits data set. University of California, Irvine, Machine Learning Repository. Irvine: University of California, 4(2), 1998

  2. [2]

    Bloodspot: a database of healthy and malignant haematopoiesis updated with purified and single cell mRNA sequencing profiles

    Frederik Otzen Bagger, Savvas Kinalis, and Nicolas Rapin. Bloodspot: a database of healthy and malignant haematopoiesis updated with purified and single cell mRNA sequencing profiles. Nucleic Acids Research, 2018

  3. [3]

    Fuzzy set theory and topos theory

    Michael Barr. Fuzzy set theory and topos theory. Canad. Math. Bull , 29(4):501–508, 1986

  4. [4]

    Evaluation of UMAP as an alternative to t-SNE for single-cell data

    Etienne Becht, Charles-Antoine Dutertre, Immanuel W.H. Kwok, Lai Guan Ng, Florent Ginhoux, and Evan W. Newell. Evaluation of UMAP as an alternative to t-SNE for single-cell data. bioRxiv, 2018

  5. [5]

    Dimensionality reduction for visualizing single-cell data using UMAP

    Etienne Becht, Leland McInnes, John Healy, Charles-Antoine Dutertre, Immanuel WH Kwok, Lai Guan Ng, Florent Ginhoux, and Evan W Newell. Dimensionality reduction for visualizing single-cell data using UMAP. Nature Biotechnology, 37(1):38, 2019

  6. [6]

    Laplacian eigenmaps and spectral techniques for embedding and clustering

    Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems, pages 585–591, 2002

  7. [7]

    Laplacian eigenmaps for dimensionality reduction and data representation

    Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003

  8. [8]

    A survey on metric learning for feature vectors and structured data

    Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013

  9. [9]

    Omip-018: Chemokine receptor expression on human t helper cells

    Tess Brodie, Elena Brenna, and Federica Sallusto. Omip-018: Chemokine receptor expression on human t helper cells. Cytometry Part A, 83(6):530–532, 2013

  10. [10]

    API design for machine learning software: experiences from the scikit-learn project

    Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Gaël Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: ...

  11. [11]

    A molecular census of arcuate hypothalamus and median eminence cell types

    John N Campbell, Evan Z Macosko, Henning Fenselau, Tune H Pers, Anna Lyubetskaya, Danielle Tenen, Melissa Goldman, Anne MJ Verstegen, Jon M Resch, Steven A McCarroll, et al. A molecular census of arcuate hypothalamus and median eminence cell types. Nature neuroscience, 20(3):484, 2017

  12. [12]

    The single-cell transcriptional landscape of mammalian organogenesis

    Junyue Cao, Malte Spielmann, Xiaojie Qiu, Xingfan Huang, Daniel M Ibrahim, Andrew J Hill, Fan Zhang, Stefan Mundlos, Lena Christiansen, Frank J Steemers, et al. The single-cell transcriptional landscape of mammalian organogenesis. Nature, page 1, 2019

  13. [13]

    Gunnar Carlsson and Facundo Mémoli. Classifying clustering schemes. Foundations of Computational Mathematics, 13(2):221–252, 2013

  14. [14]

    Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah. Activation atlas. Distill, 2019. https://distill.pub/2019/activation-atlas

  15. [15]

    Brian Clark, Genevieve Stein-O’Brien, Fion Shiau, Gabrielle Cannon, Emily Davis, Thomas Sherman, Fatemeh Rajaii, Rebecca James-Esposito, Richard Gronostajski, Elana Fertig, et al. Comprehensive analysis of retinal development at single cell resolution identifies NFI factors as essential for mitotic exit and specification of late-born cells. bio...

  16. [16]

    Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006

  17. [17]

    Alex Diaz-Papkovich, Luke Anderson-Trocme, and Simon Gravel. Revealing multi-scale population structure in large cohorts. bioRxiv, page 423632, 2018

  18. [18]

    Wei Dong, Charikar Moses, and Kai Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11, pages 577–586, New York, NY, USA, 2011. ACM

  19. [19]

    Carlos Escolano, Marta R Costa-jussà, and José AR Fonollosa. (Self-attentive) autoencoder-based universal language representation for machine translation. arXiv preprint arXiv:1810.06351, 2018

  20. [20]

    Mateus Espadoto, Nina ST Hirata, and Alexandru C Telea. Deep learning multidimensional projections. arXiv preprint arXiv:1902.07958, 2019

  21. [21]

    Mateus Espadoto, Francisco Caio M Rodrigues, and Alexandru C Telea. Visual analytics of multidimensional projections for constructing classifier decision boundary maps

  22. [22]

    Greg Friedman et al. Survey article: an elementary illustrated introduction to simplicial sets. Rocky Mountain Journal of Mathematics, 42(2):353–423, 2012

  23. [23]

    Lukas Fuhrimann, Vahid Moosavi, Patrick Ole Ohlbrock, and Pierluigi Dacunto. Data-driven design: Exploring new structural forms using machine learning and graphic statics. arXiv preprint arXiv:1809.08660, 2018

  24. [24]

    Benoit Gaujac, Ilya Feige, and David Barber. Gaussian mixture models with Wasserstein distance. arXiv preprint arXiv:1806.04465, 2018

  25. [25]

    Paul G Goerss and John F Jardine. Simplicial homotopy theory. Springer Science & Business Media, 2009

  26. [26]

    Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8(Jun):1325–1368, 2007

  27. [27]

    Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417, 1933

  28. [28]

    Dmitry Kobak and Philipp Berens. The art of using t-SNE for single-cell transcriptomics. Nature Communications, 10(1):1–14, 2019

  29. [29]

    Dmitry Kobak and George C Linderman. UMAP does not preserve global structure any better than t-SNE when using the same initialization. bioRxiv, 2019

  30. [30]

    J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, Mar 1964

  31. [31]

    Siu Kwan Lam, Antoine Pitrou, and Stanley Seibert. Numba: A LLVM-based Python JIT compiler. In Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, LLVM ’15, pages 7:1–7:6, New York, NY, USA, 2015. ACM

  32. [32]

    Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits

  33. [33]

    John A Lee and Michel Verleysen. Shift-invariant similarities circumvent distance concentration in stochastic neighbor embedding and variants. Procedia Computer Science, 4:538–547, 2011

  34. [34]

    Xin Li, Ondrej E Dyck, Mark P Oxley, Andrew R Lupini, Leland McInnes, John Healy, Stephen Jesse, and Sergei V Kalinin. Manifold learning of four-dimensional scanning transmission electron microscopy. npj Computational Materials, 5(1):5, 2019

  35. [35]

    M. Lichman. UCI machine learning repository, 2013

  36. [36]

    George Linderman. FIt-SNE. https://github.com/KlugerLab/FIt-SNE, 2018

  37. [37]

    George C Linderman, Manas Rachh, Jeremy G Hoskins, Stefan Steinerberger, and Yuval Kluger. Efficient algorithms for t-distributed stochastic neighborhood embedding. arXiv preprint arXiv:1712.09005, 2017

  38. [38]

    George C Linderman and Stefan Steinerberger. Clustering with t-SNE, provably. SIAM Journal on Mathematics of Data Science, 1(2):313–332, 2019

  39. [39]

    Saunders Mac Lane. Categories for the working mathematician, volume 5. Springer Science & Business Media, 2013

  40. [40]

    J Peter May. Simplicial objects in algebraic topology, volume 11. University of Chicago Press, 1992

  41. [41]

    Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013

  42. [42]

    Kevin R Moon, David van Dijk, Zheng Wang, Scott Gigante, Daniel B Burkhardt, William S Chen, Kristina Yim, Antonia van den Elzen, Matthew J Hirn, Ronald R Coifman, et al. Visualizing structure and transitions in high-dimensional biological data. Nature Biotechnology, 37(12):1482–1492, 2019

  43. [43]

    Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-20). Technical report, 1996

  44. [44]

    Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-100). Technical report, 1996

  45. [45]

    Karolyn A Oetjen, Katherine E Lindblad, Meghali Goswami, Gege Gui, Pradeep K Dagur, Catherine Lai, Laura W Dillon, J Philip McCoy, and Christopher S Hourigan. Human bone marrow assessment by single cell RNA sequencing, mass cytometry and flow cytometry. bioRxiv, 2018

  46. [46]

    Jong-Eun Park, Krzysztof Polanski, Kerstin Meyer, and Sarah A Teichmann. Fast batch alignment of single cell transcriptomes unifies multiple mouse cell atlases into an integrated landscape. bioRxiv, page 397042, 2018

  47. [47]

    Jose Daniel Gallego Posada. Simplicial autoencoders. 2018

  48. [48]

    Emily Riehl. A leisurely introduction to simplicial sets. Unpublished expository article available online at http://www.math.harvard.edu/~eriehl, 2011

  49. [49]

    Emily Riehl. Category theory in context. Courier Dover Publications, 2017

  50. [50]

    John W Sammon. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, 100(5):401–409, 1969

  51. [51]

    Josef Spidlen, Karin Breuer, Chad Rosenberg, Nikesh Kotecha, and Ryan R Brinkman. FlowRepository: A resource of annotated flow cytometry datasets associated with peer-reviewed publications. Cytometry Part A, 81(9):727–731, 2012

  52. [52]

    David I Spivak. Metric realization of fuzzy simplicial sets. Self-published notes, 2012

  53. [53]

    Jian Tang. LargeVis. https://github.com/lferry007/LargeVis, 2016

  54. [54]

    Jian Tang, Jingzhou Liu, Ming Zhang, and Qiaozhu Mei. Visualizing large-scale and high-dimensional data. In Proceedings of the 25th International Conference on World Wide Web, pages 287–297. International World Wide Web Conferences Steering Committee, 2016

  55. [55]

    Joshua B. Tenenbaum. Mapping a manifold of perceptual observations. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, pages 682–688. MIT Press, 1998

  56. [56]

    Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000

  57. [57]

    Dmitry Ulyanov. Multicore-TSNE. https://github.com/DmitryUlyanov/Multicore-TSNE, 2016

  58. [58]

    Laurens van der Maaten. Accelerating t-SNE using tree-based algorithms. Journal of Machine Learning Research, 15(1):3221–3245, 2014

  59. [59]

    Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008

  60. [60]

    Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008

  61. [61]

    John Williamson. What do numbers look like? https://johnhw.github.io/umap_primes/index.md.html, 2018

  62. [62]

    Duoduo Wu, Joe Yeong, Grace Tan, Marion Chevrier, Josh Loh, Tony Lim, and Jinmiao Chen. Comparison between UMAP and t-SNE for multiplex-immunofluorescence derived single-cell data from tissue sections. bioRxiv, page 549659, 2019

  63. [63]

    Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017

  64. [64]

    Liu Yang and Rong Jin. Distance metric learning: A comprehensive survey. Michigan State University, 2(2):4, 2006

  65. [65]

    Lotfi A Zadeh. Fuzzy sets. Information and Control, 8(3):338–353, 1965