pith. machine review for the scientific record.

arxiv: 2605.01817 · v1 · submitted 2026-05-03 · 💻 cs.LG

Recognition: unknown

Skipping the Zeros in Diffusion Models for Sparse Data Generation

Andriy Balinskyy, Carl Herrmann, Gabriel Vicente Rodrigues, Jean Radig, Marius Kloft, Mayank Nagda, Phil Sidney Ostheimer, Sophie Fellenz, Stephan Mandt

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 14:53 UTC · model grok-4.3

classification 💻 cs.LG
keywords diffusion models · sparse data generation · sparsity preservation · generative modeling · computational efficiency · physics simulation · biological data

The pith

Diffusion models can generate sparse data by modeling only non-zero values while handling zero locations separately.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Standard diffusion models treat all entries equally and therefore erase the exact zero patterns that mark deliberate absences in sparse continuous data. They also waste computation on mostly empty positions. Sparsity-Exploiting Diffusion separates the problem: it trains and samples exclusively on the non-zero values and models the sparsity patterns on their own. This yields computational savings while maintaining or improving sample quality. The change matters wherever data naturally contains many exact zeros, such as particle counts in physics or gene expression levels in biology.
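To make the separation concrete, here is a minimal sketch of the decomposition the summary describes; it is our illustration, not the authors' code, and helper names like split_sparse are hypothetical. A sparse sample is split into a binary mask of non-zero locations and a compact vector of the values at those locations; generated values are scattered back only into the masked positions, so every other entry stays an exact zero.

  import numpy as np

  def split_sparse(x):
      # Binary sparsity pattern: True where an entry is a deliberate non-zero.
      mask = x != 0
      # Compact vector of the non-zero values -- the only part diffusion sees.
      values = x[mask]
      return mask, values

  def merge_sparse(mask, values):
      # Scatter values back into the non-zero positions; every other
      # entry stays an exact zero by construction.
      x = np.zeros(mask.shape, dtype=values.dtype)
      x[mask] = values
      return x

  x = np.array([0.0, 1.3, 0.0, 0.0, 2.7, 0.0])
  mask, values = split_sparse(x)
  assert np.allclose(merge_sparse(mask, values), x)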

Core claim

The paper establishes that by skipping zeros during both training and inference, and modeling only the non-zero values while preserving sparsity patterns independently, Sparsity-Exploiting Diffusion achieves lower computational cost without loss of generation quality. On physics and biology benchmarks it matches or exceeds conventional diffusion models and specialized baselines; vision experiments illustrate how dense models blur sparsity and how the new separation avoids that failure.

What carries the argument

Sparsity-Exploiting Diffusion (SED), the mechanism that restricts the diffusion process to non-zero entries and treats sparsity pattern modeling as a separate step.
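What that restriction could look like in a training step, as a hedged per-sample sketch in the standard DDPM noise-prediction setup; denoiser and the alpha_bar schedule are placeholders rather than the paper's implementation, and batching over variable-length non-zero sets is omitted.

  import torch

  def sed_training_step(denoiser, x, t, alpha_bar):
      # Keep only the non-zero entries; the zeros never enter the forward
      # process, the network, or the loss -- that is the entire trick.
      values = x[x != 0]                         # shape: (num_nonzero,)
      noise = torch.randn_like(values)
      a = alpha_bar[t]                           # cumulative schedule at step t
      noisy = a.sqrt() * values + (1.0 - a).sqrt() * noise
      pred = denoiser(noisy, t)                  # predict the injected noise
      return torch.mean((pred - noise) ** 2)

  # Toy usage with a placeholder denoiser that ignores the timestep.
  alpha_bar = torch.linspace(0.999, 0.01, 1000)
  x = torch.tensor([0.0, 1.3, 0.0, 2.7, 0.0, 0.0])
  loss = sed_training_step(lambda v, t: torch.zeros_like(v), x, 500, alpha_bar)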

Load-bearing premise

The locations of zeros can be handled independently from the values in the non-zero positions without losing essential distributional information.
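Stated as a factorization, in our notation rather than the paper's: write $m$ for the binary mask of non-zero positions and $v$ for the vector of values at those positions. The joint density decomposes exactly as

  $p(x) = p(m)\,p(v \mid m)$

so the split itself loses nothing; information is only at risk when the second factor is approximated without the mask, $p(v \mid m) \approx p_\theta(v)$. That approximation is precisely what the falsification test below probes.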

What would settle it

If SED produced lower-quality samples or less faithful sparsity patterns than a standard diffusion model on a dataset where zero positions are strongly correlated with the non-zero values, the separation approach would be shown to fail.
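One concrete way to build such a dataset, offered as a suggestion rather than an experiment from the paper: threshold a dense field, so that the zero positions are by definition the positions whose values fell below a cutoff, making $p(v \mid m) \neq p(v)$ by construction.

  import numpy as np

  rng = np.random.default_rng(0)

  def thresholded_field(n_samples, dim, cutoff=1.0):
      # Zeros are *defined* by the values themselves, so the sparsity
      # pattern and the non-zero magnitudes are maximally coupled --
      # the worst case for an independent mask/value split.
      z = rng.normal(size=(n_samples, dim))
      return np.where(z > cutoff, z, 0.0)

  X = thresholded_field(1000, 64)
  # Every surviving value exceeds the cutoff: a model that draws values
  # independently of the mask will emit sub-cutoff values that never
  # occur in the real data.
  assert X[X != 0].min() > 1.0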

Figures

Figures reproduced from arXiv: 2605.01817 by Andriy Balinskyy, Carl Herrmann, Gabriel Vicente Rodrigues, Jean Radig, Marius Kloft, Mayank Nagda, Phil Sidney Ostheimer, Sophie Fellenz, Stephan Mandt.

Figure 1. Sparsity preservation on MNIST. While dense models (DDPM, LDM) fail to preserve exact zeros and introduce spurious non-zero entries, the proposed Sparsity-Exploiting Diffusion (SED) model preserves sparsity patterns closely aligned with the ground truth.
Figure 3. DDPM on sparse MNIST images: rate–distortion curves show the allocation of less rate to zero dimensions, yet the denoising network processes all dimensions in training and inference, incurring overhead. The proposed SED operates only on non-zero values, preserving sparsity and avoiding unnecessary compute.
Figure 4. The proposed SED processes only non-zero values for efficient sparse data generation. Overview of SED applied to sparse calorimeter images, where white pixels represent zero energy deposits. The sparsity-aware encoder qϕ extracts dimension-value pairs from non-zero input elements, averages the Transformer output to produce a fixed-size dense latent representation z, and performs diffusion in this dense space.
Figure 5. Histograms of per-sample sparsity, displaying sparsity levels (20 bins) with mean values indicated by dashed lines. SED achieves accurate sparsity preservation, matching real data sparsity. Sparsity-unaware methods (DDPM, LDM, scDiffusion) systematically underestimate sparsity. On calorimeter images, SED performance is comparable to the sparsity-aware SARM.
Figure 6. SED generates physically realistic sparse background images with proper energy clustering. Comparison of calorimeter images from the Muon Background dataset. Columns show samples from: (1) Dataset, (2) SED (proposed method), (3) DDPM, (4) DDPM with post-hoc thresholding (DDPM-T), (5) LDM, (6) LDM with post-hoc thresholding (LDM-T), and (7) domain-specific SARM. Pixel intensities represent energy deposits.
Figure 7. SED's greedy autoregressive dimension generation produces correct orderings in the vast majority of cases. Rare failures occur when dimensions are generated out of order, which can lead to unrealistic samples, as illustrated here on MNIST.
Figure 8. SED generates physically realistic sparse calorimeter images with proper energy clustering. Comparison of generated signal calorimeter images from the muon isolation dataset. Columns show: (1) real training data, (2) SED (proposed method), (3) DDPM, (4) DDPM with post-hoc thresholding (DDPM-T), (5) LDM, (6) LDM with post-hoc thresholding (LDM-T), and (7) SARM. Pixel intensities represent energy deposits in GeV.
Figure 9. SED keeps memory usage nearly constant for high-dimensional (1K–27K dimensions) sparse data with a fixed number of active genes (1K). Unlike DDPM and LDM, whose costs grow with total dimensionality, SED processes only the non-zero gene expression values, maintaining efficiency regardless of input size.
Figure 10. Sparsity levels for Human Lung Pulmonary Fibrosis with varying ground-truth sparsity (1K–27K dimensions; 1K active genes fixed). The plot compares DDPM, LDM, and SED, showing that SED is the only model to accurately reflect sparsity even beyond 99%.
Figure 11. Shown are, from top to bottom, in the first column: Fashion-MNIST images sampled from the dataset, DDPM sampled images, thresholded DDPM sampled images (DDPM-T), LDM sampled images, thresholded LDM sampled images (LDM-T), and SED sampled images. The second column contains the respective sparsity information. Despite highly visually similar images, DDPM and LDM fail to reflect the sparsity, whereas the proposed SED preserves it.
Figure 12. Shown are, from top to bottom, in the first column: MNIST images sampled from the dataset, DDPM sampled images, thresholded DDPM sampled images (DDPM-T), LDM sampled images, thresholded LDM sampled images (LDM-T), and SED sampled images. The second column contains the respective sparsity information. Despite highly visually similar images, DDPM and LDM fail to reflect the sparsity, whereas the proposed SED preserves it.
Original abstract

Diffusion models (DMs) excel on dense continuous data, but are not designed for sparse continuous data. They do not model exact zeros that represent the deliberate absence of a signal. As a result, they erase sparsity patterns and perform unnecessary computation on mostly zero entries. With Sparsity-Exploiting Diffusion (SED), we model only non-zero values, preserving sparsity. SED delivers computational savings while maintaining or improving generation quality by skipping zeros during training and inference. Across physics and biology benchmarks, SED matches or surpasses conventional DMs and domain-specific baselines, while vision experiments provide intuitive insights into the limitations of dense DMs and the benefits of SED.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces Sparsity-Exploiting Diffusion (SED), a modification to diffusion models for sparse continuous data. Standard DMs do not handle exact zeros (representing deliberate signal absence) and waste computation on zero entries while erasing sparsity patterns. SED models only non-zero values, skipping zeros during training and inference to preserve sparsity, deliver computational savings, and match or surpass conventional DMs and domain-specific baselines on physics, biology, and vision benchmarks.

Significance. If the results hold under scrutiny, SED addresses a practical limitation of dense diffusion models on sparse data common in physics simulations and biological signals, potentially enabling more efficient generation while maintaining distributional fidelity. The approach could be impactful for applications where sparsity is structurally important.

major comments (2)
  1. [Abstract / Method] The core modeling choice separates the sparsity pattern (zero locations) from non-zero magnitudes and treats them independently. This assumption is load-bearing for the claim of preserving the joint distribution and correct sparsity statistics, yet the manuscript provides no validation or discussion of cases where zero positions correlate with value ranges (e.g., thresholded fields).
  2. [Abstract / Experiments] The abstract asserts performance parity or gains across benchmarks, but the provided description contains no implementation details, error bars, ablation results on the mask/value separation, or quantitative comparison of sparsity statistics in generated samples. These omissions make it impossible to evaluate whether the reported improvements are robust or artifactual.
minor comments (1)
  1. [Abstract] Clarify in the abstract or introduction how the sparsity mask is generated or modeled at inference time, as this is central to the claimed computational savings.
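On the mask-generation question, a hedged two-stage sketch of what such an inference pipeline could look like. The samplers here are trivial placeholders (a Bernoulli mask and unit-Gaussian values); per Figure 7, the paper's own mask mechanism is autoregressive.

  import torch

  def sample_sparse(mask_sampler, value_sampler, dim):
      # Stage 1: draw a sparsity pattern (which positions are non-zero).
      mask = mask_sampler(dim)                 # bool tensor, shape (dim,)
      # Stage 2: generate values only for the active positions, e.g. by
      # running the reverse diffusion on a vector of that length.
      values = value_sampler(int(mask.sum()))
      x = torch.zeros(dim)
      x[mask] = values
      return x

  # Trivial placeholder samplers: a Bernoulli mask and unit-Gaussian values.
  x = sample_sparse(lambda d: torch.rand(d) < 0.1,
                    lambda k: torch.randn(k),
                    dim=64)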

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment point by point below, indicating where revisions have been made to strengthen the manuscript.

Point-by-point responses
  1. Referee: [Abstract / Method] The core modeling choice separates the sparsity pattern (zero locations) from non-zero magnitudes and treats them independently. This assumption is load-bearing for the claim of preserving the joint distribution and correct sparsity statistics, yet the manuscript provides no validation or discussion of cases where zero positions correlate with value ranges (e.g., thresholded fields).

    Authors: We acknowledge that SED deliberately factors the sparsity mask and non-zero magnitudes as separate components to enable skipping zeros. This design choice is motivated by domains where sparsity patterns arise from structural or physical rules that are largely independent of magnitude values. However, the referee correctly notes that the manuscript contains no explicit validation or discussion of scenarios in which zero locations are correlated with value ranges, such as thresholded fields. We have added a dedicated paragraph in the Discussion section that states this modeling assumption and its scope of applicability, and outlines a possible extension using a joint mask-value model for strongly correlated cases. revision: yes

  2. Referee: [Abstract / Experiments] The abstract asserts performance parity or gains across benchmarks, but the provided description contains no implementation details, error bars, ablation results on the mask/value separation, or quantitative comparison of sparsity statistics in generated samples. These omissions make it impossible to evaluate whether the reported improvements are robust or artifactual.

    Authors: The referee is right that the abstract itself omits these elements due to length limits. The full manuscript already reports implementation details in Section 3, error bars from repeated runs in Tables 1–3, and an ablation on the mask/value separation in Section 4.3. To directly address the concern about sparsity statistics, we have added a new quantitative analysis (new Table 4 and Figure 5) that compares zero ratios, spatial distributions of non-zero entries, and non-zero value histograms between real and generated samples on all benchmarks. These additions allow readers to verify that sparsity patterns are preserved and that performance gains are not artifactual. revision: yes
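The per-sample sparsity comparison the authors point to (as in Figure 5) is cheap to reproduce. A sketch, with random arrays standing in for real and generated batches:

  import numpy as np

  def per_sample_sparsity(batch):
      # Fraction of exact zeros in each sample, as in the Figure 5 histograms.
      return np.mean(batch == 0, axis=1)

  # Random stand-ins: ~80% sparse "real" data, ~70% sparse "generated" data.
  rng = np.random.default_rng(0)
  real = np.where(rng.random((512, 784)) < 0.8, 0.0, rng.random((512, 784)))
  gen = np.where(rng.random((512, 784)) < 0.7, 0.0, rng.random((512, 784)))

  # A generator that blurs zeros shows up as a shifted sparsity histogram.
  print("real mean sparsity:", per_sample_sparsity(real).mean())
  print("gen  mean sparsity:", per_sample_sparsity(gen).mean())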

Circularity Check

0 steps flagged

No significant circularity detected

Full rationale

The paper introduces Sparsity-Exploiting Diffusion (SED) as a direct algorithmic modification to standard diffusion models, skipping zero entries during training and inference while modeling only non-zero values. No derivation step reduces a claimed prediction to a fitted parameter by construction, invokes a self-citation as a uniqueness theorem, or renames an existing result; the central claims rest on explicit changes to the forward/reverse processes and are validated empirically on external benchmarks rather than internally forced. The separation of sparsity mask from value magnitudes is presented as an explicit modeling assumption, not derived from prior equations within the paper.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

Based solely on the abstract; beyond standard diffusion-model assumptions, no explicit free parameters are detailed, and one implicit axiom and one invented entity are identified.

axioms (1)
  • domain assumption: Diffusion models can be adapted by selectively processing non-zero entries without altering the underlying noise schedule or score-matching objective.
    Implicit in the claim that skipping zeros preserves generation quality.
invented entities (1)
  • Sparsity-Exploiting Diffusion (SED) · no independent evidence
    purpose: A modified diffusion process that ignores zero entries to exploit sparsity.
    New method name and approach introduced to address the stated limitation.

pith-pipeline@v0.9.0 · 5427 in / 1201 out tokens · 35703 ms · 2026-05-10T14:53:16.044252+00:00 · methodology

discussion (0)

