Pith · machine review for the scientific record

arXiv: 2605.08566 · v1 · submitted 2026-05-08 · 💻 cs.CV · cs.LG · q-bio.QM

Recognition: 2 theorem links


MicroDiffuse3D: A Foundation Model for 3D Microscopy Imaging Restoration

Brian Wong, Dan Fu, Erin Dunnington, Hanwen Xu, King Wai Chiu, Sheng Wang, Tangqi Fang, Yongkang Li

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:15 UTC · model grok-4.3

classification 💻 cs.CV · cs.LG · q-bio.QM
keywords 3D microscopy · image restoration · foundation model · super-resolution · denoising · chemical imaging · volumetric reconstruction

The pith

A pretrained foundation model restores high-quality 3D structure from sparse and degraded microscopy measurements.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents MicroDiffuse3D as a pretrained foundation model that recovers clear volumetric images from low-resolution, noisy, or sparsely sampled inputs in chemical microscopy. Chemical imaging supplies label-free biochemical detail but is constrained by slow 3D acquisition times. The model enables higher-throughput scanning by learning restoration across multiple degradation types, including 16-fold sparse super-resolution, joint degradation in resolution and noise, and low-SNR denoising. In the sparse super-resolution case the restored volumes show clearer continuity across depth and fewer artifacts, improving segmentation quality by 10.58 percent and line-profile concordance by 15.59 percent.

Core claim

MicroDiffuse3D is a pretrained foundation model for 3D microscopy image restoration that recovers high-quality volumetric structure from degraded low-resolution measurements acquired at substantially higher throughput. Evaluated on three settings—16-fold sparse super-resolution, joint resolution-plus-noise degradation, and low-SNR denoising—the model outperforms strong baselines. In the sparse super-resolution regime it yields clearer continuity across depth with fewer artifacts.
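The abstract does not specify how the 16-fold volumetric sparsity is realized. As one hedged illustration only (the axial-subsampling pattern, array shapes, and names below are assumptions for exposition, not the paper's acquisition protocol), a degraded input and a naive nearest-neighbor baseline can be sketched in NumPy:

```python
import numpy as np

# Hypothetical ground-truth volume (Z, Y, X); values in [0, 1].
rng = np.random.default_rng(0)
volume = rng.random((64, 128, 128))

# One possible 16-fold sparsity pattern: keep every 16th axial slice.
sparse = volume[::16]                        # shape (4, 128, 128)

# Naive restoration baseline: nearest-neighbor repeat along z.
# A learned model like MicroDiffuse3D would replace this step.
restored_nn = np.repeat(sparse, 16, axis=0)  # shape (64, 128, 128)

assert restored_nn.shape == volume.shape
print(volume.size // sparse.size)            # 16-fold fewer measurements
```

The point of the sketch is only the bookkeeping: a 16× reduction in acquired voxels is what the restoration model must compensate for along the depth axis.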

What carries the argument

MicroDiffuse3D, a pretrained foundation model that learns to map degraded 3D inputs onto restored high-quality volumetric outputs.

If this is right

  • Pretrained 3D restoration becomes a broadly applicable strategy for overcoming throughput and SNR limits in volumetric chemical imaging.
  • High-resolution analysis becomes feasible at scales and speeds that were previously difficult to achieve.
  • Downstream tasks such as segmentation gain from the improved depth continuity and reduced artifacts.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same pretraining strategy could be applied to other 3D modalities that face similar acquisition-speed versus resolution trade-offs.
  • Lower-resolution hardware combined with this restoration step could reduce equipment costs while maintaining usable output quality.
  • Faster low-resolution scans followed by model restoration might enable real-time or live-sample 3D workflows.

Load-bearing premise

The pretrained foundation model generalizes effectively to the three specific 3D degradation regimes in chemical microscopy without overfitting to the training distribution or requiring heavy task-specific adaptation.

What would settle it

Testing the model on a fourth unseen degradation type or on data acquired from an independent microscope system and checking whether performance still exceeds baselines would falsify the claim of broad applicability.

Figures

Figures reproduced from arXiv: 2605.08566 by Brian Wong, Dan Fu, Erin Dunnington, Hanwen Xu, King Wai Chiu, Sheng Wang, Tangqi Fang, Yongkang Li.

Figure 1: Overview of MicroDiffuse3D. a, Conditional diffusion architecture. MicroDiffuse3D takes anisotropic volumetric data as condition and processes them via a deep feature encoder. Conditioning features are then injected into the latent diffusion process for high-fidelity reconstruction. b, Large-scale pretraining strategy. The model is pretrained on 2.38 million unpaired SRS-derived slices using two complement… view at source ↗
Figure 2: 3D super-resolution across microscopy modalities. a, b, Quantitative evaluation of super-resolution performance. Box plots detail PSNR, SSIM, MS-SSIM and LPIPS distributions across independent volumetric samples from SIM and SRS datasets. Our 3D diffusion model consistently outperforms baselines. c, Qualitative assessment of axial structural fidelity. Representative volumetric cross-sections demonstrate th… view at source ↗
Figure 3: 3D denoising. a, b, Quantitative evaluation of denoising performance. Box plots detail PSNR, SSIM, MS-SSIM and LPIPS distributions across independent volumetric samples from the SRS dataset. Our 3D diffusion model consistently outperforms 2D-based baselines. c, Qualitative assessment of recovery and axial structural fidelity. Representative cross-sections of baselines (represented by 3DRCAN, Row 2) and our mod… view at source ↗
Figure 4: (no caption extracted). view at source ↗
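The figure captions quote PSNR, SSIM, MS-SSIM and LPIPS distributions. PSNR at least has a closed form; a minimal NumPy sketch (the volumes, names, and data range below are hypothetical, not the paper's evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical volumes
    return 10.0 * np.log10(data_range ** 2 / mse)

# Hypothetical example: a restored volume offset from ground truth by 0.1.
gt = np.zeros((8, 32, 32))
pred = gt + 0.1
print(round(psnr(gt, pred), 2))  # 20.0 (MSE = 0.01, data range = 1)
```

SSIM, MS-SSIM and LPIPS compare local structure or learned features rather than raw pixel error, so they need dedicated implementations (e.g. `skimage.metrics.structural_similarity` and the `lpips` package) rather than a one-liner like the above.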
read the original abstract

Chemical imaging enables label-free visualization of cells, tissues and living systems while providing direct biochemical information that is difficult to obtain with conventional fluorescence microscopy. Despite its promise in applications ranging from intraoperative diagnosis to drug-response analysis, its broader use remains limited by slow data acquisition, particularly for three-dimensional imaging. Here we present MicroDiffuse3D, a pretrained foundation model for 3D microscopy image restoration that recovers high-quality volumetric structure from degraded low-resolution measurements acquired at substantially higher throughput. We evaluated MicroDiffuse3D across three challenging restoration settings, including 3D super-resolution under 16-fold volumetric sparsity, joint degradation in resolution and noise, and 3D denoising in the low signal-to-noise ratio (SNR) regime, where the model delivered clear gains over strong baselines. Under the sparse 3D super-resolution setting, MicroDiffuse3D produced clearer continuity across depth with fewer artifacts and improved segmentation quality by 10.58% and line-profile concordance by 15.59%. Together, our results establish pretrained 3D restoration as a broadly applicable strategy for overcoming the throughput and SNR limitations in volumetric chemical imaging, enabling high-resolution analysis at scales and speeds that were previously difficult to achieve.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces MicroDiffuse3D, a pretrained foundation model for 3D microscopy image restoration in chemical imaging. It claims to recover high-quality volumetric structure from degraded low-resolution measurements and reports gains over baselines across three settings: 16-fold volumetric sparse super-resolution (with clearer depth continuity, fewer artifacts, +10.58% segmentation quality, and +15.59% line-profile concordance), joint resolution+noise degradation, and low-SNR denoising. The work positions pretrained 3D restoration as a broadly applicable strategy to overcome throughput and SNR limits.

Significance. If the reported gains are shown to stem from pretraining-enabled generalization rather than task-specific factors, the work could meaningfully advance high-throughput 3D chemical microscopy for intraoperative diagnosis and drug-response studies by enabling high-resolution analysis at previously inaccessible scales and speeds.

major comments (2)
  1. [Abstract] The abstract reports quantitative gains (10.58% segmentation quality, 15.59% line-profile concordance) but supplies no information on training datasets, model architecture, baseline implementations, statistical testing, or cross-validation procedures. This information is load-bearing for the central claim that pretraining confers robustness to the three specific 3D degradation regimes.
  2. [Experiments] No ablation removing the pretraining step, pretraining corpus diversity metrics, or explicit distribution-shift tests are described. Without these, the reported improvements in depth continuity and artifact reduction cannot be confidently attributed to foundation-model generalization rather than architecture or in-distribution fine-tuning.
minor comments (1)
  1. [Abstract] The abstract would be clearer if it briefly noted the number of test volumes or the specific nature of the strong baselines used for comparison.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the major comments point by point below, agreeing where revisions are warranted to better support our claims about MicroDiffuse3D.

read point-by-point responses
  1. Referee: [Abstract] The abstract reports quantitative gains (10.58% segmentation quality, 15.59% line-profile concordance) but supplies no information on training datasets, model architecture, baseline implementations, statistical testing, or cross-validation procedures. This information is load-bearing for the central claim that pretraining confers robustness to the three specific 3D degradation regimes.

    Authors: We agree that the abstract would be strengthened by including concise references to these elements. In the revised manuscript we will expand the abstract to note the pretraining on diverse 3D chemical microscopy volumes, the 3D diffusion-based architecture, the baseline methods used, and the cross-validation with statistical testing employed. Full details remain in the Methods and Experiments sections. revision: yes

  2. Referee: [Experiments] No ablation removing the pretraining step, pretraining corpus diversity metrics, or explicit distribution-shift tests are described. Without these, the reported improvements in depth continuity and artifact reduction cannot be confidently attributed to foundation-model generalization rather than architecture or in-distribution fine-tuning.

    Authors: We acknowledge that an explicit ablation removing the pretraining step is absent from the current manuscript. We will add a discussion of this limitation in the revised Experiments section and, where computationally feasible, include a limited ablation on a data subset to help isolate the pretraining contribution. Corpus diversity metrics appear in the supplementary materials, and the three distinct degradation regimes (sparse super-resolution, joint degradation, and low-SNR denoising) function as distribution-shift tests; we will clarify this attribution and add explicit shift analysis in the revision. revision: partial

Circularity Check

0 steps flagged

No circularity: empirical performance claims with no derivations or self-referential reductions

full rationale

The paper reports empirical results from training and evaluating a pretrained 3D restoration model on three degradation regimes, with gains quantified via segmentation quality (+10.58%) and line-profile concordance (+15.59%) on held-out test settings. No equations, first-principles derivations, fitted parameters renamed as predictions, or load-bearing self-citations appear in the provided text. The central claims rest on experimental comparisons rather than any step that reduces by construction to its own inputs, satisfying the criteria for a self-contained non-circular finding.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Abstract-only review provides no explicit list of free parameters or axioms; the approach implicitly relies on standard deep-learning assumptions that large neural networks can learn useful image priors from pretraining data.

free parameters (1)
  • Neural network weights and training hyperparameters
    All model parameters are fitted during pretraining and any fine-tuning on microscopy data; exact count and values unknown from abstract.
axioms (1)
  • domain assumption Deep neural networks pretrained on large image corpora can serve as effective priors for restoring degraded 3D microscopy volumes.
    Central premise of the foundation-model strategy described in the abstract.

pith-pipeline@v0.9.0 · 5538 in / 1373 out tokens · 61877 ms · 2026-05-12T01:15:33.529050+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

44 extracted references · 44 canonical work pages · 1 internal anchor

  1. [2]

Vibrational spectroscopic imaging of living systems: An emerging platform for biology and medicine

Ji-Xin Cheng and X. Sunney Xie. “Vibrational spectroscopic imaging of living systems: An emerging platform for biology and medicine”. In: Science 350.6264 (2015), aaa8870. doi: 10.1126/science.aaa8870. url: http://science.sciencemag.org/content/sci/350/6264/aaa8870.full.pdf

  2. [3]

    Bond-selective imaging by optically sensing the mid-infrared photothermal effect

Yeran Bai, Jiaze Yin, and Ji-Xin Cheng. “Bond-selective imaging by optically sensing the mid-infrared photothermal effect”. In: Science Advances 7.20 (2021), eabg1559. doi: 10.1126/sciadv.abg1559. url: https://www.science.org/doi/abs/10.1126/sciadv.abg1559

  3. [4]

    A 20-year journey on the invention of vibrational photothermal microscopy

J. X. Cheng. “A 20-year journey on the invention of vibrational photothermal microscopy”. In: Nat Methods 22.5 (2025).

  4. [5]

Rapid intraoperative histology of unprocessed surgical specimens via fibre-laser-based stimulated Raman scattering microscopy

Daniel A. Orringer et al. “Rapid intraoperative histology of unprocessed surgical specimens via fibre-laser-based stimulated Raman scattering microscopy”. In: Nature Biomedical Engineering 1.2 (2017), p. 0027. issn: 2157-846X. doi: 10.1038/s41551-016-0027

  5. [6]

Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks

    Todd C. Hollon et al. “Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks”. In:Nature Medicine26.1 (2020), pp. 52–58.issn: 1546-170X.doi: 10.1038/s41591- 019- 0715- 9.url:https://doi.org/10.1038/s41591- 019- 0715- 9%20https: //escholarship.org/content/qt2bx2h425/qt2bx2h425.pdf?t=qeu5ld

  6. [8]

    Optical imaging of metabolic dynamics in animals

    Lingyan Shi et al. “Optical imaging of metabolic dynamics in animals”. In:Nature Communications 9.1 (2018), p. 2995.issn: 2041-1723.doi:10.1038/s41467-018-05401-3.url:https://doi.org/ 10.1038/s41467-018-05401-3

  7. [9]

    Spectral tracing of deuterium for imaging glucose metabolism

    Luyuan Zhang et al. “Spectral tracing of deuterium for imaging glucose metabolism”. In:Nature Biomedical Engineering3.5 (2019), pp. 402–413.issn: 2157-846X.doi:10.1038/s41551-019-0393-4. url:https://doi.org/10.1038/s41551-019-0393-4

  8. [10]

Facilitated Transport of EGFR Inhibitors Plays an Important Role in Their Cellular Uptake

B. S. Wong et al. “Facilitated Transport of EGFR Inhibitors Plays an Important Role in Their Cellular Uptake”. In: Analytical Chemistry 96.4 (2024), pp. 1547–1555. issn: 0003-2700. doi: 10.1021/acs.analchem.3c04242.

  9. [11]

    Assessing drug uptake and response differences in 2D and 3D cellular environments using stimulated Raman scattering microscopy

    Fiona Xi Xu et al. “Assessing drug uptake and response differences in 2D and 3D cellular environments using stimulated Raman scattering microscopy”. In:bioRxiv(2024), p. 2024.04.22.590622.doi:10. 1101/2024.04.22.590622.url:https://www.biorxiv.org/content/biorxiv/early/2024/04/26/ 2024.04.22.590622.full.pdf

  10. [12]

    Quantitative Stimulated Raman Scattering Microscopy: Promises and Pitfalls

    Bryce Manifold and Dan Fu. “Quantitative Stimulated Raman Scattering Microscopy: Promises and Pitfalls”. In:Annual Reviews of Analytical Chemistry15.1 (2022), pp. 269–289.doi:10 . 1146 / annurev - anchem - 061020 - 015110.url:https : / / www . annualreviews . org / doi / abs / 10 . 1146 / annurev-anchem-061020-015110

  11. [13]

    Review of bio-optical imaging systems with a high space-bandwidth product

    Jongchan Park et al. “Review of bio-optical imaging systems with a high space-bandwidth product”. In:Advanced Photonics3.4 (2021), p. 044001.doi:10 . 1117 / 1 . AP . 3 . 4 . 044001.url:https : //doi.org/10.1117/1.AP.3.4.044001

  12. [14]

Absolute signal of stimulated Raman scattering microscopy: A quantum electrodynamics treatment

    Wei Min and Xin Gao. “Absolute signal of stimulated Raman scattering microscopy: A quantum electrodynamics treatment”. In:Science Advances10.50 (2024), eadm8424.doi:10.1126/sciadv. adm8424. eprint:https : / / www . science . org / doi / pdf / 10 . 1126 / sciadv . adm8424.url:https : //www.science.org/doi/abs/10.1126/sciadv.adm8424

  13. [15]

    Content-aware image restoration: pushing the limits of fluorescence microscopy

    Martin Weigert, Uwe Schmidt, Tobias Boothe, et al. “Content-aware image restoration: pushing the limits of fluorescence microscopy”. In:Nature Methods15 (2018), pp. 1090–1097.doi:10 . 1038 / s41592-018-0216-7

  14. [16]

    Deep learning enables cross-modality super-resolution in fluorescence microscopy

    Hongda Wang et al. “Deep learning enables cross-modality super-resolution in fluorescence microscopy”. In:Nature Methods16 (2019), pp. 103–110.doi:10.1038/s41592-018-0239-0

  15. [17]

    Evaluation and development of deep neural networks for image super-resolution in optical microscopy

    Chang Qiao et al. “Evaluation and development of deep neural networks for image super-resolution in optical microscopy”. In:Nature Methods18.2 (2021), pp. 194–202.doi:10.1038/s41592-020-01048- 5

  16. [18]

    Zero-shot learning enables instant denoising and super- resolution in optical fluorescence microscopy

    Chang Qiao, Yunmin Zeng, Qian Meng, et al. “Zero-shot learning enables instant denoising and super- resolution in optical fluorescence microscopy”. In:Nature Communications15 (2024), p. 4180.doi: 10.1038/s41467-024-48575-9

  17. [19]

    Incorporating the image formation process into deep learning improves network performance

    Yijun Li, Yijun Su, Min Guo, et al. “Incorporating the image formation process into deep learning improves network performance”. In:Nature Methods19 (2022), pp. 1427–1437.doi:10.1038/s41592- 022-01652-7

  18. [20]

Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes

Jiji Chen et al. “Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes”. In: Nature Methods 18.6 (2021), pp. 678–687

  19. [21]

    Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior

    Kyungryun Lee and Won-Ki Jeong. “Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior”. In:Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. arXiv:2408.08616. 2024

  20. [22]

Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy

    Hyoungjun Park et al. “Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy”. In:Nature Communications13 (2022), p. 3297.doi:10.1038/s41467-022- 30949-6

  21. [23]

    Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy

    Kai Ning, Bin Lu, Xin Wang, et al. “Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy”. In:Light: Science & Applications12 (2023), p. 204. doi:10.1038/s41377-023-01230-2

  22. [24]

InterpolAI: deep learning-based optical flow interpolation and restoration of biomedical images for improved 3D tissue mapping

Sushama Joshi, Amanda Forjaz, K. S. Han, et al. “InterpolAI: deep learning-based optical flow interpolation and restoration of biomedical images for improved 3D tissue mapping”. In: Nature Methods 22 (2025), pp. 1556–1567. doi: 10.1038/s41592-025-02712-4

  23. [25]

    Image Quality Assessment: From Error Visibility to Structural Similarity

    Zhou Wang et al. “Image Quality Assessment: From Error Visibility to Structural Similarity”. In: IEEE Transactions on Image Processing13.4 (2004), pp. 600–612

  24. [26]

    Multi-Scale Structural Similarity for Image Quality Assessment

Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. “Multi-Scale Structural Similarity for Image Quality Assessment”. In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers. Vol. 2. 2003, pp. 1398–1402.

  25. [27]

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    Richard Zhang et al. “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018, pp. 586– 595

  26. [28]

    Scalable Diffusion Models with Transformers

    William Peebles and Saining Xie. “Scalable Diffusion Models with Transformers”. In:Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023, pp. 4195–4205

  27. [29]

Scalable Diffusion Models with Transformers

    William Peebles and Saining Xie. “Scalable Diffusion Models with Transformers”. In:Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023, pp. 4195–4205.doi: 10.1109/ICCV51070.2023.00387

  28. [30]

A neural network for long-term super-resolution imaging of live cells with reliable confidence quantification

    Chang Qiao et al. “A neural network for long-term super-resolution imaging of live cells with reliable confidence quantification”. In:Nature Biotechnology44.1 (2026), pp. 110–119.doi:10.1038/s41587- 025-02553-8

  29. [31]

    Cellpose: A Generalist Algorithm for Cellular Segmentation

    Carsen Stringer et al. “Cellpose: A Generalist Algorithm for Cellular Segmentation”. In:Nature Meth- ods18.1 (2021), pp. 100–106

  30. [32]

    Cellpose-SAM: superhuman generalization for cellular segmentation

    Marius Pachitariu, Michael Rariden, and Carsen Stringer. “Cellpose-SAM: superhuman generalization for cellular segmentation”. In:BioRxiv(2025), pp. 2025–04

  31. [33]

    Panoptic Segmentation

    Alexander Kirillov et al. “Panoptic Segmentation”. In:Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019, pp. 9404–9413

  32. [34]

V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation

Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation”. In: 2016 Fourth International Conference on 3D Vision (3DV). IEEE. 2016, pp. 79–87

  33. [35]

    A Concordance Correlation Coefficient to Evaluate Reproducibility

Lawrence I-Kuei Lin. “A Concordance Correlation Coefficient to Evaluate Reproducibility”. In: Biometrics 45.1 (1989), pp. 255–268

  34. [36]

    Spectroscopic stimulated Raman scattering imaging of highly dynamic specimens through matrix completion

    Haonan Lin et al. “Spectroscopic stimulated Raman scattering imaging of highly dynamic specimens through matrix completion”. In:Light: Science & Applications7.5 (2018), pp. 17179–17179.issn: 2047-7538.doi:10.1038/lsa.2017.179.url:https://doi.org/10.1038/lsa.2017.179

  35. [37]

    Super-resolution SRS microscopy with A-PoD

    Hongje Jang et al. “Super-resolution SRS microscopy with A-PoD”. In:Nature Methods20.3 (2023), pp. 448–458.issn: 1548-7105.doi:10.1038/s41592-023-01779-1.url:https://doi.org/10.1038/ s41592-023-01779-1

  36. [38]

    Denoising of stimulated Raman scattering microscopy images via deep learning

    Bryce Manifold et al. “Denoising of stimulated Raman scattering microscopy images via deep learning”. In:Biomedical Optics Express10.8 (2019), pp. 3860–3874.doi:10.1364/BOE.10.003860.url:http: //www.osapublishing.org/boe/abstract.cfm?URI=boe-10-8-3860

  37. [39]

    Deep learning-driven super-resolution in Raman hyperspectral imaging: Efficient high-resolution reconstruction from low-resolution data

    Md Inzamam Ul Haque et al. “Deep learning-driven super-resolution in Raman hyperspectral imaging: Efficient high-resolution reconstruction from low-resolution data”. In:Applied Physics Letters125.20 (2024), p. 204104.issn: 0003-6951.doi:10.1063/5.0228645.url:https://doi.org/10.1063/5. 0228645

  38. [40]

    Super-resolution vibrational imaging based on photoswitchable Raman probe

    Jingwen Shou et al. “Super-resolution vibrational imaging based on photoswitchable Raman probe”. In:Science Advances9.24 (), eade9118.doi:10.1126/sciadv.ade9118.url:https://doi.org/10. 1126/sciadv.ade9118

  39. [41]

    Label-free super-resolution stimulated Raman scattering imaging of biomedical specimens

    Julien Guilbert et al. “Label-free super-resolution stimulated Raman scattering imaging of biomedical specimens”. In:Advanced Imaging1.11 (Jan. 2024), p. 011004.url:https://www.researching.cn/ articles/OJfff2f087cb065461

  40. [42]

    4Pi stimulated Raman scattering for label-free super-resolution chemical imaging

    Jonathan I. Kim et al. “4Pi stimulated Raman scattering for label-free super-resolution chemical imaging”. In:Science Advances12.1 (), eaec0523.doi:10 . 1126 / sciadv . aec0523.url:https : //doi.org/10.1126/sciadv.aec0523

  41. [43]

    Mean Flows for One-step Generative Modeling

    Zhengyang Geng et al. “Mean Flows for One-step Generative Modeling”. In:The Thirty-ninth Annual Conference on Neural Information Processing Systems. 2025.url:https://openreview.net/forum? id=uWj4s7rMnR

  42. [44]

    Generative Modeling via Drifting

Mingyang Deng et al. “Generative Modeling via Drifting”. In: arXiv preprint arXiv:2602.04770 (2026).

  43. [45]

    Learning a Deep Convolutional Network for Image Super-Resolution

    Chao Dong et al. “Learning a Deep Convolutional Network for Image Super-Resolution”. In:European Conference on Computer Vision (ECCV). 2014, pp. 184–199

  44. [46]

    SwinIR: Image Restoration Using Swin Transformer

Jingyun Liang et al. “SwinIR: Image Restoration Using Swin Transformer”. In: IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). 2021, pp. 1833–1844.
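Reference 33 above (Lin, 1989) supplies the concordance correlation coefficient behind the line-profile concordance gain quoted in the pith. Lin's CCC has a simple closed form; a minimal NumPy sketch (the profile data is hypothetical, not the paper's):

```python
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient (Lin, 1989).

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (ddof=0) statistics as in the original paper.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical line profiles: perfect agreement gives CCC = 1.
profile = np.linspace(0.0, 1.0, 50)
print(round(lins_ccc(profile, profile), 6))  # 1.0
```

Unlike Pearson correlation, CCC also penalizes shifts in mean and scale between the restored and reference profiles, which is why it is a natural agreement metric for line profiles across depth.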