pith. machine review for the scientific record. sign in

arxiv: 2603.28498 · v2 · submitted 2026-03-30 · 📡 eess.IV · cs.AI· cs.CV

Recognition: no theorem link

MRI-to-CT synthesis using drifting models

Authors on Pith no claims yet

Pith reviewed 2026-05-14 01:26 UTC · model grok-4.3

classification 📡 eess.IV cs.AIcs.CV
keywords MRI-to-CT synthesisdrifting modelssynthetic CTpelvic imagingimage generationdiffusion modelsradiotherapy planningmedical image translation
0
0 comments X

The pith

Drifting models outperform diffusion methods in synthesizing pelvis CT from MRI with faster one-step inference.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper evaluates drifting models for generating CT images from pelvic MRI scans to enable radiation-free MR-only workflows. It benchmarks them against CNNs, VAEs, GANs, PPFM, and diffusion techniques on the Gold Atlas and SynthRAD2023 datasets. The drifting model delivers superior SSIM and PSNR scores along with lower RMSE, plus visually sharper bone structures and fewer artifacts. Its main benefit is high quality achieved in milliseconds rather than through multiple iterative steps.

Core claim

Drifting models synthesize pelvis CT images from MRI with higher structural similarity index, peak signal-to-noise ratio, and reduced root-mean-square error compared to UNet, VAE, WGAN-GP, PPFM, FastDDPM, DDIM, and DDPM across two datasets. They provide clearer cortical bone edges and better sacral and femoral head geometry with reduced artifacts at tissue boundaries. These results come from single-step inference that runs in milliseconds, offering a better accuracy-speed balance than iterative diffusion sampling.

What carries the argument

Drifting models that perform image-to-image translation in one forward pass.

If this is right

  • MR-only pelvic radiotherapy planning becomes more feasible without CT scans.
  • Patient radiation exposure decreases by avoiding separate CT acquisitions.
  • Clinical systems can generate synthetic CTs quickly enough for routine use.
  • Drifting models offer an efficient alternative to diffusion models for medical image synthesis.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach may generalize to other anatomical sites beyond the pelvis.
  • Synthetic CTs could enhance accuracy in PET attenuation correction for hybrid imaging.
  • Direct validation on dose calculation accuracy would strengthen the case for clinical adoption.

Load-bearing premise

Improvements in quantitative image metrics and visual quality on public datasets will carry over to meaningful gains in clinical tasks such as radiotherapy dose calculation.

What would settle it

If dose calculations or attenuation corrections using the synthetic CT images produce larger discrepancies with real CT-based results than those from baseline methods, the claim of practical superiority would be refuted.

Figures

Figures reproduced from arXiv: 2603.28498 by Chirstopher T. Whitlow, Ge Wang, Jeremy Hudson, Jianxu Wang, Qing Lyu.

Figure 1
Figure 1. Figure 1: Comparison of CT synthesize results from different methods on Gold Atlas Dataset. [PITH_FULL_IMAGE:figures/full_fig_p006_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Comparison of result image quality versus inference time. [PITH_FULL_IMAGE:figures/full_fig_p007_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Comparison of CT synthesize results from different methods on SynthRAD2023 Dataset. [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Pixel-vise standard deviation map based on 20 sampling results for each comparing approach. [PITH_FULL_IMAGE:figures/full_fig_p008_4.png] view at source ↗
read the original abstract

Accurate MRI-to-CT synthesis could enable MR-only pelvic workflows by providing CT-like images with bone details while avoiding additional ionizing radiation. In this work, we investigate recently proposed drifting models for synthesizing pelvis CT images from MRI and benchmark them against convolutional neural networks (UNet, VAE), a generative adversarial network (WGAN-GP), a physics-inspired probabilistic model (PPFM), and diffusion-based methods (FastDDPM, DDIM, DDPM). Experiments are performed on two complementary datasets: Gold Atlas Male Pelvis and the SynthRAD2023 pelvis subset. Image fidelity and structural consistency are evaluated with SSIM, PSNR, and RMSE, complemented by qualitative assessment of anatomically critical regions such as cortical bone and pelvic soft-tissue interfaces. Across both datasets, the proposed drifting model achieves high SSIM and PSNR and low RMSE, surpassing strong diffusion baselines and conventional CNN-, VAE-, GAN-, and PPFM-based methods. Visual inspection shows sharper cortical bone edges, improved depiction of sacral and femoral head geometry, and reduced artifacts or over-smoothing, particularly at bone-air-soft tissue boundaries. Moreover, the drifting model attains these gains with one-step inference and inference times on the order of milliseconds, yielding a more favorable accuracy-efficiency trade-off than iterative diffusion sampling while remaining competitive in image quality. These findings suggest that drifting models are a promising direction for fast, high-quality pelvic synthetic CT generation from MRI and warrant further investigation for downstream applications such as MRI-only radiotherapy planning and PET/MR attenuation correction.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript benchmarks drifting models for MRI-to-CT synthesis on pelvic images, comparing them to UNet, VAE, WGAN-GP, PPFM, FastDDPM, DDIM, and DDPM on the Gold Atlas Male Pelvis and SynthRAD2023 datasets. It reports that the drifting model attains higher SSIM and PSNR, lower RMSE, sharper bone edges, and reduced artifacts while using one-step inference with millisecond-scale times, offering a better accuracy-efficiency trade-off than iterative diffusion methods.

Significance. If the metric gains and visual improvements are reproducible, the work shows drifting models can deliver competitive image quality for synthetic CT at substantially lower inference cost than diffusion baselines. This accuracy-efficiency profile is relevant for MR-only radiotherapy planning and PET/MR attenuation correction, where fast synthesis without ionizing radiation is desirable.

major comments (3)
  1. [Methods] Methods section: No training hyperparameters, optimizer settings, dataset splits, or augmentation details are provided, preventing verification that the reported SSIM/PSNR/RMSE gains over baselines were obtained under matched conditions.
  2. [Results] Results section: Metric comparisons lack error bars, standard deviations across runs, or statistical tests (e.g., Wilcoxon signed-rank), so the claim that the drifting model 'surpasses' all listed baselines cannot be assessed for consistency.
  3. [Results] Evaluation: Reliance on qualitative visual inspection of cortical bone and pelvic geometry is not supported by quantitative edge-sharpness or contrast metrics, weakening the assertion of improved anatomical fidelity.
minor comments (2)
  1. [Abstract] Abstract: Replace vague phrases such as 'high SSIM and PSNR' with the actual numerical values reported in the tables.
  2. [Introduction] Notation: Ensure consistent use of acronyms (e.g., PPFM) on first appearance in the main text.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments, which have helped us improve the clarity and rigor of the manuscript. We address each major comment point by point below and have revised the manuscript to incorporate the suggested additions where appropriate.

read point-by-point responses
  1. Referee: [Methods] Methods section: No training hyperparameters, optimizer settings, dataset splits, or augmentation details are provided, preventing verification that the reported SSIM/PSNR/RMSE gains over baselines were obtained under matched conditions.

    Authors: We agree that these implementation details are necessary for full reproducibility and fair comparison. In the revised manuscript, we have added a new subsection titled 'Training Details' under Methods. This subsection now specifies the optimizer (Adam with learning rate 1e-4, betas 0.9 and 0.999), batch size (16), number of epochs (200 with early stopping), dataset splits (80/10/10 for training/validation/test on both Gold Atlas and SynthRAD2023), and augmentations (random horizontal/vertical flips, rotations within ±15°, and intensity normalization). All baselines were retrained under these identical settings to ensure the reported gains reflect model differences rather than training discrepancies. revision: yes

  2. Referee: [Results] Results section: Metric comparisons lack error bars, standard deviations across runs, or statistical tests (e.g., Wilcoxon signed-rank), so the claim that the drifting model 'surpasses' all listed baselines cannot be assessed for consistency.

    Authors: We acknowledge that reporting variability and statistical significance strengthens the claims. We have rerun all experiments across five independent random seeds and updated Tables 1 and 2 to report mean ± standard deviation for SSIM, PSNR, and RMSE on both datasets. We have also added a new paragraph in the Results section presenting Wilcoxon signed-rank test p-values, which confirm that the drifting model's improvements over each baseline are statistically significant (p < 0.05) across the key metrics. revision: yes

  3. Referee: [Results] Evaluation: Reliance on qualitative visual inspection of cortical bone and pelvic geometry is not supported by quantitative edge-sharpness or contrast metrics, weakening the assertion of improved anatomical fidelity.

    Authors: We agree that supplementing visual inspection with quantitative edge and contrast metrics would provide stronger support. In the revised Results section, we have added two new quantitative measures: (1) mean gradient magnitude (via Sobel filtering) computed specifically on cortical bone regions to quantify edge sharpness, and (2) contrast-to-noise ratio (CNR) between bone and adjacent soft tissue. The updated tables and text show that the drifting model achieves higher gradient magnitudes and CNR values than the baselines, corroborating the visual observations of sharper bone edges and reduced artifacts. revision: yes

Circularity Check

0 steps flagged

No significant circularity: purely empirical benchmark study

full rationale

The manuscript is an empirical benchmark study that trains and evaluates drifting models for MRI-to-CT synthesis against CNN, VAE, GAN, PPFM, and diffusion baselines on two public pelvis datasets. Performance is quantified via SSIM, PSNR, and RMSE plus qualitative visual checks; no derivation chain, self-referential equations, fitted-parameter predictions, or load-bearing self-citations appear in the reported methodology or results. All claims rest on direct experimental comparisons under stated training protocols, rendering the work self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No mathematical derivation or new entities; work rests on standard supervised image-to-image translation assumptions and the availability of paired MRI-CT training data.

pith-pipeline@v0.9.0 · 5581 in / 1090 out tokens · 42749 ms · 2026-05-14T01:26:41.451332+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

41 extracted references · 41 canonical work pages · 5 internal anchors

  1. [1]

    Recent and future directions in ct imaging.Annals of biomedical engineering, 42(2):260–268, 2014

    Norbert J Pelc. Recent and future directions in ct imaging.Annals of biomedical engineering, 42(2):260–268, 2014

  2. [2]

    Emerging clinical applications of computed tomography.Medical Devices: Evidence and Research, pages 265–278, 2015

    Carlo Liguori, Giulia Frauenfelder, Carlo Massaroni, Paola Saccomandi, Francesco Giurazza, Francesca Pitocco, Riccardo Marano, and Emiliano Schena. Emerging clinical applications of computed tomography.Medical Devices: Evidence and Research, pages 265–278, 2015

  3. [3]

    Computed tomography—an increasing source of radiation exposure.New England journal of medicine, 357(22):2277–2284, 2007

    David J Brenner and Eric J Hall. Computed tomography—an increasing source of radiation exposure.New England journal of medicine, 357(22):2277–2284, 2007

  4. [4]

    How to improve access to medical imaging in low-and middle-income countries?EClinicalMedicine, 38, 2021

    Guy Frija, Ivana Blaži´c, Donald P Frush, Monika Hierath, Michael Kawooya, Lluis Donoso-Bach, and Boris Brkljaˇci´c. How to improve access to medical imaging in low-and middle-income countries?EClinicalMedicine, 38, 2021

  5. [5]

    Comparing magnetic resonance imaging and computed tomography machine accessibility among urban and rural county hospitals.Journal of Public Health Research, 11(1):jphr–2021, 2022

    Benjamin T Burdorf. Comparing magnetic resonance imaging and computed tomography machine accessibility among urban and rural county hospitals.Journal of Public Health Research, 11(1):jphr–2021, 2022

  6. [6]

    A paired ct and mri dataset for advanced medical imaging applications.Data in Brief, 61:111768, 2025

    Zakaria Shams Siam, Md Younus Akon, Israt Jahan Munmun, Abdullah Al-Amin, Md Abdus Salam, and Ishtiak Al Mamoon. A paired ct and mri dataset for advanced medical imaging applications.Data in Brief, 61:111768, 2025

  7. [7]

    A review of substitute ct generation for mri-only radiation therapy.Radiation Oncology, 12(1):28, 2017

    Jens M Edmund and Tufve Nyholm. A review of substitute ct generation for mri-only radiation therapy.Radiation Oncology, 12(1):28, 2017

  8. [8]

    Advancements in synthetic ct generation from mri: A review of techniques, and trends in radiation therapy planning.Journal of Applied Clinical Medical Physics, 25(11):e14499, 2024

    Mohamed A Bahloul, Saima Jabeen, Sara Benoumhani, Habib Abdulmohsen Alsaleh, Zehor Belkhatir, and Areej Al-Wabil. Advancements in synthetic ct generation from mri: A review of techniques, and trends in radiation therapy planning.Journal of Applied Clinical Medical Physics, 25(11):e14499, 2024

  9. [9]

    Deep learning based synthesis of mri, ct and pet: Review and analysis.Medical image analysis, 92:103046, 2024

    Sanuwani Dayarathna, Kh Tohidul Islam, Sergio Uribe, Guang Yang, Munawar Hayat, and Zhaolin Chen. Deep learning based synthesis of mri, ct and pet: Review and analysis.Medical image analysis, 92:103046, 2024

  10. [10]

    Generative ai for synthetic data across multiple medical modalities: A systematic review of recent developments and challenges.Computers in biology and medicine, 189:109834, 2025

    Mahmoud Ibrahim, Yasmina Al Khalil, Sina Amirrajab, Chang Sun, Marcel Breeuwer, Josien Pluim, Bart Elen, Gökhan Ertaylan, and Michel Dumontier. Generative ai for synthetic data across multiple medical modalities: A systematic review of recent developments and challenges.Computers in biology and medicine, 189:109834, 2025

  11. [11]

    Evaluation of a multi-atlas ct synthesis approach for mri-only radiotherapy treatment planning

    F Guerreiro, Ninon Burgos, A Dunlop, K Wong, I Petkar, C Nutting, K Harrington, S Bhide, K Newbold, D Dearnaley, et al. Evaluation of a multi-atlas ct synthesis approach for mri-only radiotherapy treatment planning. Physica Medica, 35:7–17, 2017. 10 Running Title for Header

  12. [12]

    Mri-based treatment planning with pseudo ct generated through atlas registration.Medical physics, 41(5):051711, 2014

    Jinsoo Uh, Thomas E Merchant, Yimei Li, Xingyu Li, and Chiaho Hua. Mri-based treatment planning with pseudo ct generated through atlas registration.Medical physics, 41(5):051711, 2014

  13. [13]

    Ct synthesis in the head & neck region for pet/mr attenuation correction: an iterative multi-atlas approach.EJNMMI physics, 2(Suppl 1):A31, 2015

    Ninon Burgos, M Jorge Cardoso, Marc Modat, Shonit Punwani, David Atkinson, Simon R Arridge, Brian F Hutton, and Sébastien Ourselin. Ct synthesis in the head & neck region for pet/mr attenuation correction: an iterative multi-atlas approach.EJNMMI physics, 2(Suppl 1):A31, 2015

  14. [14]

    Attenuation correction of pet/mr imaging.Magnetic Resonance Imaging Clinics, 25(2):245–255, 2017

    Yasheng Chen and Hongyu An. Attenuation correction of pet/mr imaging.Magnetic Resonance Imaging Clinics, 25(2):245–255, 2017

  15. [15]

    Mr to ct registration of brains using image synthesis

    Snehashis Roy, Aaron Carass, Amod Jog, Jerry L Prince, and Junghoon Lee. Mr to ct registration of brains using image synthesis. InProceedings of SPIE, volume 9034, pages spie–org, 2014

  16. [16]

    Multi-atlas-based ct synthesis from conventional mri with patch-based refinement for mri-based radiotherapy planning

    Junghoon Lee, Aaron Carass, Amod Jog, Can Zhao, and Jerry L Prince. Multi-atlas-based ct synthesis from conventional mri with patch-based refinement for mri-based radiotherapy planning. InMedical Imaging 2017: Image Processing, volume 10133, pages 434–439. SPIE, 2017

  17. [17]

    Mri-to-ct synthesis with cranial suture segmentations using a variational autoencoder framework.arXiv preprint arXiv:2512.23894, 2025

    Krithika Iyer, Austin Tapp, Athelia Paulli, Gabrielle Dickerson, Syed Muhammad Anwar, Natasha Lepore, and Marius George Linguraru. Mri-to-ct synthesis with cranial suture segmentations using a variational autoencoder framework.arXiv preprint arXiv:2512.23894, 2025

  18. [18]

    Fddm: unsupervised medical image translation with a frequency-decoupled diffusion model.Machine Learning: Science and Technology, 6(2):025007, apr 2025

    Yunxiang Li, Hua-Chieh Shao, Xiaoxue Qian, and You Zhang. Fddm: unsupervised medical image translation with a frequency-decoupled diffusion model.Machine Learning: Science and Technology, 6(2):025007, apr 2025

  19. [19]

    Gans for medical image synthesis: An empirical study.Journal of Imaging, 9(3):69, 2023

    Youssef Skandarani, Pierre-Marc Jodoin, and Alain Lalande. Gans for medical image synthesis: An empirical study.Journal of Imaging, 9(3):69, 2023

  20. [20]

    Mri-only based synthetic ct generation using dense cycle consistent generative adversarial networks.Medical physics, 46(8):3565–3581, 2019

    Yang Lei, Joseph Harms, Tonghe Wang, Yingzi Liu, Hui-Kuo Shu, Ashesh B Jani, Walter J Curran, Hui Mao, Tian Liu, and Xiaofeng Yang. Mri-only based synthetic ct generation using dense cycle consistent generative adversarial networks.Medical physics, 46(8):3565–3581, 2019

  21. [21]

    Chun-Chieh Wang, Pei-Huan Wu, Gigin Lin, Yen-Ling Huang, Yu-Chun Lin, Yi-Peng Chang, and Jun-Cheng Weng. Magnetic resonance-based synthetic computed tomography using generative adversarial networks for intracranial tumor radiotherapy treatment planning.Journal of personalized medicine, 12(3):361, 2022

  22. [22]

    Synthetic ct generation for mri-guided adaptive radiotherapy in prostate cancer.Frontiers in Oncology, 12:969463, 2022

    Shu-Hui Hsu, Zhaohui Han, Jonathan E Leeman, Yue-Houng Hu, Raymond H Mak, and Atchar Sudhyadhom. Synthetic ct generation for mri-guided adaptive radiotherapy in prostate cancer.Frontiers in Oncology, 12:969463, 2022

  23. [23]

    Olga M Dona Lemus, Yi-Fang Wang, Fiona Li, Sachin Jambawalikar, David P Horowitz, Yuanguang Xu, and Cheng-Shie Wuu. Dosimetric assessment of patient dose calculation on a deep learning-based synthesized computed tomography image for adaptive radiotherapy.Journal of Applied Clinical Medical Physics, 23(7):e13595, 2022

  24. [24]

    Unsupervised mr- to-ct synthesis using structure-constrained cyclegan.IEEE transactions on medical imaging, 39(12):4249–4261, 2020

    Heran Yang, Jian Sun, Aaron Carass, Can Zhao, Junghoon Lee, Jerry L Prince, and Zongben Xu. Unsupervised mr- to-ct synthesis using structure-constrained cyclegan.IEEE transactions on medical imaging, 39(12):4249–4261, 2020

  25. [25]

    Unsupervised-learning-based method for chest mri– ct transformation using structure constrained unsupervised generative attention networks.Scientific reports, 12(1):11090, 2022

    Hidetoshi Matsuo, Mizuho Nishio, Munenobu Nogami, Feibi Zeng, Takako Kurimoto, Sandeep Kaushik, Florian Wiesinger, Atsushi K Kono, and Takamichi Murakami. Unsupervised-learning-based method for chest mri– ct transformation using structure constrained unsupervised generative attention networks.Scientific reports, 12(1):11090, 2022

  26. [26]

    Changfei Gong, Yuling Huang, Mingming Luo, Shunxiang Cao, Xiaochang Gong, Shenggou Ding, Xingxing Yuan, Wenheng Zheng, and Yun Zhang. Channel-wise attention enhanced and structural similarity constrained cyclegan for effective synthetic ct generation from head and neck mri images.Radiation Oncology, 19(1):37, 2024

  27. [27]

    Ct synthesis from mri using multi-cycle gan for head-and-neck radiation therapy.Computerized medical imaging and graphics, 91:101953, 2021

    Yanxia Liu, Anni Chen, Hongyu Shi, Sijuan Huang, Wanjia Zheng, Zhiqiang Liu, Qin Zhang, and Xin Yang. Ct synthesis from mri using multi-cycle gan for head-and-neck radiation therapy.Computerized medical imaging and graphics, 91:101953, 2021

  28. [28]

    Synthetic ct generation from mri using 3d transformer-based denoising diffusion model.Medical Physics, 51(4):2538–2548, 2024

    Shaoyan Pan, Elham Abouei, Jacob Wynne, Chih-Wei Chang, Tonghe Wang, Richard LJ Qiu, Yuheng Li, Junbo Peng, Justin Roper, Pretesh Patel, et al. Synthetic ct generation from mri using 3d transformer-based denoising diffusion model.Medical Physics, 51(4):2538–2548, 2024

  29. [29]

    Unsupervised medical image translation with adversarial diffusion models.IEEE Transactions on Medical Imaging, 42(12):3524–3539, 2023

    Muzaffer Özbey, Onat Dalmaz, Salman UH Dar, Hasan A Bedel, ¸ Saban Özturk, Alper Güngör, and Tolga Cukur. Unsupervised medical image translation with adversarial diffusion models.IEEE Transactions on Medical Imaging, 42(12):3524–3539, 2023. 11 Running Title for Header

  30. [30]

    Conversion between ct and mri images using diffusion and score-matching models.arXiv preprint arXiv:2209.12104, 2022

    Qing Lyu and Ge Wang. Conversion between ct and mri images using diffusion and score-matching models.arXiv preprint arXiv:2209.12104, 2022

  31. [31]

    Generative Modeling via Drifting

    Mingyang Deng, He Li, Tianhong Li, Yilun Du, and Kaiming He. Generative modeling via drifting.arXiv preprint arXiv:2602.04770, 2026

  32. [32]

    Muren, Hans von der Maase, Jinyi Wang, Sofie Ceberg, and Adalsteinn Gunnlaugsson

    Tufve Nyholm, Stina Svensson, Sebastian Andersson, Joakim Jonsson, Maja Sohlin, Christian Gustafsson, Elisabeth Kjellén, Ludvig P. Muren, Hans von der Maase, Jinyi Wang, Sofie Ceberg, and Adalsteinn Gunnlaugsson. Mr and ct data with multiobserver delineations of organs in the pelvic area – part of the gold atlas project.Medical Physics, 45(3):1295–1300, 2018

  33. [33]

    Adrian Thummerer, Erik van der Bijl, Arthur Galapon Jr, Joost J. C. Verhoeff, Johannes A. Langendijk, Stefan Both, Cornelis A. T. van den Berg, and Matteo Maspero. Synthrad2023 grand challenge dataset: Generating synthetic ct for radiotherapy.Medical Physics, 50(7):4664–4674, 2023

  34. [34]

    A simple framework for contrastive learning of visual representations

    Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. InInternational conference on machine learning, pages 1597–1607. PmLR, 2020

  35. [35]

    U-Net: Convolutional Networks for Biomedical Image Segmentation

    Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation.arXiv e-prints, page arXiv:1505.04597, May 2015

  36. [36]

    Auto-Encoding Variational Bayes

    Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes.arXiv e-prints, page arXiv:1312.6114, December 2013

  37. [37]

    Improved training of wasserstein gans

    Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. InAdvances in neural information processing systems, volume 30, 2017

  38. [38]

    Maltz, Mats Danielsson, Ge Wang, and Mats Persson

    Dennis Hein, Staffan Holmin, Timothy Szczykutowicz, Jonathan S. Maltz, Mats Danielsson, Ge Wang, and Mats Persson. PPFM: Image Denoising in Photon-Counting CT Using Single-Step Posterior Sampling Poisson Flow Generative Models.IEEE Transactions on Radiation and Plasma Medical Sciences, 8(7):788–799, January 2024

  39. [39]

    Fast- DDPM: Fast Denoising Diffusion Probabilistic Models for Medical Image-to-Image Generation.arXiv e-prints, page arXiv:2405.14802, May 2024

    Hongxu Jiang, Muhammad Imran, Teng Zhang, Yuyin Zhou, Muxuan Liang, Kuang Gong, and Wei Shao. Fast- DDPM: Fast Denoising Diffusion Probabilistic Models for Medical Image-to-Image Generation.arXiv e-prints, page arXiv:2405.14802, May 2024

  40. [40]

    Denoising Diffusion Implicit Models

    Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models.arXiv e-prints, page arXiv:2010.02502, October 2020

  41. [41]

    Denoising Diffusion Probabilistic Models

    Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models.arXiv e-prints, page arXiv:2006.11239, June 2020. 12