pith · machine review for the scientific record

arxiv: 2604.22894 · v1 · submitted 2026-04-24 · 📡 eess.IV · cs.CV

Recognition: unknown

Generalizable CT-Free PET Attenuation and Scatter Correction for Pediatric Patients

Huanyu Luo, Jia-Mian Wu, Jigang Yang, Jun Liu, Lingling Zheng, Qiang Gao, Shibai Yin, Siqi Li, Tai-Xiang Jiang, Xiaoya Wang

Pith reviewed 2026-05-08 09:19 UTC · model grok-4.3

classification 📡 eess.IV cs.CV
keywords CT-free PET correction · attenuation and scatter correction · pediatric PET imaging · domain generalization · deep learning · quantitative PET · wavelet multiscale decomposition · Fourier domain processing

The pith

A dual-domain network corrects pediatric PET without CT and holds accuracy across new scanners and tracers.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops GPCN to perform attenuation and scatter correction on PET images using only the PET data itself, avoiding the extra CT scan that adds radiation dose. It does so through a multi-band refinement step that breaks images into scales with wavelets and captures long-range context, paired with a Fourier-domain module that refines amplitude and phase separately. The goal is to keep the parts of the image that reflect real anatomy stable while discarding effects that change with scanner hardware or the injected tracer. Evaluation on 1085 pediatric whole-body scans from two scanners and five tracers shows the network beats standard baselines in both mixed training and completely unseen scanner-tracer pairs. Because the average CT dose in the usual protocol is 10.8 mSv, reliable CT-free correction would lower radiation burden for children who need repeated PET studies.
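The frequency-domain half of this idea, handling Fourier amplitude and phase separately, can be illustrated with a minimal sketch. This is not the paper's module (GPCN's refinement is learned and coordinate-conditioned); the sketch only shows the lossless decomposition that such a module operates on:

```python
import numpy as np

def split_amplitude_phase(img):
    """Decompose a 2-D image into its Fourier amplitude and phase."""
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def recombine(amplitude, phase):
    """Invert the decomposition; any imaginary residue is numerical noise."""
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
amp, pha = split_amplitude_phase(img)
recon = recombine(amp, pha)
assert np.allclose(img, recon, atol=1e-10)  # round trip is exact
```

A common intuition, which this design appears to lean on, is that phase encodes structural layout while amplitude carries contrast and texture statistics, the component most affected by scanner and tracer changes; splitting them lets a network refine one without disturbing the other.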

Core claim

GPCN achieves precise quantitative recovery of both anatomical organs and focal lesions by combining multi-band contextual refinement via wavelet-based multiscale decomposition and long-range spatial modeling with frequency-aware spectral decoupling that performs coordinate-conditioned amplitude and phase refinement in the Fourier domain, thereby separating invariant topological anatomical structures from domain-specific noise across heterogeneous scanner and radiotracer conditions.

What carries the argument

Dual-domain GPCN architecture whose multi-band contextual refinement module and frequency-aware spectral decoupling module together isolate invariant topological anatomical structures from domain-specific noise.
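The wavelet half can likewise be sketched with a one-level 2-D Haar transform. The review does not say which wavelet basis GPCN uses, so Haar is a stand-in here: it splits an image into one low-frequency band (LL) and three detail bands (LH, HL, HH), the kind of sub-bands a multi-band module would refine at each scale.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar split into LL, LH, HL, HH sub-bands.
    img must have even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

x = np.random.default_rng(1).standard_normal((16, 16))
assert np.allclose(x, ihaar2d(*haar2d(x)))  # decomposition is invertible
```

Repeating the split on LL yields the multiscale pyramid; the network's long-range context modeling would then act within and across these bands.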

If this is right

  • GPCN outperforms representative baselines in both joint training and zero-shot cross-domain evaluation.
  • The network maintains stable quantitative accuracy on unseen scanner-tracer combinations.
  • Ablation, region-wise quantitative analysis, and downstream segmentation experiments all support the method.
  • Eliminating the CT scan removes an average effective dose of 10.8 mSv from the conventional pediatric PET protocol.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same separation of stable anatomy from variable acquisition effects could be tested on adult PET or on other modalities that face domain shifts.
  • Minimal additional fine-tuning on a handful of new-tracer examples might be sufficient if the frequency decoupling already captures most of the shift.
  • Region-wise error maps from the current experiments point to focal lesions as the most sensitive test case for any future generalization claim.

Load-bearing premise

The multi-band contextual refinement and frequency-aware spectral decoupling together can reliably keep anatomical structures fixed while removing scanner- and tracer-dependent variations.
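Whether the learned features really are domain-invariant is testable: a distribution-distance statistic such as maximum mean discrepancy (MMD), computed between invariant-branch features from two scanner-tracer domains, should stay near zero if the premise holds. A minimal numpy sketch with a Gaussian kernel (illustrative only; the paper reports no such statistic):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy between
    feature sets X (n, d) and Y (m, d), Gaussian kernel of width sigma."""
    def k(A, B):
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = rng.standard_normal((50, 3))      # hypothetical features, domain A
shifted = same + 3.0                     # domain-shifted copy
assert rbf_mmd2(same, same) < 1e-9       # identical distributions: ~0
assert rbf_mmd2(same, shifted) > 0.1     # a shift leaking into the features
```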

What would settle it

A clear rise in quantitative error metrics such as SUV bias or lesion contrast loss when GPCN is tested zero-shot on a scanner-tracer pair never seen during training would disprove the claimed cross-domain stability.
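The metrics such a test would track are simple to state. Below is a hedged sketch of three quantities named in this review (nMAE and NMSE from the generalization figures, plus VOI-based relative SUVmean error); the paper's exact definitions, e.g. the normalization, may differ:

```python
import numpy as np

def nmae(pred, ref):
    """Normalized mean absolute error against the reference (CT-corrected) PET."""
    return np.mean(np.abs(pred - ref)) / (np.mean(np.abs(ref)) + 1e-12)

def nmse(pred, ref):
    """Normalized mean squared error."""
    return np.sum((pred - ref) ** 2) / (np.sum(ref ** 2) + 1e-12)

def suv_mean_bias(pred, ref, voi_mask):
    """Relative SUVmean error (%) inside a volume-of-interest mask."""
    p, r = pred[voi_mask].mean(), ref[voi_mask].mean()
    return 100.0 * (p - r) / r

ref = np.ones((4, 4))
voi = np.zeros((4, 4), dtype=bool)
voi[:2, :2] = True
assert nmae(ref, ref) == 0.0
assert abs(suv_mean_bias(1.1 * ref, ref, voi) - 10.0) < 1e-6
```

A zero-shot failure would show up as these numbers rising on the held-out scanner-tracer pair while staying flat on the in-domain test set.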

Figures

Figures reproduced from arXiv: 2604.22894 by Huanyu Luo, Jia-Mian Wu, Jigang Yang, Jun Liu, Lingling Zheng, Qiang Gao, Shibai Yin, Siqi Li, Tai-Xiang Jiang, Xiaoya Wang.

Figure 1: Conceptual overview of PET correction paradigms. Comparison of (a) conventional CT-based correction, (b) standard direct DL approach with limited …
Figure 2: Overview of the proposed generalizable PET correction network (GPCN). MBCR is the multi-band contextual refinement. FASD is the frequency-aware …
Figure 3: Structure of the proposed MBCR. (a) Overview of the multi-band …
Figure 5: Frequency-aware spectral decoupling (FASD). (a) The dual-branch …
Figure 6: Joint histogram analysis demonstrating voxel-wise correlation between …
Figure 7: Quantitative analysis of clinical and radiomic metrics under joint training. Box plots show error of VOI-based SUVmax, SUVmean, GLCM Contrast, …
Figure 8: Qualitative visual results of the joint training strategy on the heterogeneous test set. Each patient case is presented as a two-row block, comprising …
Figure 9: Quantitative generalization on external scanners and tracers. Bar plots show PSNR, SSIM, nMAE, and NMSE for methods trained only on Siemens …
Figure 10: Quantitative generalization analysis of clinical and radiomic metrics on external datasets. Box plots display the relative error (%) of VOI-based …
Figure 11: Qualitative visual comparison of generalization performance on …
Figure 13: Lesion-specific quantitative ablation analysis. The bar chart compares …
Figure 14: Absolute relative SUV error as a function of voxel depth from …
Figure 15: Visual comparison of the downstream segmentation tasks. Top row: …
Original abstract

Computed tomography (CT)-based attenuation and scatter correction improves quantitative PET but adds radiation exposure that is particularly undesirable in pediatric imaging. Existing CT-free methods are commonly trained in homogeneous settings and often degrade under scanner or radiotracer shifts, which limits their clinical utility. We propose the Generalizable PET Correction Network (GPCN), a dual-domain network for domain-robust CT-free PET attenuation and scatter correction. GPCN combines a multi-band contextual refinement module, which models pediatric anatomical variability through wavelet-based multiscale decomposition and long-range spatial context modeling, with a frequency-aware spectral decoupling module, which performs coordinate-conditioned amplitude/phase refinement in the Fourier domain. By synergizing multi-band spatial contextual modeling with asymmetric frequency-spectrum decoupling, the network explicitly separates invariant topological structures from domain-specific noise, thereby achieving precise quantitative recovery of both anatomical organs and focal lesions. This design aims to separate anatomy-dominant structures from domain-sensitive spectral residuals and to improve robustness across heterogeneous imaging conditions. We train and evaluate the method on 1085 pediatric whole-body PET scans acquired with two scanners and five radiotracers. In both joint training and zero-shot cross-domain evaluation, GPCN outperforms representative baselines and maintains stable quantitative accuracy on unseen scanner-tracer combinations. The method is further supported by ablation, region-wise quantitative analysis, and downstream segmentation experiments. In our cohort, the CT component of the conventional protocol corresponded to an average effective dose of 10.8 mSv, indicating the potential clinical value of reliable CT-free correction for pediatric PET.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces the Generalizable PET Correction Network (GPCN), a dual-domain deep learning architecture for CT-free attenuation and scatter correction in pediatric PET. It combines a multi-band contextual refinement module (wavelet-based multiscale decomposition with long-range spatial context) and a frequency-aware spectral decoupling module (coordinate-conditioned amplitude/phase refinement in the Fourier domain). The central claim is that this design separates invariant topological anatomical structures from domain-specific spectral residuals, yielding robust quantitative performance across scanner and radiotracer shifts. The method is trained on 1085 pediatric whole-body PET scans from two scanners and five radiotracers; it reports outperformance over baselines in both joint-training and zero-shot cross-domain settings, with supporting ablation studies, region-wise quantitative analysis, and downstream segmentation experiments. Potential clinical benefit is noted via avoidance of an average 10.8 mSv CT dose.

Significance. If the reported zero-shot stability holds, the work has clear clinical significance for pediatric PET by removing ionizing CT exposure while preserving quantification accuracy for organs and lesions. Strengths include the large multi-domain cohort, explicit zero-shot protocol, ablation controls, and downstream task validation, which together provide a practical test of generalizability. The frequency-domain decoupling strategy is a potentially useful mechanism for domain invariance if its claimed separation can be directly verified.

major comments (2)
  1. Methods (frequency-aware spectral decoupling module): The assertion that coordinate-conditioned amplitude/phase refinement 'explicitly separates invariant topological structures from domain-specific noise' is load-bearing for the generalizability claim, yet the results section provides only downstream metrics and ablations; no cross-domain distribution alignment statistics, invariant-branch visualizations, or residual-leakage analysis are reported to confirm the separation occurs as designed.
  2. Results section: While outperformance and 'stable quantitative accuracy' are claimed for zero-shot scanner-tracer pairs, the reported metrics lack error bars, statistical significance tests (e.g., paired t-tests or Wilcoxon), and explicit exclusion criteria; without these the magnitude and reliability of the cross-domain improvements cannot be fully assessed.
minor comments (2)
  1. The abstract refers to 'representative baselines' without naming the specific methods or citing their original papers; this list should appear explicitly in the methods or experiments section.
  2. Figure legends: Figure captions describing the decoupled spectral components would be clearer if they explicitly labeled which sub-bands or phases are intended to represent invariant anatomy versus domain residuals.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and insightful comments, which have helped us identify areas to strengthen the manuscript. We address each major comment point-by-point below, with proposed revisions where appropriate.

read point-by-point responses
  1. Referee: Methods (frequency-aware spectral decoupling module): The assertion that coordinate-conditioned amplitude/phase refinement 'explicitly separates invariant topological structures from domain-specific noise' is load-bearing for the generalizability claim, yet the results section provides only downstream metrics and ablations; no cross-domain distribution alignment statistics, invariant-branch visualizations, or residual-leakage analysis are reported to confirm the separation occurs as designed.

    Authors: We agree that direct empirical verification of the separation mechanism would strengthen the generalizability claims. The frequency-aware spectral decoupling module is architecturally designed to achieve this by performing coordinate-conditioned refinement separately on amplitude and phase components in the Fourier domain, with the intent of isolating anatomy-dominant invariant structures from domain-sensitive residuals. The existing ablation studies and cross-domain performance gains provide indirect support for this design choice. In the revised manuscript, we will add: (i) visualizations of the amplitude/phase components pre- and post-refinement across scanner-tracer domains, (ii) quantitative cross-domain alignment statistics (e.g., maximum mean discrepancy on invariant-branch features), and (iii) residual-leakage analysis in the frequency domain comparing corrected PET outputs to ground truth. These additions will directly address the request for confirmation without changing the core method or results. revision: yes

  2. Referee: Results section: While outperformance and 'stable quantitative accuracy' are claimed for zero-shot scanner-tracer pairs, the reported metrics lack error bars, statistical significance tests (e.g., paired t-tests or Wilcoxon), and explicit exclusion criteria; without these the magnitude and reliability of the cross-domain improvements cannot be fully assessed.

    Authors: We acknowledge the importance of statistical rigor and transparency for evaluating the reliability of the zero-shot results. In the revised manuscript, we will augment all quantitative tables and figures with error bars (reporting standard deviation across subjects) and will include paired statistical significance tests (Wilcoxon signed-rank test, chosen for robustness to non-normality) comparing GPCN against baselines in both joint-training and zero-shot settings. Regarding exclusion criteria, the cohort definition and quality control steps are already specified in the Methods (Section 3.1), including age range, scan completeness, and artifact exclusion. We will add an explicit summary paragraph and table in the Results section listing final case counts per domain, any exclusions applied during evaluation, and the rationale, to allow full assessment of the reported improvements. revision: yes
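The paired test the authors propose is standard; in practice one would call scipy.stats.wilcoxon, but its logic fits in a short normal-approximation sketch (no tie correction, adequate only for roughly 20 or more pairs):

```python
import numpy as np

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test via the normal approximation.
    Returns (W, z); a z far below zero means the paired differences are
    consistently one-sided. Sketch only: ties are not rank-averaged."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                     # drop zero differences
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0   # ranks of |d|
    W = min(ranks[d > 0].sum(), ranks[d < 0].sum())
    mu = n * (n + 1) / 4.0
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return W, (W - mu) / sigma

# Hypothetical per-subject errors: method A consistently below method B.
a = np.linspace(1.0, 2.0, 20)
b = a + np.linspace(0.5, 1.5, 20)
W, z = wilcoxon_signed_rank(b, a)
assert W == 0.0 and z < -3.0   # all differences share a sign
```

Run per scanner-tracer domain, this is the comparison of GPCN against each baseline that the rebuttal promises to add.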

Circularity Check

0 steps flagged

No significant circularity in derivation or evaluation chain

full rationale

The paper trains the GPCN architecture on 1085 pediatric PET scans and reports quantitative accuracy plus outperformance specifically on held-out zero-shot cross-domain test sets drawn from distinct scanners and radiotracers. These metrics are measured on data never seen during training and are not equivalent to any training loss term, fitted parameter, or architectural definition by construction. The claimed separation of invariant topological structures from domain-specific noise is presented as a design objective of the multi-band and frequency-aware modules, but the reported results (ablation studies, region-wise metrics, downstream segmentation) constitute independent empirical verification rather than a tautological reduction. No self-citations, uniqueness theorems, or ansatzes imported from prior author work appear as load-bearing steps in the provided text. The derivation chain therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on the empirical effectiveness of the proposed dual-domain modules rather than on physical first-principles derivations; the separation of invariant anatomy from domain noise is postulated as an architectural property without independent verification outside the training data.

axioms (2)
  • domain assumption Wavelet-based multiscale decomposition combined with long-range spatial context modeling captures pediatric anatomical variability
    Invoked to justify the multi-band contextual refinement module.
  • ad hoc to paper Coordinate-conditioned amplitude/phase refinement in the Fourier domain separates invariant topological structures from domain-specific spectral residuals
    Central design hypothesis of the frequency-aware spectral decoupling module.

pith-pipeline@v0.9.0 · 5604 in / 1480 out tokens · 58165 ms · 2026-05-08T09:19:58.232736+00:00 · methodology

