pith. machine review for the scientific record.

arxiv: 2605.01741 · v1 · submitted 2026-05-03 · 💻 cs.CV

Recognition: unknown

Adaptive Texture-aware Masking for Self-Supervised Learning in 3D Dental CBCT Analysis

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 15:19 UTC · model grok-4.3

classification 💻 cs.CV
keywords adaptive masking · self-supervised learning · dental CBCT · masked image modeling · texture variation · 3D volumetric analysis · representation learning · medical image pretraining

The pith

An adaptive masking strategy that targets complex texture regions in dental CBCT volumes produces stronger self-supervised representations than random masking.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper proposes ATMask to improve masked image modeling for 3D dental cone-beam CT scans. Instead of masking randomly, it computes texture variation between adjacent slices to concentrate masks on regions with intricate structures or subtle pathological changes. Pretraining with this strategy on a new collection of 6,314 scans pushes the model to infer complex 3D morphology and inter-slice transitions. The result is more effective representation learning for tasks like analyzing teeth and jaws, which matters because labeled medical data is hard to obtain at scale.

Core claim

The core discovery is that computing an inter-slice texture variation map and selectively masking high-variation areas during pre-training forces the model to develop richer contextual representations for 3D morphological transitions in dental CBCT data. This adaptive approach outperforms standard random masking and other self-supervised baselines on three downstream tasks, enabling more data-efficient representation learning.

What carries the argument

The inter-slice texture variation map used to selectively mask high structural or textural complexity regions.
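The paper's exact construction is not reproduced on this page. As a rough illustration, here is a minimal sketch of one plausible reading: absolute intensity differences between adjacent slices, pooled into patch scores, with the highest-variation patches masked. The function names, the difference operator, and the default patch size and masking ratio are all assumptions, not the authors' implementation.

import numpy as np

def texture_variation_map(volume):
    """Per-voxel inter-slice variation |I[z+1] - I[z]|, padded to keep depth D.

    The exact operator (absolute difference, gradient, local texture
    statistic) is an assumption; the paper only says 'inter-slice
    texture variation'.
    """
    diff = np.abs(np.diff(volume.astype(np.float32), axis=0))  # (D-1, H, W)
    return np.concatenate([diff, diff[-1:]], axis=0)           # (D, H, W)

def adaptive_mask(volume, patch=16, ratio=0.6):
    """Boolean (d, h, w) grid over non-overlapping 3D patches; True = masked.

    Masks the top `ratio` fraction of patches by mean variation, in the
    spirit of ATMask's selective masking; 16 and 0.6 are placeholder values.
    """
    var = texture_variation_map(volume)
    d, h, w = (s // patch for s in var.shape)
    # Pool per-voxel variation into one mean score per 3D patch.
    scores = var[:d * patch, :h * patch, :w * patch].reshape(
        d, patch, h, patch, w, patch).mean(axis=(1, 3, 5))
    k = max(1, int(ratio * scores.size))
    thresh = np.partition(scores.ravel(), -k)[-k]
    return scores >= thresh

Replacing the score-based selection with a uniform random draw over the same patch grid recovers the random-masking baseline the paper compares against.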

If this is right

  • Pretrained models show improved accuracy and efficiency on dental CBCT downstream tasks such as segmentation and detection.
  • The method makes better use of unlabeled data, reducing reliance on scarce annotated datasets.
  • High-variation areas corresponding to anatomical boundaries and subtle changes receive more focus in learning.
  • Releasing the 6,314-scan dataset supports further advances in dental AI pretraining.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar texture-based adaptive masking could enhance self-supervised learning in other 3D medical imaging fields like radiology or oncology scans.
  • Integrating this with attention mechanisms might further refine the identification of critical regions.
  • Testing the approach on datasets with known subtle pathologies could confirm its advantage in detecting early disease signs.

Load-bearing premise

The inter-slice texture variation map accurately identifies regions with high structural or textural complexity that are critical for diagnostic tasks.

What would settle it

If experiments on the downstream tasks reveal that models pretrained with ATMask perform no better than, or worse than, those using random masking, the benefit of the adaptive strategy would be disproven.
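Operationally, that test is an ablation in which every pipeline component is held fixed except the mask generator. A minimal sketch of the contrast, where model and loss_fn are placeholders standing in for whatever encoder/decoder and reconstruction loss the paper actually uses:

import numpy as np

def random_mask(shape, ratio, rng=None):
    """Baseline: mask a uniformly random `ratio` fraction of patches."""
    rng = rng or np.random.default_rng(0)
    n = int(np.prod(shape))
    flat = np.zeros(n, dtype=bool)
    flat[rng.choice(n, int(ratio * n), replace=False)] = True
    return flat.reshape(shape)

def pretrain_step(volume, mask_fn, model, loss_fn):
    """One MIM step; mask_fn is the only component the ablation varies."""
    mask = mask_fn(volume)              # adaptive (ATMask-style) or random
    pred = model(volume, mask)          # reconstruct the masked patches
    return loss_fn(pred, volume, mask)  # identical loss in both arms

Running pretrain_step with the adaptive scorer versus random_mask, on the same corpus and with the same downstream evaluation, is the comparison whose null result would falsify the claim.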

Figures

Figures reproduced from arXiv: 2605.01741 by He Meng, Jianfeng Ren, Kian Ming Lim, Linlin Shen, Xinquan Yang, Xuguang Li, Yongqiang Deng.

Figure 1: Comparison of our adaptive masking with existing r…
Figure 2: The architecture of the proposed ATMask.
Figure 3: Comparison of different masking ratios for variou…
Figure 4: Visual comparison of different methods under vari…
Figure 5: Sensitivity analyses of hyper-parameters.
read the original abstract

Cone Beam Computed Tomography (CBCT) is pivotal for 3D diagnostic imaging in dentistry. However, the development of robust AI models for volumetric analysis is often constrained by the scarcity of large, annotated datasets. Self-supervised learning (SSL), particularly Masked Image Modeling (MIM), offers a promising pathway to leverage unlabeled data. A limitation of standard MIM is its reliance on random masking, which fails to prioritize diagnostically critical regions in dental CBCT volumes, such as subtle pathological changes and intricate anatomical boundaries. To address this, we propose ATMask, a novel adaptive masking strategy. Instead of applying random masks or employing computationally intensive attention modules, ATMask computes an inter-slice texture variation map to identify regions with high structural or textural complexity. These high-variation areas are then selectively masked during pre-training, compelling the model to learn richer contextual representations essential for inferring complex 3D morphological transitions. Furthermore, we contribute the first large-scale CBCT dataset, curated from both public and private sources, comprising 6,314 scans, for the dental AI model pretraining. Extensive experiments on three downstream dental CBCT tasks demonstrate that our ATMask enables more data-efficient and powerful representation learning than standard random masking and other advanced SSL baselines. The dataset and code will be released.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes ATMask, an adaptive masking strategy for masked image modeling (MIM) in self-supervised pretraining on 3D dental CBCT volumes. Instead of random masking, it computes an inter-slice texture variation map to selectively mask high structural/textural complexity regions, with the goal of forcing the model to learn richer 3D contextual representations for downstream tasks. The authors also release a new large-scale unlabeled CBCT dataset of 6,314 scans curated from public and private sources, and claim that ATMask yields more data-efficient and powerful representations than standard random masking and other SSL baselines on three downstream dental CBCT tasks.

Significance. If the central claims are supported by rigorous experiments, the work could advance data-efficient SSL for volumetric medical imaging by directing pretraining attention toward diagnostically relevant regions without heavy attention modules. The release of a large CBCT pretraining corpus and code would be a concrete community contribution, addressing the scarcity of annotated dental data.

major comments (2)
  1. [Abstract] The claim that 'extensive experiments on three downstream dental CBCT tasks demonstrate that our ATMask enables more data-efficient and powerful representation learning' is unsupported by any quantitative metrics, ablation results, or experimental details in the manuscript text. Without these, it is impossible to determine whether observed gains arise from the adaptive masking logic, the 6,314-scan corpus, or other implementation choices.
  2. [Method] Inter-slice texture variation map: the core assumption that high-variation voxels reliably coincide with subtle pathological changes and intricate anatomical boundaries (rather than scanner artifacts, uniform bone, or low-signal regions) is stated but not validated. No quantitative overlap analysis against pathology annotations, no expert-annotated validation set, and no ablation that isolates the adaptive component while holding the dataset fixed is provided; this assumption is load-bearing for the claim that selective masking improves 3D contextual learning.
minor comments (2)
  1. The paper states that the dataset and code will be released; specifying the exact public/private sources, curation criteria, and any preprocessing steps for the 6,314 scans would strengthen reproducibility.
  2. Notation for the texture variation map computation could be clarified with an explicit equation or pseudocode to allow independent reimplementation; one possible form is sketched below.
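For concreteness, here is one plausible formalization consistent with the abstract's description, an editorial guess at the notation rather than the paper's actual equation. Score each patch by pooled inter-slice intensity differences:

$V_i(x, y) = \left| I_{i+1}(x, y) - I_i(x, y) \right|, \qquad s_p = \frac{1}{|p|} \sum_{(i, x, y) \in p} V_i(x, y)$

where $I_i$ is the $i$-th axial slice, $V_i$ the resulting variation map, and $s_p$ the score of patch $p$; the patches with the largest $s_p$ are masked during pretraining. Whether the map uses an absolute difference, a gradient magnitude, or a local texture statistic is precisely the detail an explicit equation would pin down.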

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback and the recommendation for major revision. We address each major comment point by point below, providing clarifications from the manuscript and committing to targeted revisions that strengthen the presentation of results and validation without altering the core claims.

read point-by-point responses
  1. Referee: [Abstract] The claim that 'extensive experiments on three downstream dental CBCT tasks demonstrate that our ATMask enables more data-efficient and powerful representation learning' is unsupported by any quantitative metrics, ablation results, or experimental details in the manuscript text. Without these, it is impossible to determine whether observed gains arise from the adaptive masking logic, the 6,314-scan corpus, or other implementation choices.

    Authors: We agree that the abstract is too concise and does not reference specific metrics or sections. The full manuscript details the experimental protocol, quantitative results, and ablations in Section 4 (including Tables 1–3 comparing ATMask against random masking and other SSL baselines on the three downstream tasks, with data-efficiency curves) and Section 4.3 (ablations isolating the masking strategy). To make the abstract claim self-contained and traceable, we will revise it to include key quantitative gains (e.g., average Dice improvement and accuracy uplift) while explicitly citing the experimental sections. This revision will clarify that performance differences are measured against controlled baselines on the same corpus. revision: yes

  2. Referee: [Method] Inter-slice texture variation map: the core assumption that high-variation voxels reliably coincide with subtle pathological changes and intricate anatomical boundaries (rather than scanner artifacts, uniform bone, or low-signal regions) is stated but not validated. No quantitative overlap analysis against pathology annotations, no expert-annotated validation set, and no ablation that isolates the adaptive component while holding the dataset fixed is provided; this assumption is load-bearing for the claim that selective masking improves 3D contextual learning.

    Authors: We acknowledge that the assumption requires stronger empirical support. The current manuscript provides qualitative validation via visualizations (Figure 4) showing alignment of high-variation regions with anatomical boundaries and complex structures, and the main results already include an ablation comparing ATMask to random masking. However, we did not report quantitative overlap with pathology annotations or a fully isolated ablation holding the exact 6,314-scan corpus fixed while varying only the masking strategy. In the revision we will add: (1) a controlled ablation study using the identical pretraining corpus and downstream splits to isolate the adaptive component, and (2) overlap analysis (e.g., Dice/IoU) between high-variation voxels and expert-annotated pathological regions on the subset of scans where such annotations exist. These additions will directly address the validation gap. revision: yes
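The overlap analysis the authors commit to reduces to standard set-overlap metrics between two boolean voxel masks. A minimal sketch, where the mask names are placeholders rather than variables from the paper:

import numpy as np

def dice_iou(mask_a, mask_b):
    """Dice and IoU between two boolean voxel masks of equal shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = float(np.logical_and(a, b).sum())
    dice = 2.0 * inter / (a.sum() + b.sum() + 1e-8)
    iou = inter / (np.logical_or(a, b).sum() + 1e-8)
    return dice, iou

For example, dice, iou = dice_iou(high_variation_voxels, pathology_annotation) on the annotated subset would quantify how well the variation map tracks expert-marked pathology.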

Circularity Check

0 steps flagged

No circularity: the ATMask masking logic is computed from unlabeled inputs alone, independent of downstream labels and evaluation metrics

full rationale

The paper defines ATMask via an inter-slice texture variation map computed directly from unlabeled CBCT volumes (no labels or downstream targets involved), then applies selective masking during pretraining on the 6,314-scan corpus before evaluating on separate downstream tasks. No equation, parameter fit, or self-citation reduces the claimed gains to a tautology or construction from the evaluation metrics themselves. The assumption that high-variation regions align with critical anatomy is an empirical hypothesis tested via ablation and baselines, not a definitional equivalence. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that texture variation correlates with diagnostic importance and that MIM benefits from masking such regions; no free parameters or invented entities are mentioned in the abstract.

axioms (1)
  • domain assumption Self-supervised masked image modeling can learn useful representations from unlabeled 3D medical volumes.
    Standard premise in SSL literature invoked to justify the pretraining approach.

pith-pipeline@v0.9.0 · 5552 in / 1232 out tokens · 39126 ms · 2026-05-10T15:19:50.433399+00:00 · methodology


Reference graph

Works this paper leans on

38 extracted references · 3 canonical work pages · 1 internal anchor

  1. [1]

    BEiT: BERT Pre-Training of Image Transformers

    Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021.

  2. [2]

    Segmenting the inferior alveolar canal in CBCTs volumes: the ToothFairy challenge

    Federico Bolelli, Luca Lumetti, Shankeeth Vinayahalingam, Mattia Di Bartolomeo, Arrigo Pellacani, Kevin Marchesini, Niels van Nistelrooij, Pieter van Lierop, Tong Xi, Yusheng Liu, et al. Segmenting the inferior alveolar canal in CBCTs volumes: the ToothFairy challenge. IEEE Transactions on Medical Imaging, 2024.

  3. [3]

    Segmenting maxillofacial structures in CBCT volumes

    Federico Bolelli, Kevin Marchesini, Niels van Nistelrooij, Luca Lumetti, Vittorio Pipoli, Elisa Ficarra, Shankeeth Vinayahalingam, and Costantino Grana. Segmenting maxillofacial structures in CBCT volumes. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5238–5248, 2025.

  4. [4]

    Unsupervised learning of visual features by contrasting cluster assignments

    Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020.

  5. [5]

    A simple framework for contrastive learning of visual representations

    Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607, 2020.

  6. [6]

    Masked image modeling advances 3D medical image analysis

    Zekai Chen, Devansh Agarwal, Kshitij Aggarwal, Wiem Safta, Mariann Micsinai Balan, and Kevin Brown. Masked image modeling advances 3D medical image analysis. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1970–1980, 2023.

  7. [7]

    Deep segmentation of the mandibular canal: a new 3D annotated dataset of CBCT volumes

    Marco Cipriano, Stefano Allegretti, Federico Bolelli, Mattia Di Bartolomeo, Federico Pollastri, Arrigo Pellacani, Paolo Minafra, Alexandre Anesi, and Costantino Grana. Deep segmentation of the mandibular canal: a new 3D annotated dataset of CBCT volumes. IEEE Access, 10:11500–11510, 2022.

  8. [8]

    CTooth: a fully annotated 3D dataset and benchmark for tooth volume segmentation on cone beam computed tomography images

    Weiwei Cui, Yaqi Wang, Qianni Zhang, Huiyu Zhou, Dan Song, Xingyong Zuo, Gangyong Jia, and Liaoyuan Zeng. CTooth: a fully annotated 3D dataset and benchmark for tooth volume segmentation on cone beam computed tomography images. In International Conference on Intelligent Robotics and Applications, pages 191–200. Springer, 2022.

  9. [9]

    A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images

    Zhiming Cui, Yu Fang, Lanzhuju Mei, Bojun Zhang, Bo Yu, Jiameng Liu, Caiwen Jiang, Yuhang Sun, Lei Ma, Jiawei Huang, et al. A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nature Communications, 13(1):2096, 2022.

  10. [10]

    MCMAE: Masked convolution meets masked autoencoders

    Peng Gao, Teli Ma, Hongsheng Li, Ziyi Lin, Jifeng Dai, and Yu Qiao. MCMAE: Masked convolution meets masked autoencoders. Advances in Neural Information Processing Systems, 35:35632–35644, 2022.

  11. [11]

    Bootstrap your own latent: a new approach to self-supervised learning

    Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284, 2020.

  12. [12]

    Masked autoencoders are scalable vision learners

    Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.

  13. [13]

    Momentum contrast for unsupervised visual representation learning

    Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.

  14. [14]

    Dental caries detection in children using intraoral scans and deep learning

    Bree Jones, Mathias Lambach, Tong Chen, Stavroula Michou, Nicky Kilpatrick, Nigel Curtis, David P Burgner, Christoph Vannahme, and Mihiri Silva. Dental caries detection in children using intraoral scans and deep learning. Journal of Dentistry, 160:105906, 2025.

  15. [15]

    What to hide from your students: Attention-guided masked image modeling

    Ioannis Kakogeorgiou, Spyros Gidaris, Bill Psomas, Yannis Avrithis, Andrei Bursuc, Konstantinos Karantzalos, and Nikos Komodakis. What to hide from your students: Attention-guided masked image modeling. In European Conference on Computer Vision, pages 300–318. Springer, 2022.

  16. [16]

    Comparison of vertical measurements between panoramic radiograph images and corresponding cone-beam computed tomography scans

    Ali-Reza Ketabi, Stefan Hassfeld, Laurentia Schuster, Sandra Ketabi, Julius Stueben, and Andree Piwowarczyk. Comparison of vertical measurements between panoramic radiograph images and corresponding cone-beam computed tomography scans. Prosthesis, 7(6):131, 2025.

  17. [17]

    Comparison of the diagnostic accuracy of an AI-based system for dental caries detection and clinical evaluation conducted by dentists

    Jakub Kwiatek, Marta Leśna, Wiktor Piskórz, and Justyna Kaczewiak. Comparison of the diagnostic accuracy of an AI-based system for dental caries detection and clinical evaluation conducted by dentists. Journal of Clinical Medicine, 14(5):1566, 2025.

  18. [18]

    Preclinical investigation of artificial intelligence-assisted implant surgery planning for single tooth defects: A case series study

    Hongyang Ma, Yuwei Wu, Hailong Bai, Zineng Xu, Peng Ding, Xuliang Deng, and Zhihui Tang. Preclinical investigation of artificial intelligence-assisted implant surgery planning for single tooth defects: A case series study. Journal of Oral Rehabilitation, 52(8):1220–1227, 2025.

  19. [19]

    A systematic review comparing the effective radiation dose of musculoskeletal cone beam computed tomography to other diagnostic imaging modalities

    K Mason, Gareth Iball, D Hinchcliffe, and Beverly Snaith. A systematic review comparing the effective radiation dose of musculoskeletal cone beam computed tomography to other diagnostic imaging modalities. European Journal of Radiology, 177:111558, 2024.

  20. [20]

    Structure-aware semantic discrepancy and consistency for 3D medical image self-supervised learning

    Tan Pan, Zhaorui Tan, Kaiyu Guo, Dongli Xu, Weidi Xu, Chen Jiang, Xin Guo, Yuan Qi, and Yuan Cheng. Structure-aware semantic discrepancy and consistency for 3D medical image self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20257–20267, 2025.

  21. [21]

    Segmentation of 3D craniofacial imaging and volumetric measurement

    Nalla Maheswara Rao and Bala Chakravarthy Neelapu. Segmentation of 3D craniofacial imaging and volumetric measurement. In Applications of Three-dimensional Imaging for Craniofacial Region, pages 131–141. Springer, 2024.

  22. [22]

    Exploring ChatGPT's potential in diagnosing oral and maxillofacial pathologies: a study of 123 challenging cases

    Melek Tassoker. Exploring ChatGPT's potential in diagnosing oral and maxillofacial pathologies: a study of 123 challenging cases. BMC Oral Health, 25(1):1187, 2025.

  23. [23]

    Hard patches mining for masked image modeling

    Haochen Wang, Kaiyou Song, Junsong Fan, Yuxi Wang, Jin Xie, and Zhaoxiang Zhang. Hard patches mining for masked image modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10375–10385, 2023.

  24. [24]

    SwinMM: masked multi-view with Swin transformers for 3D medical image segmentation

    Yiqing Wang, Zihan Li, Jieru Mei, Zihao Wei, Li Liu, Chen Wang, Shengtian Sang, Alan L Yuille, Cihang Xie, and Yuyin Zhou. SwinMM: masked multi-view with Swin transformers for 3D medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 486–496. Springer, 2023.

  25. [25]

    MICCAI STSR 2025 challenge: Semi-supervised teeth and pulp segmentation and CBCT-IOS registration

    Yaqi Wang, Zhi Li, Chengyu Wu, Jun Liu, Yifan Zhang, Jialuo Chen, Jiaxue Ni, Qian Luo, Jin Liu, Can Han, et al. MICCAI STSR 2025 challenge: Semi-supervised teeth and pulp segmentation and CBCT-IOS registration. arXiv preprint arXiv:2512.02867, 2025.

  26. [26]

    VoCo: A simple-yet-effective volume contrastive learning framework for 3D medical image analysis

    Linshan Wu, Jiaxin Zhuang, and Hao Chen. VoCo: A simple-yet-effective volume contrastive learning framework for 3D medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22873–22882, 2024.

  27. [27]

    MedIM: Boost medical image representation via radiology report-guided masking

    Yutong Xie, Lin Gu, Tatsuya Harada, Jianpeng Zhang, Yong Xia, and Qi Wu. MedIM: Boost medical image representation via radiology report-guided masking. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 13–23. Springer, 2023.

  28. [28]

    Rethinking masked image modelling for medical image representation

    Yutong Xie, Lin Gu, Tatsuya Harada, Jianpeng Zhang, Yong Xia, and Qi Wu. Rethinking masked image modelling for medical image representation. Medical Image Analysis, 98:103304, 2024.

  29. [29]

    SimMIM: A simple framework for masked image modeling

    Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. SimMIM: A simple framework for masked image modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9653–9663, 2022.

  30. [30]

    Hybrid masked image modeling for 3D medical image segmentation

    Zhaohu Xing, Lei Zhu, Lequan Yu, Zhiheng Xing, and Liang Wan. Hybrid masked image modeling for 3D medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 28(4):2115–2125, 2024.

  31. [31]

    Self-supervised medical image segmentation using deep reinforced adaptive masking

    Zhenghua Xu, Yunxin Liu, Gang Xu, and Thomas Lukasiewicz. Self-supervised medical image segmentation using deep reinforced adaptive masking. IEEE Transactions on Medical Imaging, 44(1):180–193, 2024.

  32. [32]

    Two-stream regression network for dental implant position prediction

    Xinquan Yang, Xuguang Li, Xuechen Li, Wenting Chen, Linlin Shen, Xin Li, and Yongqiang Deng. Two-stream regression network for dental implant position prediction. Expert Systems with Applications, 235:121135, 2024.

  33. [33]

    RegFreeNet: A registration-free network for CBCT-based 3D dental implant planning

    Xinquan Yang, Xuguang Li, Mianjie Zheng, Xuefen Liu, Kun Tang, Kian Ming Lim, He Meng, Jianfeng Ren, and Linlin Shen. RegFreeNet: A registration-free network for CBCT-based 3D dental implant planning. arXiv preprint arXiv:2601.14703, 2026.

  34. [34]

    TCEIP: Text condition embedded regression network for dental implant position prediction

    Xinquan Yang, Jinheng Xie, Xuguang Li, Xuechen Li, Xin Li, Linlin Shen, and Yongqiang Deng. TCEIP: Text condition embedded regression network for dental implant position prediction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 317–326. Springer, 2023.

  35. [35]

    Analysis of digitalized panorama and cone beam computed tomographic image distortion for the diagnosis of dental implant surgery

    Jin-hyuk Yim, Dong-mok Ryu, Baek-soo Lee, and Yong-dae Kwon. Analysis of digitalized panorama and cone beam computed tomographic image distortion for the diagnosis of dental implant surgery. Journal of Craniofacial Surgery, 22(2):669–673, 2011.

  36. [36]

    Comparing radiation doses in CBCT and medical CT imaging for dental applications

    Mahmud U Zaman. Comparing radiation doses in CBCT and medical CT imaging for dental applications. Journal of Pharmacy and Bioallied Sciences, 16(Suppl 1):S883–S885, 2024.

  37. [37]

    A unified visual information preservation framework for self-supervised pre-training in medical image analysis

    Hong-Yu Zhou, Chixiang Lu, Chaoqi Chen, Sibei Yang, and Yizhou Yu. A unified visual information preservation framework for self-supervised pre-training in medical image analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7):8020–8035, 2023.

  38. [38]

    Advancing volumetric medical image segmentation via global-local masked autoencoders

    Jiaxin Zhuang, Luyang Luo, Qiong Wang, Mingxiang Wu, Lin Luo, and Hao Chen. Advancing volumetric medical image segmentation via global-local masked autoencoders. IEEE Transactions on Medical Imaging, 2025.