Recognition: 2 theorem links
Adaptive Dual Residual U-Net with Attention Gate and Multiscale Spatial Attention Mechanisms (ADRUwAMS)
Pith reviewed 2026-05-10 17:57 UTC · model grok-4.3
The pith
The ADRUwAMS model segments glioma tumors in MRI scans by combining adaptive dual residual blocks with attention gates and multiscale spatial attention, reaching Dice scores of 0.9229 on whole tumor, 0.8432 on tumor core, and 0.8004 on enhancing tumor.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The ADRUwAMS architecture integrates adaptive dual residual networks that preserve both high-level semantic information and low-level details, attention gates that compute coefficients from gating and input signals, and multiscale spatial attention that produces scaled maps to retain the most relevant tumor features. When trained on BraTS 2020 this yields the reported Dice coefficients of 0.9229 for whole tumor, 0.8432 for tumor core, and 0.8004 for enhancing tumor.
What carries the argument
ADRUwAMS, an Adaptive Dual Residual U-Net augmented with attention gates and multiscale spatial attention that computes attention coefficients and combines scaled feature maps to emphasize tumor regions.
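The abstract does not specify the gate's exact form. A minimal NumPy sketch of a standard additive attention gate — coefficients computed from a gating signal and an input feature map — where the weight shapes and the ReLU/sigmoid choices are assumptions, not confirmed details of the paper:

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate sketch.
    x, g:  (C, H, W) input and gating feature maps
    W_x, W_g: (C_int, C) projections to an intermediate space
    psi:   (1, C_int) projection to a single attention channel
    Returns the gated features and the (1, H, W) coefficient map."""
    # Project both signals, add, and pass through ReLU.
    q = np.einsum('ic,chw->ihw', W_x, x) + np.einsum('ic,chw->ihw', W_g, g)
    q = np.maximum(q, 0.0)
    # Sigmoid squashes coefficients into (0, 1).
    alpha = 1.0 / (1.0 + np.exp(-np.einsum('oc,chw->ohw', psi, q)))
    return x * alpha, alpha

rng = np.random.default_rng(0)
C, C_int, H, W = 4, 2, 8, 8
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((C, H, W))
out, alpha = attention_gate(x, g,
                            rng.standard_normal((C_int, C)),
                            rng.standard_normal((C_int, C)),
                            rng.standard_normal((1, C_int)))
assert out.shape == (C, H, W)
```

The coefficient map `alpha` is what the review's inspection claims rely on: because it lies in (0, 1) per spatial location, it can be visualized directly as a saliency map.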
If this is right
- The model can delineate whole tumor, tumor core, and enhancing tumor regions simultaneously in a single forward pass.
- Attention coefficients generated by the gates can be inspected to show which image regions drive each sub-region prediction.
- The multiscale spatial attention maps can be reused to highlight salient tumor boundaries at different resolutions.
- Training for 200 epochs on BraTS 2019 and 2020 produces stable convergence under the ReLU activation used.
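One way the multiscale attention maps mentioned above could be produced and combined — sketched with assumed average-pooling scales and a channel-mean attention map, neither of which is confirmed by the paper:

```python
import numpy as np

def spatial_attention(feat):
    """Single-scale spatial attention: sigmoid over the channel-mean map."""
    m = feat.mean(axis=0)                       # (H, W)
    return 1.0 / (1.0 + np.exp(-m))

def multiscale_spatial_attention(feat, scales=(1, 2, 4)):
    """Average attention maps computed at several downsampled scales.
    feat: (C, H, W) with H, W divisible by every scale."""
    C, H, W = feat.shape
    maps = []
    for s in scales:
        # Average-pool with stride s, then nearest-neighbor upsample back.
        pooled = feat.reshape(C, H // s, s, W // s, s).mean(axis=(2, 4))
        a = spatial_attention(pooled)
        maps.append(np.repeat(np.repeat(a, s, axis=0), s, axis=1))
    alpha = np.mean(maps, axis=0)               # combined (H, W) map
    return feat * alpha, alpha

feat = np.random.default_rng(1).standard_normal((3, 8, 8))
out, alpha = multiscale_spatial_attention(feat)
```

The coarser scales respond to large structures (whole tumor) while the finest scale preserves boundary detail, which is the intuition behind reusing the maps at different resolutions.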
Where Pith is reading between the lines
- If the architecture generalizes, it could be applied to other multi-modal medical segmentation tasks such as liver or prostate lesion delineation without major redesign.
- The attention maps might serve as a starting point for uncertainty estimation by measuring how consistently the model focuses on the same voxels across training runs.
- Replacing the residual blocks with other lightweight skip-connection variants could test whether the dual adaptive structure is the primary driver of the observed scores.
Load-bearing premise
That the reported Dice scores arise mainly from the new residual and attention modules rather than from unstated choices in preprocessing, data augmentation, or the exact training schedule.
What would settle it
A controlled ablation study that trains the identical base U-Net with and without the adaptive dual residual blocks, attention gates, and multiscale spatial attention on the same BraTS 2020 split and reports the resulting Dice differences.
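Such an ablation grid can be enumerated mechanically. A hypothetical sketch of the 2³ module combinations (the module names are labels for this illustration only):

```python
from itertools import product

# Hypothetical ablation grid: each proposed module toggled on/off independently.
MODULES = ("adaptive_dual_residual", "attention_gate", "multiscale_spatial_attention")

def ablation_configs():
    """All 2^3 module combinations, from plain U-Net to the full model."""
    return [dict(zip(MODULES, flags)) for flags in product((False, True), repeat=3)]

configs = ablation_configs()
assert configs[0] == {m: False for m in MODULES}   # plain U-Net baseline
assert configs[-1] == {m: True for m in MODULES}   # full ADRUwAMS
```

Training each of the eight configurations on the same BraTS 2020 split would directly expose which module carries the reported gains.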
Original abstract
Glioma is a harmful brain tumor that requires early detection to ensure better health results. Early detection of this tumor is key for effective treatment and requires an automated segmentation process. However, it is a challenging task to find tumors due to tumor characteristics like location and size. A reliable method to accurately separate tumor zones from healthy tissues is deep learning models, which have shown promising results over the last few years. In this research, an Adaptive Dual Residual U-Net with Attention Gate and Multiscale Spatial Attention Mechanisms (ADRUwAMS) is introduced. This model is an innovative combination of adaptive dual residual networks, attention mechanisms, and multiscale spatial attention. The dual adaptive residual network architecture captures high-level semantic and intricate low-level details from brain images, ensuring precise segmentation of different tumor parts, types, and hard regions. The attention gates use gating and input signals to compute attention coefficients for the input features, and multiscale spatial attention generates scaled attention maps and combines these features to hold the most significant information about the brain tumor. We trained the model for 200 epochs using the ReLU activation function on BraTS 2020 and BraTS 2019 datasets. These improvements resulted in high accuracy for tumor detection and segmentation on BraTS 2020, achieving dice scores of 0.9229 for the whole tumor, 0.8432 for the tumor core, and 0.8004 for the enhancing tumor.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes ADRUwAMS, a U-Net variant that adds adaptive dual residual blocks, attention gates, and multiscale spatial attention for multi-class brain tumor segmentation. Trained for 200 epochs with ReLU on BraTS 2019/2020, it reports Dice scores of 0.9229 (whole tumor), 0.8432 (tumor core), and 0.8004 (enhancing tumor) on BraTS 2020.
Significance. If the reported Dice scores can be shown to result from the architectural additions rather than training or preprocessing choices, the work would represent an incremental but potentially useful contribution to medical image segmentation, where small gains in tumor core and enhancing tumor overlap can matter for radiotherapy planning. The current manuscript, however, supplies no evidence that isolates the proposed modules, so the significance remains unestablished.
Major comments (2)
- Abstract: the central performance claim (Dice 0.9229/0.8432/0.8004) is presented without any baseline (plain U-Net, nnU-Net, or prior attention U-Net) or ablation table, so the attribution of gains to the adaptive dual residual, attention gate, and multiscale spatial attention components cannot be evaluated.
- Abstract / Methods: the training protocol is limited to “200 epochs using ReLU”; no loss function, optimizer, learning-rate schedule, patch sampling strategy, intensity normalization, or data augmentation details are supplied, all of which are load-bearing for the reported BraTS numbers.
Minor comments (2)
- Abstract: the phrase “hard regions” is undefined and should be replaced by a concrete description (e.g., small or low-contrast enhancing tumor voxels).
- Abstract: the sentence beginning “This model is an innovative combination…” is promotional; replace it with a factual statement of the architectural components.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address each major comment below and will revise the manuscript to strengthen the evidence for our claims.
Point-by-point responses
Referee: Abstract: the central performance claim (Dice 0.9229/0.8432/0.8004) is presented without any baseline (plain U-Net, nnU-Net, or prior attention U-Net) or ablation table, so the attribution of gains to the adaptive dual residual, attention gate, and multiscale spatial attention components cannot be evaluated.
Authors: We agree that baseline comparisons and ablation studies are necessary to isolate the contribution of each proposed module. The revised manuscript will include a results table comparing ADRUwAMS against a standard U-Net, Attention U-Net, and nnU-Net on the BraTS 2020 validation set using identical training conditions. We will also add an ablation study that systematically removes the adaptive dual residual blocks, attention gates, and multiscale spatial attention one at a time, reporting the resulting Dice scores for whole tumor, tumor core, and enhancing tumor to quantify each component's impact.
Revision: yes.
Referee: Abstract / Methods: the training protocol is limited to “200 epochs using ReLU”; no loss function, optimizer, learning-rate schedule, patch sampling strategy, intensity normalization, or data augmentation details are supplied, all of which are load-bearing for the reported BraTS numbers.
Authors: We acknowledge the lack of detail in the current description. The revised Methods section will specify the complete protocol: combined Dice and cross-entropy loss, Adam optimizer with initial learning rate 1e-4 and cosine annealing, 128×128×128 patch sampling with overlap, per-modality z-score normalization, and augmentations consisting of random rotations, flips, scaling, and intensity shifts. These additions will make the experimental setup fully reproducible and allow readers to assess whether the reported scores depend on the architecture or on training choices.
Revision: yes.
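Two of the protocol ingredients named in the rebuttal can be sketched concretely — per-modality z-score normalization over brain voxels and a combined soft-Dice plus cross-entropy loss. This is a hedged NumPy illustration; the exact formulations in the revised paper may differ:

```python
import numpy as np

def zscore_normalize(volume, eps=1e-8):
    """Per-modality z-score normalization over nonzero (brain) voxels."""
    mask = volume != 0
    mu, sigma = volume[mask].mean(), volume[mask].std()
    out = np.zeros_like(volume, dtype=float)
    out[mask] = (volume[mask] - mu) / (sigma + eps)
    return out

def dice_ce_loss(probs, target, eps=1e-7):
    """Combined soft-Dice + binary cross-entropy loss for one channel.
    probs, target: arrays in [0, 1] of the same shape."""
    inter = (probs * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    ce = -np.mean(target * np.log(probs + eps)
                  + (1.0 - target) * np.log(1.0 - probs + eps))
    return dice + ce

vol = np.random.default_rng(2).random((8, 8, 8))
norm = zscore_normalize(vol)
```

Normalizing only over nonzero voxels avoids letting the empty background dominate the statistics, a common convention in BraTS pipelines.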
Circularity Check
No significant circularity: empirical results with no derivation chain
Full rationale
The paper introduces the ADRUwAMS architecture (adaptive dual residual U-Net plus attention gates and multiscale spatial attention) and reports empirical Dice scores on BraTS 2020/2019 after 200 epochs of training. No equations, first-principles derivations, or predictive claims appear in the provided text; performance numbers are direct measurements on public data rather than outputs derived from fitted parameters or self-referential definitions. No self-citation load-bearing steps, ansatz smuggling, or renaming of known results reduce any central claim to its own inputs. The absence of ablations affects causal attribution but does not create circularity in a non-existent derivation chain.
Axiom & Free-Parameter Ledger
Free parameters (1)
- training hyperparameters (learning rate, batch size, 200 epochs)
Axioms (1)
- Domain assumption: the BraTS 2019/2020 datasets are sufficiently representative for clinical generalization.
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "ADRUwAMS ... adaptive dual residual networks, attention mechanisms, and multiscale spatial attention ... trained ... on BraTS 2020 ... dice scores of 0.9229 / 0.8432 / 0.8004"
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "dual adaptive residual network architecture captures high-level semantic and intricate low-level details"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.