pith. machine review for the scientific record.

arxiv: 2604.15542 · v1 · submitted 2026-04-16 · 💻 cs.CV · cs.LG

Recognition: unknown

UA-Net: Uncertainty-Aware Network for TRISO Image Semantic Segmentation

Authors on Pith · no claims yet

Pith reviewed 2026-05-10 10:50 UTC · model grok-4.3

classification 💻 cs.CV cs.LG
keywords TRISO · semantic segmentation · uncertainty estimation · deep learning · nuclear fuel · post-irradiation examination · image analysis · coated particle fuels

The pith

UA-Net segments TRISO fuel micrographs into five regions at 95.5 percent mIoU while a meta-model flags uncertain predictions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes UA-Net, a deep learning framework that segments characteristic regions in TRISO particle cross-section images and produces uncertainty maps for those segmentations. Manual expert review of thousands of such images after high-temperature neutron irradiation is tedious and subjective, so automation would speed up assessment of coating integrity and fission product retention. The approach relies on multi-stage pretraining that first learns general features from ImageNet and then adapts to TRISO micrographs from multiple irradiation experiments, plus a separate meta-model trained to detect misclassifications. On a held-out test set of 102 images, the model reaches 95.5 percent mean IoU and 97.3 percent mean precision, while the meta-model reaches 91.8 percent specificity and 93.5 percent sensitivity.
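For readers unfamiliar with the headline numbers, mean IoU and mean precision follow from a per-class confusion matrix over all labeled pixels. A minimal sketch of that computation (illustrative only; the paper's evaluation code is not shown, and the toy labels below are placeholders):

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes):
    """Mean IoU and mean precision from flat pixel-label arrays.

    Hypothetical sketch, not the paper's implementation.
    """
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp   # predicted as class c, but truth differs
    fn = cm.sum(axis=1) - tp   # truly class c, but predicted otherwise
    iou = tp / np.maximum(tp + fp + fn, 1)
    precision = tp / np.maximum(tp + fp, 1)
    return iou.mean(), precision.mean()

# Toy five-class example standing in for the five TRISO regions:
true = np.array([0, 1, 2, 3, 4, 4])
pred = np.array([0, 1, 2, 3, 4, 3])
miou, mp = segmentation_metrics(true, pred, num_classes=5)
```

Averaging IoU over classes rather than pixels is what keeps a thin layer such as IPyC from being swamped by the background class in the headline score.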

Core claim

UA-Net performs semantic segmentation of five regions in TRISO fuel micrographs using a multi-stage pretraining strategy: ImageNet pretraining followed by fine-tuning on TRISO images from various irradiation experiments and AGR-5/6/7 particles. An integrated meta-model predicts uncertainty to identify small defects. The model achieves 95.5 percent mIoU and 97.3 percent mP on 102 test images, with the meta-model showing 91.8 percent specificity and 93.5 percent sensitivity; the model also extracts layer regions accurately on new qualitative images.
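The meta-model's specificity and sensitivity treat misclassified pixels as the positive class: sensitivity is the fraction of truly wrong pixels it flags, specificity the fraction of correct pixels it leaves unflagged. A hedged sketch of how those two rates are typically derived (function and variable names are ours, not the paper's):

```python
import numpy as np

def detector_spec_sens(error_true, error_flagged):
    """Specificity and sensitivity of a misclassification detector.

    Illustrative sketch: positives are pixels the segmentation
    model got wrong; the meta-model tries to flag them.
    """
    error_true = np.asarray(error_true, dtype=bool)
    error_flagged = np.asarray(error_flagged, dtype=bool)
    tp = np.sum(error_true & error_flagged)    # wrong pixel, flagged
    fn = np.sum(error_true & ~error_flagged)   # wrong pixel, missed
    tn = np.sum(~error_true & ~error_flagged)  # correct pixel, left alone
    fp = np.sum(~error_true & error_flagged)   # correct pixel, flagged anyway
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return specificity, sensitivity

spec, sens = detector_spec_sens(
    [1, 1, 0, 0, 0, 0, 1, 0],   # ground-truth misclassification mask
    [1, 0, 0, 0, 1, 0, 1, 0],   # meta-model flags
)
```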

What carries the argument

UA-Net, a segmentation network with multi-stage pretraining and an integrated meta-model that generates uncertainty maps to flag misclassifications.

Load-bearing premise

The multi-stage pretraining on ImageNet followed by fine-tuning on TRISO micrographs produces features that generalize to new particle cross-sections without significant domain shift or label noise.

What would settle it

Evaluating the model on a fresh set of TRISO cross-section images from a previously unseen irradiation campaign and obtaining mIoU below 90 percent or meta-model sensitivity below 85 percent would show the claimed generalization does not hold.

Figures

Figures reproduced from arXiv: 2604.15542 by John D. Stempien, Kyle Lucke, Lu Cai, Min Xian, Shoukun Sun, Zuzanna Krajewska-Travar.

Figure 1
Figure 1. (a) Loose TRISO particles post-irradiation. (b) Optical microscopic image of a TRISO particle cross-section. (c) An AGR-5/6/7 fuel compact. view at source ↗
Figure 2
Figure 2. Disc mount representing one-quarter of each AGR-5/6/7 compact. Also shown is an example TRISO particle taken from each compact. view at source ↗
Figure 3
Figure 3. TRISO particle cross sections, representing varying quality of optical microscopy imaging. view at source ↗
Figure 4
Figure 4. Architecture of the proposed framework. (a) Overall architecture of the segmentation model. (b) Overall architecture of the meta-model for UQ. (c) Encoder block structure. (d) Decoder block structure. Solid lines indicate the forward process of the model; dashed lines represent skip connections. In the segmentation map, red indicates the kernel layer, green the buffer layer, blue the IPyC layer, yellow the… view at source ↗
Figure 5
Figure 5. Qualitative results of the model, with and without fine-tuning. Without fine-tuning, the model struggles to accurately segment the OPyC layer, which was absent from a large number of the AGR-2 images. view at source ↗
Figure 6
Figure 6. Segmentation results obtained by applying the proposed method to varying levels of image quality. view at source ↗
Figure 7
Figure 7. Uncertainty results obtained from the meta-model. In the error column, purple and yellow represent pixels that are correctly and incorrectly predicted by the segmentation model, respectively. In the uncertainty column, blue, white, and red represent pixels predicted to have low, medium, and high uncertainty, respectively. The insets below each image show enlarged local details. view at source ↗
Figure 8
Figure 8. Segmentation results obtained from the proposed UA-Net model and the seven other models. The insets below each image show enlarged local details. view at source ↗
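Figure 7's three-band colour scheme amounts to thresholding a per-pixel uncertainty score. A toy sketch, assuming scores normalized to [0, 1] and thresholds chosen purely for illustration (the paper does not state its cut-offs):

```python
import numpy as np

def band_uncertainty(u, low=0.33, high=0.66):
    """Map per-pixel uncertainty scores in [0, 1] to three bands,
    mirroring Figure 7's colour scheme: 0 = low (blue),
    1 = medium (white), 2 = high (red). Thresholds are illustrative.
    """
    u = np.asarray(u, dtype=float)
    bands = np.zeros(u.shape, dtype=np.int8)
    bands[u >= low] = 1   # medium band
    bands[u >= high] = 2  # high band overrides medium
    return bands

bands = band_uncertainty([0.05, 0.4, 0.9, 0.7])
```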
read the original abstract

Tristructural isotropic (TRISO)-coated particle fuels undergo dimensional changes and chemical reactions during high-temperature neutron irradiation. Post-irradiation materialography helps understand processes that impact fuel performance, such as coating integrity and fission product retention. Conventionally, experts manually evaluate features in thousands of cross sections of sub-mm-sized samples, which is tedious and subjective. In this work, we propose UA-Net, a deep learning framework that segments five characteristic regions of TRISO fuel micrographs and generates an uncertainty map for predictions. The model uses a multi-stage pretraining strategy, starting with general image representations learned from ImageNet, followed by fine-tuning on TRISO micrographs from various irradiation experiments and AGR-5/6/7 particle cross sections. A meta-model for uncertainty prediction is integrated to identify small defects in TRISO images. UA-Net was evaluated on a test set of 102 images, achieving mean Intersection over Union (mIoU) and mean Precision (mP) of 95.5% and 97.3%, respectively. The meta-model achieved a specificity of 91.8% and sensitivity of 93.5%, demonstrating strong performance in detecting misclassifications. The model was also applied to new TRISO images for qualitative evaluation, showing high accuracy in extracting layer regions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes UA-Net, a deep learning framework for semantic segmentation of five characteristic regions in TRISO-coated particle fuel micrographs. It employs a multi-stage pretraining strategy (ImageNet followed by fine-tuning on TRISO data from various irradiation experiments and AGR-5/6/7 particles), integrates a meta-model to generate uncertainty maps and detect misclassifications, and reports mIoU of 95.5% and mean precision of 97.3% on a held-out test set of 102 images, with the meta-model achieving 93.5% sensitivity and 91.8% specificity; qualitative results are also shown on additional new TRISO images.

Significance. If the generalization claims hold under more rigorous testing, the work could meaningfully reduce the manual effort and subjectivity involved in post-irradiation examination of thousands of sub-mm TRISO particle cross-sections, aiding analysis of coating integrity and fission product retention in nuclear fuel research. The uncertainty-aware component is a constructive addition for practical reliability.

major comments (2)
  1. [Abstract] The reported mIoU (95.5%) and mP (97.3%) on the 102-image test set, along with meta-model sensitivity/specificity, are presented without error bars, baseline comparisons (e.g., to standard U-Net or other segmentation models), cross-validation details, or information on data splits and exclusion criteria. This undermines verification of the central performance claims and leaves open the possibility of post-hoc selection or overfitting.
  2. [Abstract, evaluation] Quantitative metrics are provided exclusively for the 102-image test set, while application to 'new TRISO images' is limited to qualitative inspection. This does not provide numerical evidence for generalization across domain shifts from different irradiation experiments, which is load-bearing for the multi-stage pretraining strategy and the claim of applicability to unseen particle cross-sections.
minor comments (2)
  1. The abstract and methods would benefit from explicit statements on the network architecture details, loss functions, and training hyperparameters to improve reproducibility.
  2. Figure captions for qualitative results on new images could include more context on the specific defects or regions highlighted to aid reader interpretation.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The comments highlight important aspects of rigor in reporting performance metrics and generalization. We address each major comment point by point below, indicating planned revisions where appropriate.

read point-by-point responses
  1. Referee: [Abstract] The reported mIoU (95.5%) and mP (97.3%) on the 102-image test set, along with meta-model sensitivity/specificity, are presented without error bars, baseline comparisons (e.g., to standard U-Net or other segmentation models), cross-validation details, or information on data splits and exclusion criteria. This undermines verification of the central performance claims and leaves open the possibility of post-hoc selection or overfitting.

    Authors: We agree that the abstract and evaluation sections would benefit from greater transparency. The full manuscript describes the 102-image test set as a held-out collection drawn from the overall pool of TRISO micrographs (including data from multiple irradiation experiments), but we will revise both the abstract and the methods/evaluation sections to report error bars (standard deviation across repeated training runs with different random seeds), direct baseline comparisons against a standard U-Net and at least one additional segmentation architecture, k-fold cross-validation results on the training portion, and explicit details on the train/validation/test split ratios together with any exclusion criteria applied to images from the various sources. These additions will allow independent verification and reduce concerns about post-hoc selection. revision: yes

  2. Referee: [Abstract, evaluation] Quantitative metrics are provided exclusively for the 102-image test set, while application to 'new TRISO images' is limited to qualitative inspection. This does not provide numerical evidence for generalization across domain shifts from different irradiation experiments, which is load-bearing for the multi-stage pretraining strategy and the claim of applicability to unseen particle cross-sections.

    Authors: The new TRISO images referenced in the abstract and results are drawn from irradiation experiments outside the training and test distributions, and ground-truth pixel-level annotations were unavailable for those particular images, limiting us to qualitative demonstration. The held-out test set of 102 images already incorporates micrographs from multiple distinct irradiation campaigns and AGR-5/6/7 particles, providing some quantitative support for the multi-stage pretraining approach. In the revision we will expand the methods section to quantify the diversity of source experiments in the pretraining data and, where feasible, add a small number of newly annotated images from an additional experiment to supply numerical generalization metrics. We will also clarify the limitation that full quantitative evaluation on every new domain requires fresh annotations. revision: partial
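The error bars the rebuttal promises, a mean and standard deviation of mIoU across repeated training runs with different random seeds, reduce to a one-line summary; the seed-level values below are invented purely for illustration:

```python
import numpy as np

def summarize_runs(miou_per_seed):
    """Mean and sample standard deviation of mIoU across repeated
    training runs, as the rebuttal proposes to report.
    The input values are illustrative, not from the paper.
    """
    runs = np.asarray(miou_per_seed, dtype=float)
    # ddof=1 gives the sample (Bessel-corrected) standard deviation.
    return runs.mean(), runs.std(ddof=1)

mean_miou, std_miou = summarize_runs([0.951, 0.957, 0.954])
```

A result would then be quoted as mean ± std (here 0.954 ± 0.003) rather than a single point estimate, which is what the referee's overfitting concern asks for.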

Circularity Check

0 steps flagged

No significant circularity; empirical metrics on held-out test set provide independent grounding

full rationale

The paper describes a standard multi-stage pretraining pipeline for a segmentation network followed by evaluation on an explicitly separated test set of 102 images, with additional qualitative checks on new images. No equations, uniqueness theorems, or self-referential definitions appear in the abstract or described workflow. Performance numbers are reported as direct measurements on held-out data rather than derived by construction from the training inputs or prior self-citations. The meta-model for uncertainty is presented as an integrated component whose outputs are validated separately, without evidence that its training reduces to the same fitted values used for the main claims.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The reported performance rests on the unverified assumption that the chosen training distribution and uncertainty meta-model training procedure produce reliable generalization and error detection on unseen TRISO images; no independent evidence for these assumptions is supplied in the abstract.

free parameters (1)
  • Network architecture and training hyperparameters
    Standard deep-learning parameters optimized during ImageNet pretraining and TRISO fine-tuning; their specific values are not reported.
axioms (2)
  • domain assumption ImageNet pretraining yields transferable features for microscopic TRISO images
    Invoked by the multi-stage pretraining strategy described in the abstract.
  • domain assumption The 102-image test set is representative of future TRISO micrographs from new irradiation experiments
    Required for the generalization claim implicit in the reported metrics.

pith-pipeline@v0.9.0 · 5548 in / 1584 out tokens · 85136 ms · 2026-05-10T10:50:10.959954+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Reference graph

Works this paper leans on

60 extracted references · 5 canonical work pages · 1 internal anchor

  1. [1]

    Demkowicz, P. A. & Hunn, J. D. Two-decade DOE investment lays foundation for TRISO-fueled reactors in the US. Nuclear News 63, 66–77 (2020)

  2. [2]

    Demkowicz, P. A., Liu, B. & Hunn, J. D. Coated particle fuel: Historical perspectives and current progress. Journal of Nuclear Materials 515, 434–450 (2019)

  3. [3]

    Petti, D. et al. The DOE Advanced Gas Reactor Fuel Development and Qualification Program. JOM 62, 62–66 (2010)

  4. [4]

    Kercher, A. K., Hunn, J. D., Price, J. R. & Pappano, P. Automated optical microscopy of coated particle fuel. Journal of Nuclear Materials 380, 76–84 (2008)

  5. [5]

    Zhang, H. et al. Design of a deep learning visual system for the thickness measurement of each coating layer of TRISO-coated fuel particles. Measurement (Lond). 191, (2022)

  6. [6]

    Hu, Z. et al. A context-ensembled refinement network for image segmentation of coated fuel particles. Applied Soft Computing Journal 162, 111835 (2024)

  7. [7]

    Cai, L. et al. RU-net for automatic characterization of TRISO fuel cross sections. Mater. Charact. 232, (2026)

  8. [8]

    Ronneberger, O., Fischer, P. & Brox, T. U-net: Convolutional networks for biomedical image segmentation. in MICCAI vol. 9351 234–241 (2015)

  9. [9]

    Zhang, Z., Liu, Q. & Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geoscience and Remote Sensing Letters 15, 749–753 (2018)

  10. [10]

    Oktay, O. et al. Attention U-Net: Learning Where to Look for the Pancreas. in Medical Imaging with Deep Learning (2018)

  11. [11]

    Long, J., Shelhamer, E. & Darrell, T. Fully Convolutional Networks for Semantic Segmentation. in CVPR (2015)

  12. [12]

    Guan, S., Khan, A. A., Sikdar, S. & Chitnis, P. V. Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal. IEEE J. Biomed. Health Inform. 24, 568–576 (2020)

  13. [13]

    Cui, R., Yang, R., Liu, F. & Geng, H. HD2A-Net: A novel dual gated attention network using comprehensive hybrid dilated convolutions for medical image segmentation. Comput. Biol. Med. 152, 106384 (2023)

  14. [14]

    Zhou, Z., Rahman Siddiquee, M. M., Tajbakhsh, N. & Liang, J. Unet++: A nested u-net architecture for medical image segmentation. in LNCS vol. 11045 LNCS 3–11 (2018)

  15. [15]

    Chen, L.-C., Papandreou, G., Schroff, F. & Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. http://arxiv.org/abs/1706.05587 (2017)

  16. [16]

    Howard, A. et al. Searching for MobileNetV3. in ICCV (2019)

  17. [17]

    Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. & Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. in CVPR (2018)

  18. [18]

    Yang, T.-J. et al. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications. in ECCV (2018)

  19. [19]

    Cao, H. et al. Swin-Unet: Unet-Like Pure Transformer for Medical Image Segmentation. Lecture Notes in Computer Science 13803 LNCS, 205–218 (2023)

  20. [20]

    Strudel, R., Garcia, R., Laptev, I. & Schmid, C. Segmenter: Transformer for Semantic Segmentation. in ICCV (2021)

  21. [21]

    Dosovitskiy, A. et al. An image is worth 16x16 words: Transformers for image recognition at scale. in ICLR (2021)

  22. [22]

    Xu, G., Zhang, X., He, X. & Wu, X. LeViT-UNet: Make Faster Encoders with Transformer for Medical Image Segmentation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 14432 LNCS, 42–53 (2024)

  23. [23]

    Hatamizadeh, A. et al. UNETR: Transformers for 3D Medical Image Segmentation. in WACV (2022)

  24. [24]

    Chen, J. et al. TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med. Image Anal. 97, (2024)

  25. [25]

    Zhang, Y., Liu, H. & Hu, Q. TransFuse: Fusing Transformers and CNNs for Medical Image Segmentation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 12901 LNCS, 14–24 (2021)

  26. [26]

    Manzari, O. N., Kaleybar, J. M., Saadat, H. & Maleki, S. BEFUnet: A Hybrid CNN-Transformer Architecture for Precise Medical Image Segmentation. http://arxiv.org/abs/2402.08793 (2024)

  27. [27]

    Liu, X., Hu, Y. & Chen, J. Hybrid CNN-Transformer model for medical image segmentation with pyramid convolution and multi-layer perceptron. Biomed. Signal Process. Control 86, 105331 (2023)

  28. [28]

    Hendrycks, D. & Gimpel, K. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. in ICLR (2017)

  29. [29]

    Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On Calibration of Modern Neural Networks. in ICML (2017)

  30. [30]

    Lakshminarayanan, B., Pritzel, A. & Blundell, C. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. in NIPS (2017)

  31. [31]

    Ovadia, Y. et al. Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift. in NeurIPS (2019)

  32. [32]

    Laurent, O. et al. Packed-Ensembles for Efficient Uncertainty Estimation. in ICLR (2023)

  33. [33]

    Wen, Y., Tran, D. & Ba, J. BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning. in ICLR (2020)

  34. [34]

    Havasi, M. et al. Training Independent Subnetworks for Robust Prediction. in ICLR (2021)

  35. [35]

    Durasov, N., Bagautdinov, T., Baque, P. & Fua, P. Masksembles for Uncertainty Estimation. in CVPR (2021)

  36. [36]

    Blundell, C., Cornebise, J., Kavukcuoglu, K. & Wierstra, D. Weight Uncertainty in Neural Networks. in ICML (2015)

  37. [37]

    Shen, M. et al. Post-hoc Uncertainty Learning Using a Dirichlet Meta-Model. in AAAI (2023)

  38. [38]

    Chen, T., Navrátil, J., Iyengar, V. & Shanmugam, K. Confidence Scoring Using Whitebox Meta-models with Linear Classifier Probes. in AISTATS (2019)

  39. [39]

    Lucke, K., Vakanski, A. & Xian, M. Soft-Label Supervised Meta-Model with Adversarial Samples for Uncertainty Quantification. Computers (2025) doi:10.3390/computers14010012

  40. [40]

    Corbière, C., Thome, N., Bar-Hen, A., Cord, M. & Pérez, P. Addressing Failure Prediction by Learning Model Confidence. in NeurIPS (2019)

  41. [41]

    Stempien, J. D., Plummer, M. A., Schulthess, J. L. & Demkowicz, P. A. Measurement of kernel swelling and buffer densification in irradiated AGR-2 UCO and UO2 TRISO fuels. in 10th International Topical Meeting on High Temperature Reactor Technology (2021)

  42. [42]

    Stempien, J. D., Plummer, M. A. & Schulthess, J. L. Measurement of Kernel Swelling and Buffer Densification in Irradiated UCO and UO2 TRISO Fuel Particles from AGR-2. http://www.inl.gov (2019)

  43. [43]

    Idaho National Laboratory. SPC-1352, AGR-5/6/7 Fuel Specification. (2017)

  44. [44]

    Pham, B. T. et al. AGR 5/6/7 Irradiation Test Final As-Run Report. http://www.ART.INL.gov (2021)

  45. [45]

    Rice, F. J., Stempien, J. D. & Demkowicz, P. A. Ceramography of irradiated TRISO fuel from the AGR-2 experiment. Nuclear Engineering and Design 329, 73–81 (2018)

  46. [46]

    He, K., Zhang, X., Ren, S. & Sun, J. Deep Residual Learning for Image Recognition. in CVPR (2016)

  47. [47]

    Ioffe, S. & Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. in ICML (PMLR, 2015)

  48. [48]

    Householder, A. S. A theory of steady-state activity in nerve-fiber networks: I. Definitions and preliminary lemmas. Bull. Math. Biophys. 3, 63–69 (1941)

  49. [49]

    Milletari, F., Navab, N. & Ahmadi, S. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. in 3DV 565–571 (2016) doi:10.1109/3DV.2016.79

  50. [50]

    Yang, Y., Zha, K., Chen, Y.-C., Wang, H. & Katabi, D. Delving into Deep Imbalanced Regression. in International Conference on Machine Learning (PMLR, 2021)

  51. [51]

    Deng, J. et al. ImageNet: A Large-Scale Hierarchical Image Database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009 248–255 (2009) doi:10.1109/CVPR.2009.5206848

  52. [52]

    Kingma, D. P. & Ba, J. Adam: A Method for Stochastic Optimization. in ICLR (2017)

  53. [53]

    Li, Q., Yang, W., Liu, W., Yu, Y. & He, S. From Contexts to Locality: Ultra-high Resolution Image Segmentation via Locality-aware Contextual Correlation. in ICCV (2021)

  54. [54]

    Chen, W., Jiang, Z., Wang, Z., Cui, K. & Qian, X. Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images. in CVPR (2019)

  55. [55]

    Guo, S. et al. ISDNet: Integrating Shallow and Deep Networks for Efficient Ultra-high Resolution Segmentation. in CVPR (2022)

  56. [56]

    Li, Y. et al. MFVNet: a deep adaptive fusion network with multiple field-of-views for remote sensing image semantic segmentation. Science China Information Sciences 66, (2023)

  57. [57]

    Huynh, C., Tran, A. T., Luu, K. & Hoai, M. Progressive Semantic Segmentation. in CVPR (2021)

  58. [58]

    Zhu, G. et al. RFNet: A Refinement Network for Semantic Segmentation. in ICPR (2022)

  59. [59]

    Kirillov, A., Wu, Y., He, K. & Girshick, R. PointRend: Image Segmentation as Rendering. in CVPR (2020)

  60. [60]

    Deng, L. et al. Irregular adaptive refinement network for semantic segmentation of high-resolution remote sensing images. Journal of Intelligent and Fuzzy Systems 46, 11235–11246 (2024)