UA-Net: Uncertainty-Aware Network for TRISO Image Semantic Segmentation
Pith reviewed 2026-05-10 10:50 UTC · model grok-4.3
The pith
UA-Net segments TRISO fuel micrographs into five regions at 95.5 percent mIoU while a meta-model flags uncertain predictions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
UA-Net performs semantic segmentation of five regions in TRISO fuel micrographs, using a multi-stage pretraining strategy: general representations learned on ImageNet, followed by fine-tuning on TRISO images from various irradiation experiments and AGR-5/6/7 particles. An integrated meta-model predicts uncertainty to flag small defects. The network achieves 95.5 percent mIoU and 97.3 percent mP on 102 test images, with the meta-model showing 91.8 percent specificity and 93.5 percent sensitivity; on additional qualitative images, the model also extracts layer regions accurately.
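The headline metrics follow standard confusion-matrix definitions. A minimal sketch of how mIoU and mean precision are conventionally computed; the paper's exact averaging rules (e.g. for classes absent from an image) are assumptions here, not taken from the paper:

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, n_classes):
    """Mean IoU and mean precision over classes from two label maps.

    Standard definitions; a sketch, not the paper's evaluation code.
    """
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp  # predicted class c, true class differs
    fn = cm.sum(axis=1) - tp  # true class c, predicted class differs
    miou = np.mean(tp / np.maximum(tp + fp + fn, 1))
    mp = np.mean(tp / np.maximum(tp + fp, 1))
    return miou, mp
```

The `np.maximum(..., 1)` guard simply avoids division by zero for a class absent from both maps; how such classes should actually be averaged is one of the unstated details.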
What carries the argument
UA-Net, a segmentation network with multi-stage pretraining and an integrated meta-model that generates uncertainty maps to flag misclassifications.
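As a point of comparison for the learned meta-model, the simplest uncertainty map is per-pixel predictive entropy of the softmax output. This generic baseline is not the paper's method; it only illustrates what an uncertainty map encodes:

```python
import numpy as np

def entropy_uncertainty_map(probs):
    """Per-pixel predictive entropy from softmax probabilities.

    A simple stand-in for an uncertainty map; the paper instead trains
    a dedicated meta-model, so this is only an illustrative baseline.
    probs has shape (C, H, W) and sums to 1 over the class axis.
    """
    p = np.clip(probs, 1e-12, 1.0)       # avoid log(0)
    return -(p * np.log(p)).sum(axis=0)  # (H, W) entropy map
```

High entropy marks pixels where the class posterior is spread out, which is where small defects or boundary errors would be expected to concentrate.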
Load-bearing premise
The multi-stage pretraining on ImageNet followed by fine-tuning on TRISO micrographs produces features that generalize to new particle cross-sections without significant domain shift or label noise.
What would settle it
Evaluating the model on a fresh set of TRISO cross-section images from a previously unseen irradiation campaign and obtaining mIoU below 90 percent or meta-model sensitivity below 85 percent would show the claimed generalization does not hold.
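The thresholds above use conventional binary-detection metrics for the meta-model, treating "the segmentation is wrong here" as the positive class (an assumed but standard framing):

```python
def flag_metrics(is_error, is_flagged):
    """Sensitivity and specificity of an uncertainty flag.

    Positive class = misclassified prediction; a generic sketch of
    the metrics quoted for the meta-model, not the paper's code.
    """
    tp = fn = tn = fp = 0
    for err, flag in zip(is_error, is_flagged):
        if err and flag:
            tp += 1        # wrong prediction, correctly flagged
        elif err:
            fn += 1        # wrong prediction, missed
        elif flag:
            fp += 1        # correct prediction, falsely flagged
        else:
            tn += 1        # correct prediction, not flagged
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity
```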
Original abstract
Tristructural isotropic (TRISO)-coated particle fuels undergo dimensional changes and chemical reactions during high-temperature neutron irradiation. Post-irradiation materialography helps understand processes that impact fuel performance, such as coating integrity and fission product retention. Conventionally, experts manually evaluate features in thousands of cross sections of sub-mm-sized samples, which is tedious and subjective. In this work, we propose UA-Net, a deep learning framework that segments five characteristic regions of TRISO fuel micrographs and generates an uncertainty map for predictions. The model uses a multi-stage pretraining strategy, starting with general image representations learned from ImageNet, followed by fine-tuning on TRISO micrographs from various irradiation experiments and AGR-5/6/7 particle cross sections. A meta-model for uncertainty prediction is integrated to identify small defects in TRISO images. UA-Net was evaluated on a test set of 102 images, achieving mean Intersection over Union (mIoU) and mean Precision (mP) of 95.5% and 97.3%, respectively. The meta-model achieved a specificity of 91.8% and sensitivity of 93.5%, demonstrating strong performance in detecting misclassifications. The model was also applied to new TRISO images for qualitative evaluation, showing high accuracy in extracting layer regions.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes UA-Net, a deep learning framework for semantic segmentation of five characteristic regions in TRISO-coated particle fuel micrographs. It employs a multi-stage pretraining strategy (ImageNet followed by fine-tuning on TRISO data from various irradiation experiments and AGR-5/6/7 particles), integrates a meta-model to generate uncertainty maps and detect misclassifications, and reports mIoU of 95.5% and mean precision of 97.3% on a held-out test set of 102 images, with the meta-model achieving 93.5% sensitivity and 91.8% specificity; qualitative results are also shown on additional new TRISO images.
Significance. If the generalization claims hold under more rigorous testing, the work could meaningfully reduce the manual effort and subjectivity involved in post-irradiation examination of thousands of sub-mm TRISO particle cross-sections, aiding analysis of coating integrity and fission product retention in nuclear fuel research. The uncertainty-aware component is a constructive addition for practical reliability.
Major comments (2)
- [Abstract] The reported mIoU (95.5%) and mP (97.3%) on the 102-image test set, along with meta-model sensitivity/specificity, are presented without error bars, baseline comparisons (e.g., to a standard U-Net or other segmentation models), cross-validation details, or information on data splits and exclusion criteria. This undermines verification of the central performance claims and leaves open the possibility of post-hoc selection or overfitting.
- [Abstract, evaluation] Quantitative metrics are provided exclusively for the 102-image test set, while application to 'new TRISO images' is limited to qualitative inspection. This provides no numerical evidence for generalization across domain shifts from different irradiation experiments, which is load-bearing for the multi-stage pretraining strategy and the claim of applicability to unseen particle cross-sections.
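The error bars the first comment asks for are typically reported as mean and sample standard deviation of a metric across repeated training runs with different random seeds. A generic sketch (the seed counts and numbers below are hypothetical, not from the paper):

```python
import statistics

def summarize_runs(scores):
    """Mean and sample standard deviation of one metric across
    repeated training runs with different random seeds."""
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return mean, sd

# Hypothetical mIoU values from five seeds, reported as mean +/- sd:
# summarize_runs([0.951, 0.957, 0.954, 0.949, 0.956])
```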
Minor comments (2)
- The abstract and methods would benefit from explicit statements on the network architecture details, loss functions, and training hyperparameters to improve reproducibility.
- Figure captions for qualitative results on new images could include more context on the specific defects or regions highlighted to aid reader interpretation.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comments highlight important aspects of rigor in reporting performance metrics and generalization. We address each major comment point by point below, indicating planned revisions where appropriate.
Point-by-point responses
Referee: [Abstract] The reported mIoU (95.5%) and mP (97.3%) on the 102-image test set, along with meta-model sensitivity/specificity, are presented without error bars, baseline comparisons (e.g., to a standard U-Net or other segmentation models), cross-validation details, or information on data splits and exclusion criteria. This undermines verification of the central performance claims and leaves open the possibility of post-hoc selection or overfitting.
Authors: We agree that the abstract and evaluation sections would benefit from greater transparency. The full manuscript describes the 102-image test set as a held-out collection drawn from the overall pool of TRISO micrographs (including data from multiple irradiation experiments), but we will revise both the abstract and the methods/evaluation sections to report error bars (standard deviation across repeated training runs with different random seeds), direct baseline comparisons against a standard U-Net and at least one additional segmentation architecture, k-fold cross-validation results on the training portion, and explicit details on the train/validation/test split ratios together with any exclusion criteria applied to images from the various sources. These additions will allow independent verification and reduce concerns about post-hoc selection. revision: yes
Referee: [Abstract, evaluation] Quantitative metrics are provided exclusively for the 102-image test set, while application to 'new TRISO images' is limited to qualitative inspection. This provides no numerical evidence for generalization across domain shifts from different irradiation experiments, which is load-bearing for the multi-stage pretraining strategy and the claim of applicability to unseen particle cross-sections.
Authors: The new TRISO images referenced in the abstract and results are drawn from irradiation experiments outside the training and test distributions, and ground-truth pixel-level annotations were unavailable for those particular images, limiting us to qualitative demonstration. The held-out test set of 102 images already incorporates micrographs from multiple distinct irradiation campaigns and AGR-5/6/7 particles, providing some quantitative support for the multi-stage pretraining approach. In the revision we will expand the methods section to quantify the diversity of source experiments in the pretraining data and, where feasible, add a small number of newly annotated images from an additional experiment to supply numerical generalization metrics. We will also clarify the limitation that full quantitative evaluation on every new domain requires fresh annotations. revision: partial
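The k-fold cross-validation the authors promise for the training portion can be sketched generically as deterministic index splits over the training images (an illustrative sketch, not their actual pipeline):

```python
import random

def kfold_indices(n_items, k=5, seed=0):
    """Deterministic k-fold train/validation splits over image indices.

    A generic sketch of cross-validation on the training portion;
    not the authors' actual data pipeline.
    """
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)      # fixed seed => reproducible folds
    folds = [idx[i::k] for i in range(k)] # k near-equal interleaved folds
    for i, val in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```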
Circularity Check
No significant circularity; empirical metrics on held-out test set provide independent grounding
Full rationale
The paper describes a standard multi-stage pretraining pipeline for a segmentation network followed by evaluation on an explicitly separated test set of 102 images, with additional qualitative checks on new images. No equations, uniqueness theorems, or self-referential definitions appear in the abstract or described workflow. Performance numbers are reported as direct measurements on held-out data rather than derived by construction from the training inputs or prior self-citations. The meta-model for uncertainty is presented as an integrated component whose outputs are validated separately, without evidence that its training reduces to the same fitted values used for the main claims.
Axiom & Free-Parameter Ledger
Free parameters (1)
- Network architecture and training hyperparameters
Axioms (2)
- Domain assumption: ImageNet pretraining yields transferable features for microscopic TRISO images
- Domain assumption: The 102-image test set is representative of future TRISO micrographs from new irradiation experiments