UnGAP: Uncertainty-Guided Affine Prompting for Real-Time Crack Segmentation
Recognition: 3 Lean theorem links
Pith reviewed 2026-05-08 19:09 UTC · model grok-4.3
The pith
Treating aleatoric uncertainty as an active prompt via pixel-wise affine transformations closes the loop to improve real-time crack segmentation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
UnGAP builds a closed-loop system in which the Uncertainty-Prompted Feature Modulator treats aleatoric uncertainty as an active visual prompt and applies pixel-wise affine transformations to dynamically calibrate feature distributions, converting the gradient-suppression effect of high variance into a constructive rectification signal for ambiguous crack regions, with an added boundary-aware detection head to tighten prediction precision.
What carries the argument
The Uncertainty-Prompted Feature Modulator (UPFM), which uses predicted uncertainty to drive pixel-wise affine transformations that recalibrate features and reverse the heteroscedastic optimization pathology.
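The modulation the review describes can be sketched in a few lines. This is a minimal NumPy illustration of the quoted formula F_refined = F_in ⊙ (1+γ) + ω, with γ and ω produced by 1×1 convolutions over the uncertainty map; the shapes and the NumPy realization are our assumptions, not the authors' implementation.

```python
import numpy as np

def upfm_modulate(features, uncertainty, w_gamma, w_omega):
    """Sketch of the Uncertainty-Prompted Feature Modulator (illustrative).

    features:    (C, H, W) feature map F_in
    uncertainty: (1, H, W) per-pixel variance map
    w_gamma, w_omega: (C, 1) weights of the 1x1 convolutions that map the
                      uncertainty map to per-pixel scale gamma and shift omega.
    Returns F_refined = F_in * (1 + gamma) + omega.
    """
    # A 1x1 convolution over a 1-channel input is a per-pixel linear map:
    # output channel c at pixel (h, w) is w[c, 0] * uncertainty[0, h, w].
    gamma = np.einsum('ci,ihw->chw', w_gamma, uncertainty)
    omega = np.einsum('ci,ihw->chw', w_omega, uncertainty)
    return features * (1.0 + gamma) + omega
```

Note that where the variance map is zero the modulation reduces to the identity (gamma = omega = 0), so only ambiguous pixels receive rectification.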
If this is right
- Segmentation accuracy rises on complex crack boundaries while inference speed stays real-time.
- Uncertainty shifts from a passive post-hoc metric to an integrated calibration tool inside the network.
- The boundary-aware detection head adds explicit constraints that tighten edge predictions.
- The closed-loop design directly counters the tendency of high-variance pixels to be ignored during training.
Where Pith is reading between the lines
- The same uncertainty-to-affine modulation pattern could be tested on other tasks that rely on fine local gradients rather than global context.
- Combining the modulator with video inputs might enable continuous crack tracking across frames without extra post-processing.
- The approach may reduce reliance on heavy data augmentation by internally emphasizing hard examples through variance signals.
Load-bearing premise
High predicted variance can be turned into a reliable constructive signal for feature rectification through affine modulation without introducing artifacts or instability.
What would settle it
An ablation that disables the uncertainty-guided affine modulation and measures whether boundary precision drops specifically on high-variance pixels while overall speed remains unchanged.
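The proposed ablation can be scored with a mask-restricted precision. A sketch under the assumption that a per-pixel variance map and binary predictions/labels are available; the function name and quantile threshold are illustrative:

```python
import numpy as np

def boundary_precision_on_high_variance(pred, label, variance, q=0.9):
    """Precision computed only on pixels whose predicted variance lies in
    the top (1 - q) quantile, i.e. the ambiguous regions the ablation
    should stress. pred and label are binary (H, W); variance is (H, W)."""
    mask = variance >= np.quantile(variance, q)   # high-variance pixels only
    tp = np.sum((pred == 1) & (label == 1) & mask)
    fp = np.sum((pred == 1) & (label == 0) & mask)
    return tp / max(tp + fp, 1)
```

Running this once with the affine modulation enabled and once with it disabled would localize the effect: the hypothesis predicts the accuracy gap concentrates in this masked precision while full-image inference speed stays flat.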
Original abstract
Real-time crack segmentation is vital for structural health monitoring but is plagued by aleatoric uncertainties arising from varying lighting, blur, and texture ambiguity. Current uncertainty-aware approaches typically treat uncertainty estimation as a passive endpoint for post-hoc analysis, failing to close the loop by feeding this information back to refine feature representations. We contend that independent pixel-wise heteroscedastic modeling is uniquely suited for crack segmentation, as cracks are defined by fine-grained local gradients rather than the global semantic coherence relied upon in general object segmentation. However, this approach suffers from a structural optimization pathology: high predicted variance attenuates loss gradients, effectively causing the model to ignore difficult samples and under-fit complex boundaries. To address these challenges, we propose UnGAP, a novel framework that establishes a closed-loop mechanism between uncertainty estimation and feature learning. Central to our approach is the Uncertainty-Prompted Feature Modulator (UPFM), which treats aleatoric uncertainty as an active visual prompt rather than a mere output. UPFM dynamically calibrates feature distributions through pixel-wise affine transformations. Crucially, this mechanism mitigates the heteroscedastic pathology by transforming high variance, which would otherwise indicate gradient suppression, into a constructive signal for stronger feature rectification in ambiguous regions. Additionally, a boundary-aware detection head is introduced to further constrain prediction precision. Extensive experiments demonstrate that UnGAP balances superior segmentation accuracy with real-time inference speed, effectively validating the benefit of transforming uncertainty from a passive metric into an active calibration tool.
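The "optimization pathology" the abstract invokes falls straight out of the Gaussian NLL: with s = log σ², the loss ½ e⁻ˢ(y − ŷ)² + ½ s gives a residual gradient scaled by e⁻ˢ, so large predicted variance mutes learning exactly where the data are hardest. A minimal numeric check (NumPy, illustrative values):

```python
import numpy as np

def nll_grad_wrt_pred(y, y_hat, s):
    """d/d(y_hat) of the heteroscedastic NLL 0.5*exp(-s)*(y - y_hat)**2 + 0.5*s,
    with s = log(variance). The gradient magnitude scales with exp(-s)."""
    return -np.exp(-s) * (y - y_hat)

g_low = nll_grad_wrt_pred(1.0, 0.0, s=0.0)   # variance = 1
g_high = nll_grad_wrt_pred(1.0, 0.0, s=4.0)  # variance = e^4
# The same residual yields a gradient roughly 55x smaller under high
# variance, which is the under-fitting on ambiguous boundaries UnGAP targets.
```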
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes UnGAP, a framework for real-time crack segmentation that introduces the Uncertainty-Prompted Feature Modulator (UPFM) to treat aleatoric uncertainty as an active visual prompt. UPFM applies pixel-wise affine transformations to dynamically calibrate feature distributions, aiming to mitigate the structural optimization pathology where high predicted variance attenuates loss gradients and causes under-fitting on complex boundaries. A boundary-aware detection head is added to constrain prediction precision. The abstract asserts that this closed-loop mechanism between uncertainty estimation and feature learning yields superior segmentation accuracy while maintaining real-time inference speed, with independent pixel-wise heteroscedastic modeling claimed to be particularly suited to crack segmentation due to its reliance on fine-grained local gradients.
Significance. If the central mechanism is shown to work as described, the work could advance uncertainty-aware segmentation by converting uncertainty estimation from a passive post-hoc tool into an active modulator of feature learning. This is especially relevant for tasks defined by local gradient structures rather than global semantics, and the emphasis on countering heteroscedastic gradient attenuation offers a concrete direction for improving boundary precision in real-time settings. No machine-checked proofs, reproducible code releases, or parameter-free derivations are described.
major comments (3)
- [Abstract] Abstract (paragraph on UPFM): the claim that UPFM 'mitigates the heteroscedastic pathology by transforming high variance... into a constructive signal for stronger feature rectification' is asserted without any explicit mapping from the variance map to the affine scale and shift parameters, without a derivation of the composite gradient flow through the modulator, and without analysis showing that the modulation increases rather than attenuates or destabilizes gradients on ambiguous boundaries.
- [Abstract] Abstract (final sentence): the assertion that 'extensive experiments demonstrate that UnGAP balances superior segmentation accuracy with real-time inference speed' is unsupported by any reported metrics, ablation results, dataset descriptions, baseline comparisons, or quantitative tables; the central performance claim therefore rests on unverified assertions.
- [Abstract] Abstract (paragraph on independent pixel-wise heteroscedastic modeling): the statement that this modeling 'is uniquely suited for crack segmentation, as cracks are defined by fine-grained local gradients rather than the global semantic coherence relied upon in general object segmentation' is presented without comparative experiments or justification against alternative uncertainty formulations.
minor comments (2)
- The abstract introduces the boundary-aware detection head but provides no architectural details on its design, loss formulation, or integration with UPFM.
- Notation for the affine transformation (scale and shift) and for the uncertainty map should be defined explicitly, with equations, as early as the abstract or introduction to allow immediate assessment of the proposed mechanism.
Simulated Author's Rebuttal
We thank the referee for the constructive comments on our manuscript. The feedback correctly identifies that the abstract makes high-level claims that would benefit from more explicit support drawn from the full paper. We address each major comment below and will incorporate revisions to strengthen the presentation.
Point-by-point responses
-
Referee: [Abstract] Abstract (paragraph on UPFM): the claim that UPFM 'mitigates the heteroscedastic pathology by transforming high variance... into a constructive signal for stronger feature rectification' is asserted without any explicit mapping from the variance map to the affine scale and shift parameters, without a derivation of the composite gradient flow through the modulator, and without analysis showing that the modulation increases rather than attenuates or destabilizes gradients on ambiguous boundaries.
Authors: The referee is correct that the abstract presents this claim at a summary level. Section 3.2 of the manuscript defines the UPFM, where the predicted variance map is fed into a lightweight module that produces the per-pixel scale and shift parameters for the affine transformation. We agree that an explicit derivation of the gradient flow through this modulator and an analysis of its effect on boundary gradients would strengthen the work. We will add this derivation (showing the composite gradient expression) and supporting analysis or visualization in the revised method section. revision: yes
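The derivation the authors promise can be summarized in two lines. This is our reconstruction from the quoted loss and modulation formulas, not text from the paper, and it treats γ and ω as functions of the detached variance (so they pass no gradient themselves). Routing features through the modulator multiplies the attenuated gradient by (1 + γ), which can offset the e⁻ˢ factor wherever γ grows with the variance:

```latex
\frac{\partial \mathcal{L}}{\partial \hat{y}} = -\,e^{-s}\,(y-\hat{y}),
\qquad
F' = F \odot (1+\gamma) + \omega
\;\Longrightarrow\;
\frac{\partial \mathcal{L}}{\partial F}
  = -\,e^{-s}\,(y-\hat{y})\,\frac{\partial \hat{y}}{\partial F'}\,(1+\gamma).
```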
-
Referee: [Abstract] Abstract (final sentence): the assertion that 'extensive experiments demonstrate that UnGAP balances superior segmentation accuracy with real-time inference speed' is unsupported by any reported metrics, ablation results, dataset descriptions, baseline comparisons, or quantitative tables; the central performance claim therefore rests on unverified assertions.
Authors: We acknowledge that the abstract summarizes results without numbers. The full manuscript contains quantitative tables (Section 4) reporting mIoU, F1-score, and FPS on multiple crack datasets, ablations isolating the UPFM and boundary head, and comparisons against real-time baselines. To address the comment, we will revise the abstract to include the key supporting metrics (e.g., achieved mIoU and inference speed) while keeping it concise. revision: yes
-
Referee: [Abstract] Abstract (paragraph on independent pixel-wise heteroscedastic modeling): the statement that this modeling 'is uniquely suited for crack segmentation, as cracks are defined by fine-grained local gradients rather than the global semantic coherence relied upon in general object segmentation' is presented without comparative experiments or justification against alternative uncertainty formulations.
Authors: The justification appears in the introduction, which contrasts the local-gradient nature of cracks with global semantic tasks. The experiments section demonstrates the benefit of the pixel-wise heteroscedastic formulation through ablations. However, direct comparisons to alternative uncertainty models (e.g., homoscedastic or global) are limited. We will add a short discussion and, if space permits, a targeted ablation in the revised experiments to provide stronger comparative justification. revision: partial
Circularity Check
No circularity: architectural proposal with independent components
full rationale
The paper introduces UPFM as a novel pixel-wise affine modulator that converts uncertainty maps into scale/shift parameters, along with a boundary head. These are presented as design choices to address heteroscedastic gradient attenuation, without any equations or derivations that reduce the claimed performance gains to fitted inputs, self-cited uniqueness theorems, or renamed prior results by construction. The contention that pixel-wise heteroscedastic modeling is uniquely suited to cracks is stated directly rather than imported via self-citation. No load-bearing steps collapse the closed-loop claim to tautology; the mechanism is defined externally to the final metric.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean (J(x) = ½(x+x⁻¹)−1, washburn_uniqueness_aczel) · tag: unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: L_β-NLL = sg(σ^{2β}) [ ½ e^{−s} ‖y−ŷ‖² + ½ s ]
-
Foundation/BranchSelection.lean (RCLCombiner, IsCouplingCombiner, RCLCombiner_isCoupling_iff) · tag: unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: F_refined = F_in ⊙ (1+γ) + ω, with γ, ω predicted by 1×1 convolutions from the uncertainty map h
-
Foundation/AlphaCoordinateFixation.lean (parameter-free α=1 pin, alpha_pin_under_high_calibration) · tag: unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: "we set ... loss weight w1=0.87, w2=0.13, w3=0.001 ... β=0.5"
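The loss quoted above is a β-NLL: the per-pixel NLL reweighted by a stop-gradient factor σ^{2β}. A minimal NumPy sketch, where the stop-gradient is implicit because the weight enters only as a constant multiplier (the function name is ours):

```python
import numpy as np

def beta_nll(y, y_hat, s, beta=0.5):
    """Beta-NLL as quoted in the ledger: sg(sigma^{2*beta}) times the
    heteroscedastic NLL, with s = log(variance) so sigma^2 = exp(s)."""
    weight = np.exp(s) ** beta                       # sg(sigma^{2*beta})
    nll = 0.5 * np.exp(-s) * (y - y_hat) ** 2 + 0.5 * s
    return weight * nll
```

At β = 0.5, the value quoted above, the weight σ partially cancels the e⁻ˢ attenuation of the residual term; this is the standard remedy against which the UPFM's feature-space rectification is positioned.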
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.