The disagreement problem in explainable machine learning: A practitioner's perspective
4 Pith papers cite this work. Polarity classification is still indexing.
Citing years: 2026 (4 verdicts, all unverdicted)

4 representative citing papers
- Quantifying Explanation Consistency: The C-Score Metric for CAM-Based Explainability in Medical Image Classification
The C-Score quantifies intra-class explanation consistency for CAM methods via confidence-weighted pairwise soft IoU and detects AUC-consistency dissociation as an early warning for model instability on chest X-ray classification.
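As a rough illustration of the metric this summary describes, a confidence-weighted average of pairwise soft IoU over same-class CAM heatmaps can be sketched as below. This is a minimal sketch under stated assumptions, not the paper's implementation: the pair weight (product of the two prediction confidences) and the soft-IoU form (elementwise min over elementwise max) are assumptions, and `soft_iou`, `c_score`, `cams`, and `confidences` are hypothetical names.

```python
import numpy as np

def soft_iou(a, b):
    # Soft IoU between two CAM heatmaps normalized to [0, 1]:
    # sum of elementwise minima over sum of elementwise maxima.
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def c_score(cams, confidences):
    # Confidence-weighted mean of pairwise soft IoU across all
    # CAMs of one class; pair weight = product of the two
    # prediction confidences (an assumption for this sketch).
    num, den = 0.0, 0.0
    for i in range(len(cams)):
        for j in range(i + 1, len(cams)):
            w = confidences[i] * confidences[j]
            num += w * soft_iou(cams[i], cams[j])
            den += w
    return num / den
```

Identical heatmaps give a score of 1.0, and increasingly disjoint activation regions drive the score toward 0, which is what makes a falling C-Score alongside a stable AUC readable as a dissociation signal.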
- Interpretable and Explainable Surrogate Modeling for Simulations: A State-of-the-Art Survey and Perspectives on Explainable AI for Decision-Making
This survey synthesizes XAI methods with surrogate modeling workflows for simulations and outlines a research agenda to embed explainability into simulation-driven design and decision-making.
- ToxiTrace: Gradient-Aligned Training for Explainable Chinese Toxicity Detection
ToxiTrace combines CuSA for LLM-refined toxic spans, GCLoss for gradient-focused saliency, and ARCL for contrastive toxic/non-toxic boundaries to improve Chinese toxicity classification and explainable span extraction.
- Persistent and Conversational Multi-Method Explainability for Trustworthy Financial AI
An architecture stores XAI explanations persistently in searchable storage and uses RAG to synthesize multiple methods conversationally, cutting hallucination rates by 36% in a FinBERT financial sentiment demo.