Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback
1 Pith paper cites this work. Polarity classification is still indexing.
1 Pith paper citing it
Fields: cs.LG (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
Representative citing papers: 1
citing papers explorer
- ORCE: Order-Aware Alignment of Verbalized Confidence in Large Language Models
ORCE decouples answer generation from confidence estimation in LLMs and applies rank-based reinforcement learning on sampled completions to better align verbalized confidence with actual correctness likelihood.
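The TL;DR above describes a rank-based signal computed over sampled completions. The sketch below is purely illustrative, assuming a hypothetical pairwise ranking reward in which completions whose verbalized confidence ordering agrees with their correctness labels score positively; it is not the actual ORCE objective or training loop.

```python
# Hypothetical records: each sampled completion carries a verbalized confidence
# (parsed from the model's own confidence statement) and a binary correctness label.
completions = [
    {"answer": "A", "verbalized_confidence": 0.9, "correct": True},
    {"answer": "B", "verbalized_confidence": 0.8, "correct": False},
    {"answer": "C", "verbalized_confidence": 0.4, "correct": True},
    {"answer": "D", "verbalized_confidence": 0.2, "correct": False},
]

def rank_alignment_reward(samples):
    """Pairwise rank-based reward: +1 for each pair where the correct completion
    is verbalized with higher confidence than the incorrect one, -1 otherwise,
    averaged over all such pairs. (Illustrative only; not the exact ORCE objective.)"""
    reward, pairs = 0, 0
    for i, a in enumerate(samples):
        for b in samples[i + 1:]:
            if a["correct"] == b["correct"]:
                continue  # only pairs with differing correctness carry a ranking signal
            pairs += 1
            hi, lo = (a, b) if a["correct"] else (b, a)
            reward += 1 if hi["verbalized_confidence"] > lo["verbalized_confidence"] else -1
    return reward / pairs if pairs else 0.0

# 0.5 here: one correct completion (C) is verbalized less confidently than an incorrect one (B).
print(rank_alignment_reward(completions))
```

In a full pipeline, a reward of this kind would be fed to a reinforcement-learning update while answer generation and confidence elicitation are handled as separate steps, consistent with the decoupling described in the TL;DR.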