Teaching models to express their uncertainty in words.
TMLR
1 Pith paper cites this work (polarity classification is still indexing).
Fields: cs.LG (1). Years: 2026 (1). Verdicts: UNVERDICTED (1).
ORCE: Order-Aware Alignment of Verbalized Confidence in Large Language Models
ORCE decouples answer generation from confidence estimation in LLMs and applies rank-based reinforcement learning on sampled completions to better align verbalized confidence with actual correctness likelihood.
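The rank-based idea in the summary can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not ORCE's actual objective: we score a batch of sampled completions by how often the ordering of their verbalized confidences agrees with the ordering of their actual correctness, a pairwise rank-agreement reward that an RL loop could then maximize.

```python
def rank_alignment_reward(confidences, correct):
    """Pairwise rank-agreement reward (hypothetical, not ORCE's
    published objective): the fraction of completion pairs with
    differing correctness where the more-confident completion is
    the correct one.

    confidences: verbalized confidences (floats) for N sampled completions
    correct: 1/0 correctness label for each completion's answer
    """
    n = len(confidences)
    agree, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if correct[i] == correct[j]:
                continue  # equal correctness carries no ordering signal
            total += 1
            # the pair agrees when the correct completion verbalized
            # strictly higher confidence than the incorrect one
            hi = i if confidences[i] > confidences[j] else j
            if correct[hi] == 1:
                agree += 1
    return agree / total if total else 0.0

# Perfectly ordered batch: correct answers state higher confidence.
print(rank_alignment_reward([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
```

A reward of this shape depends only on the relative order of confidences within the batch, which is why sampling multiple completions per question is needed before any rank-based update can be computed.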