arXiv preprint arXiv:2310.19208, 2023.
Assessing and Mitigating Miscalibration in LLM-Based Social Science Measurement
LLM confidence for social science text measurements is poorly calibrated across models, and a soft-label distillation pipeline reduces expected calibration error by 43% and Brier score by 34%.
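As a point of reference for the two metrics quoted in the summary, the sketch below shows how expected calibration error (ECE) and Brier score are typically computed for binary classifier confidences. This is an illustrative implementation of the standard definitions, not code from the paper; the function names and the 10-bin choice are assumptions.

```python
import numpy as np

def brier_score(probs, labels):
    """Brier score: mean squared difference between predicted
    probability and the binary outcome (lower is better)."""
    probs, labels = np.asarray(probs, dtype=float), np.asarray(labels, dtype=float)
    return float(np.mean((probs - labels) ** 2))

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted
    average gap between each bin's mean confidence and its
    empirical accuracy (lower is better)."""
    probs, labels = np.asarray(probs, dtype=float), np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # include the left edge only for the first bin
        mask = (probs >= lo) & (probs <= hi) if lo == 0.0 else (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return float(ece)
```

A 43% ECE reduction in this framing means the weighted confidence-accuracy gap, as computed above, shrinks by that fraction after the distillation pipeline is applied.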