Recognition: unknown
When Agents Say One Thing and Do Another: Validating Elicited Beliefs from LLMs
Original abstract
Large language models (LLMs) are increasingly deployed in high-stakes settings where good decisions require forming beliefs over the probability of unknown outcomes. However, it is unclear whether LLMs act as if they hold coherent beliefs when making decisions or, if so, how we could validate models' reports of such beliefs. We propose a decision-theoretic framework that elicits both probability judgments and decisions from an agent and tests their mutual consistency. Formally, our methods characterize whether it is possible for the actions to be produced by a "near-rational" decision maker who holds the elicited probability as their true belief. We show that, perhaps surprisingly, this formalization implies empirically testable conditions even without any assumption about the agent's utility function. Applying our framework to stylized clinical diagnosis tasks, we find that models' reported beliefs are demonstrably imperfect summaries of the information revealed in their decisions, but that the discrepancies are small for the strongest models.
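The abstract does not spell out the testable conditions, so the sketch below is only an illustration of the general recipe it describes: elicit a probability judgment and a decision for each case, then check whether the decisions could have come from an agent treating the reported probabilities as beliefs. The specific monotonicity (threshold) check, along with the `elicit`, `monotone_consistency`, and `toy_agent` helpers, are assumptions made for this example, not the paper's actual characterization.

```python
import random

def elicit(agent, case):
    # Ask the agent for a probability judgment and a binary decision
    # on the same case. `agent` stands in for an actual LLM call.
    prob, decision = agent(case)
    return prob, decision

def monotone_consistency(records):
    # Illustrative utility-free check: if the agent acts (decision == 1)
    # at some reported probability, it should also act at every higher
    # reported probability. Returns all violating pairs.
    violations = []
    for p_lo, a_lo in records:
        for p_hi, a_hi in records:
            if p_lo < p_hi and a_lo == 1 and a_hi == 0:
                violations.append(((p_lo, a_lo), (p_hi, a_hi)))
    return violations

def toy_agent(case):
    # Synthetic agent whose decisions follow its stated belief via a
    # fixed threshold, so it should produce no violations.
    p = random.random()
    return p, int(p > 0.5)

records = [elicit(toy_agent, case) for case in range(20)]
print("violations:", monotone_consistency(records))
```

Replacing `toy_agent` with a real model call and rerunning the check would surface cases where stated beliefs and decisions cannot be reconciled, which is the kind of discrepancy the paper reports as small for the strongest models.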
This paper has not been read by Pith yet.
Forward citations
Cited by 1 Pith paper
- Can Revealed Preferences Clarify LLM Alignment and Steering?
LLMs show partial internal coherence in medical decisions but frequently fail to accurately report their preferences or adopt user-directed ones via prompting.