pith. machine review for the scientific record.

arxiv: 2601.07422 · v2 · submitted 2026-01-12 · 💻 cs.CL · cs.AI

Recognition: unknown

Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations

Authors on Pith: no claims yet
classification: 💻 cs.CL · cs.AI
keywords: truthfulness · mechanisms · pathways · encode · hallucinations · information · internal · llms
original abstract

Despite their impressive capabilities, large language models (LLMs) frequently generate hallucinations. Previous work shows that their internal states encode rich signals of truthfulness, yet the origins and mechanisms of these signals remain unclear. In this paper, we demonstrate that truthfulness cues arise from two distinct information pathways: (1) a Question-Anchored pathway that depends on question-answer information flow, and (2) an Answer-Anchored pathway that derives self-contained evidence from the generated answer itself. We first validate and disentangle these pathways through attention knockout and token patching, then uncover several notable properties of the two mechanisms. Further experiments reveal that (1) the two mechanisms are closely tied to LLM knowledge boundaries, and (2) internal representations encode the distinction between them. Finally, building on these findings, we propose two applications that improve hallucination detection performance. Overall, our work provides new insight into how LLMs internally encode truthfulness, offering directions for more reliable and self-aware generative systems.
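To make the "attention knockout" intervention concrete, here is a minimal, self-contained sketch of the idea on a toy single-head attention layer; this is not the paper's code. The token counts, the `attention` helper, and the shift metric are illustrative assumptions: blocking answer→question attention edges severs the Question-Anchored information flow while leaving the answer's self-contained (Answer-Anchored) edges intact.

```python
# Illustrative sketch only (assumed setup, not the authors' implementation).
# Positions 0..n_question-1 hold the question, the rest hold the answer.
import torch
import torch.nn.functional as F

def attention(q, k, v, block_mask=None):
    """Scaled dot-product attention; block_mask[i, j] = True disables edge j -> i."""
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    if block_mask is not None:
        scores = scores.masked_fill(block_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

n_question, n_answer = 4, 3          # toy token counts (assumption)
n, d = n_question + n_answer, 8
torch.manual_seed(0)
x = torch.randn(n, d)                # stand-in hidden states

# Knockout: answer positions may no longer attend to question positions.
knockout = torch.zeros(n, n, dtype=torch.bool)
knockout[n_question:, :n_question] = True

baseline = attention(x, x, x)
knocked = attention(x, x, x, block_mask=knockout)

# If answer representations shift a lot under knockout, the truthfulness signal
# relied on question->answer flow (Question-Anchored); if they barely move, the
# evidence was self-contained in the answer (Answer-Anchored).
shift = (baseline[n_question:] - knocked[n_question:]).norm(dim=-1)
print("per-answer-token representation shift:", shift)
```

In a real model the same mask would be applied inside chosen attention heads of a pretrained LLM (e.g., via forward hooks), and token patching would analogously swap hidden states at selected positions; both are interventions on where information is allowed to flow.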

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Only Say What You Know: Calibration-Aware Generation for Long-Form Factuality

    cs.CL · 2026-05 · unverdicted · novelty 5.0

    Exploration-Commitment Decoupling instantiated as Calibration-Aware Generation improves long-form factuality by up to 13% and reduces decoding time by up to 37% on five benchmarks.