pith. machine review for the scientific record.

arxiv: 2601.02993 · v4 · submitted 2026-01-06 · 💻 cs.CL

Recognition: unknown

Stable-RAG: Mitigating Retrieval-Permutation-Induced Hallucinations in Retrieval-Augmented Generation

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords retrieval, stable-rag, across, document, hallucinations, permutations, reasoning, sensitivity
original abstract

Retrieval-Augmented Generation (RAG) has become a key paradigm for reducing factual hallucinations in Large Language Models (LLMs), yet little is known about how the order of retrieved documents affects model behavior. We empirically show that under a Top-5 retrieval setting with the gold document included, LLM answers vary substantially across permutations of the retrieved set, even when the gold document is fixed in the first position. This reveals a previously underexplored sensitivity to retrieval permutations. Although existing robust RAG methods focus primarily on enhancing LLM robustness to low-quality retrieval and mitigating positional bias to distribute attention fairly over long contexts, neither approach directly addresses permutation sensitivity. In this paper, we propose Stable-RAG, which exploits permutation sensitivity estimation to mitigate permutation-induced hallucinations. Stable-RAG runs the generator under multiple retrieval orders, clusters hidden states, and decodes from a cluster-center representation that captures the dominant reasoning pattern. It then uses these reasoning results to align hallucinated outputs toward the correct answer, encouraging the model to produce consistent and accurate predictions across document permutations. Experiments on three QA datasets show that Stable-RAG improves answer accuracy, reasoning consistency, and generalization across datasets, retrievers, and input lengths compared with strong baselines.
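The pipeline the abstract describes (run the generator under several document orders, cluster the resulting hidden states, decode from a cluster-center representation) can be sketched roughly as below. This is a toy illustration, not the authors' implementation: the `encode` function, the simple 2-way k-means, and all names are assumptions, and in the real method the states would be the LLM's hidden states under each retrieval permutation.

```python
import itertools
import numpy as np

def permutation_hidden_states(docs, encode):
    """One hidden-state vector per permutation of the retrieved documents."""
    return np.stack([encode(p) for p in itertools.permutations(docs)])

def dominant_cluster_center(states, n_iters=20):
    """Tiny 2-way k-means over the states; return the member of the largest
    cluster that lies closest to that cluster's centroid."""
    # deterministic init: the first state and the state farthest from it
    far = np.linalg.norm(states - states[0], axis=1).argmax()
    centers = np.stack([states[0], states[far]])
    for _ in range(n_iters):
        dists = np.linalg.norm(states[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = states[labels == k].mean(axis=0)
    dominant = np.bincount(labels, minlength=2).argmax()
    members = states[labels == dominant]
    rep_idx = np.linalg.norm(members - centers[dominant], axis=1).argmin()
    return members[rep_idx]

# Toy demo with a fake "encoder" (hypothetical): permutations that keep the
# gold document near the front fall into one reasoning pattern, the rest
# into another, with a small deterministic order-dependent offset.
docs = ["gold", "d1", "d2"]
def encode(perm):
    base = np.ones(4) if "gold" in perm[:2] else -np.ones(4)
    return base + 0.01 * docs.index(perm[1])

states = permutation_hidden_states(docs, encode)
rep = dominant_cluster_center(states)  # representative of the dominant pattern
```

In this toy run, four of the six permutations share the "gold near the front" pattern, so the representative comes from that majority cluster; the minority permutations are the analogue of the permutation-induced hallucinations the paper targets.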

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. GRADE: Probing Knowledge Gaps in LLMs through Gradient Subspace Dynamics

    cs.CL 2026-04 unverdicted novelty 6.0

    GRADE quantifies LLM knowledge gaps via the cross-layer rank ratio of the gradient subspace to the hidden state subspace.

  2. STRIDE-ED: A Strategy-Grounded Stepwise Reasoning Framework for Empathetic Dialogue Systems

    cs.CL 2026-04 unverdicted novelty 5.0

    STRIDE-ED improves empathetic dialogue by modeling it as strategy-conditioned multi-stage reasoning supported by refined training data and multi-objective RL.