ClashEval: Quantifying the Tug-of-War Between an LLM's Internal Prior and External Evidence
1 Pith paper cites this work; polarity classification is still indexing.
Fields: cs.CL
Year: 2026
Verdict: UNVERDICTED
Representative citing paper:
Mitigating Context-Memory Conflicts in LLMs through Dynamic Cognitive Reconciliation Decoding
DCRD uses attention-map analysis to detect context-memory conflicts in LLMs and conditionally applies either greedy or fidelity-based dynamic decoding, achieving SOTA results on QA tasks across four models and six datasets.
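The conditional decoding idea in the summary can be sketched as follows. This is an illustrative toy, not the paper's implementation: `conflict_score` is a hypothetical proxy (attention mass split between context and memory), and the "fidelity-based" branch is approximated by simply biasing logits toward context-supported tokens.

```python
def conflict_score(context_attention, memory_attention):
    """Toy conflict proxy: how evenly attention mass is split between
    the external context and the model's parametric memory.
    Returns 1.0 when the split is 50/50 (maximal conflict), 0.0 when
    all attention goes to one side. Not the paper's actual metric."""
    total = context_attention + memory_attention
    ctx_share = context_attention / total
    return 1.0 - 2.0 * abs(ctx_share - 0.5)


def decode_step(logits, context_bonus, score, threshold=0.5):
    """Pick the next token id. Below the (hypothetical) conflict
    threshold, decode greedily; above it, add a context-fidelity bonus
    to each logit as a stand-in for fidelity-based dynamic decoding."""
    if score < threshold:
        adjusted = logits  # no conflict detected: plain greedy decoding
    else:
        adjusted = [l + b for l, b in zip(logits, context_bonus)]
    return max(range(len(adjusted)), key=lambda i: adjusted[i])


# Example: token 0 has the highest raw logit, but token 1 is the one
# supported by the context. Under high conflict, the bonus flips the choice.
logits = [2.0, 1.0, 0.5]
context_bonus = [0.0, 2.0, 0.0]
print(decode_step(logits, context_bonus, score=0.2))  # low conflict -> 0
print(decode_step(logits, context_bonus, score=0.9))  # high conflict -> 1
```

The two-branch structure mirrors the summary's claim: detection (here, a threshold on a conflict score) gates which decoding strategy is used at each step.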