pith. machine review for the scientific record.

Fact-checking the output of large language models via token-level uncertainty quantification. arXiv preprint arXiv:2403.04696

6 Pith papers cite this work. Polarity classification is still indexing.

6 Pith papers citing it

years: 2026 (6)

verdicts: unverdicted (6)

representative citing papers

Sanity Checks for Long-Form Hallucination Detection

cs.CL · 2026-05-08 · unverdicted · novelty 6.0

Hallucination detectors on LLM reasoning traces often rely on final-answer artifacts rather than reasoning validity; once those artifacts are controlled for, lightweight lexical trajectory features suffice for robust detection.

Confidence-Aware Alignment Makes Reasoning LLMs More Reliable

cs.AI · 2026-05-08 · unverdicted · novelty 6.0

CASPO trains LLMs via iterative direct preference optimization so that token-level confidence tracks step-wise correctness, then applies Confidence-aware Thought pruning at inference to improve both reliability and speed on reasoning benchmarks.
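Both the cited work and papers like CASPO build on token-level uncertainty signals derived from a model's output distribution. The sketch below is illustrative only, not the method of the cited paper or of CASPO: it computes per-token negative log-likelihood and predictive entropy from raw logits, with the array shapes, aggregation (mean over tokens), and the function name token_uncertainty all assumptions made for the example.

```python
# Minimal sketch (assumed, not from the cited paper): per-token uncertainty
# signals computed from a model's output logits.
import numpy as np

def token_uncertainty(logits: np.ndarray, token_ids: np.ndarray) -> dict:
    """logits: (seq_len, vocab_size) scores at each generated position;
    token_ids: (seq_len,) the tokens that were actually generated."""
    # Softmax over the vocabulary at each position (numerically stabilized).
    logits = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)

    # Negative log-likelihood of each chosen token (low probability -> high NLL).
    nll = -np.log(probs[np.arange(len(token_ids)), token_ids] + 1e-12)

    # Predictive entropy over the full vocabulary at each position.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)

    # Sequence-level scores: higher values indicate lower model confidence.
    return {"mean_nll": float(nll.mean()), "mean_entropy": float(entropy.mean())}

# Toy usage with random logits for a 5-token continuation over a 100-token vocabulary.
rng = np.random.default_rng(0)
print(token_uncertainty(rng.normal(size=(5, 100)), rng.integers(0, 100, size=5)))
```

In practice such scores would come from the generating model's own logits and could be thresholded or calibrated against step-wise correctness labels, which is the general direction the citing papers pursue.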
