pith. machine review for the scientific record.

arxiv: 2601.20829 · v2 · submitted 2026-01-28 · 💻 cs.LG · cs.AI · cs.CL

Recognition: unknown

Training Reasoning Models on Saturated Problems via Failure-Prefix Conditioning

Authors on Pith no claims yet
classification 💻 cs.LG cs.AI cs.CL
keywords problems · conditioning · reasoning · failure-prefix · saturated · training · improves · learning
abstract

As Reinforcement Learning with Verifiable Rewards (RLVR) substantially improves the reasoning abilities of large language models (LLMs), a new bottleneck emerges: more training problems become saturated, that is, the LLM answers the questions correctly for nearly every rollout. On such problems, rewards provide little useful learning signal. While collecting harder problems is a natural response, it is costly and increasingly difficult. We propose failure-prefix conditioning, a simple method that unlocks the remaining signal in saturated problems by shifting exploration toward failure-prone reasoning states. By conditioning on prefixes of rare incorrect trajectories, the method improves the model's ability to recover from misleading early reasoning. We observe that failure-prefix conditioning consistently improves performance where standard RLVR stalls, and achieves gains comparable to training on newly collected medium-difficulty problems. We further analyze the model's robustness, finding that our method reduces performance degradation under misleading failure prefixes, albeit with a mild trade-off in adherence to correct early reasoning. Finally, we demonstrate that an iterative approach, which refreshes failure prefixes during training, unlocks additional gains after performance plateaus. Overall, our results show that saturated problems still contain valuable learning signal, and that failure-prefix conditioning provides an effective way to unlock it.
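The core mechanism described in the abstract — extracting early prefixes from rare incorrect rollouts and using them to condition fresh exploration — can be sketched in a few lines. This is a minimal, hypothetical illustration only: the function names, the fraction-based truncation point, and the prompt-concatenation format are assumptions for exposition, not the authors' implementation.

```python
def collect_failure_prefixes(rollouts, prefix_frac=0.5):
    """From a batch of (trajectory, is_correct) rollouts on a saturated
    problem, keep the rare incorrect trajectories and truncate each to an
    early prefix, steering later exploration toward failure-prone states.
    `prefix_frac` (how much of the failed trace to keep) is an assumed knob.
    """
    prefixes = []
    for trajectory, is_correct in rollouts:
        if not is_correct:
            cut = max(1, int(len(trajectory) * prefix_frac))
            prefixes.append(trajectory[:cut])
    return prefixes


def conditioned_prompts(problem, prefixes):
    """Prepend each failure prefix to the problem statement so that new RL
    rollouts start from a misleading early reasoning state and the policy
    is rewarded for recovering from it."""
    return [problem + "\n" + " ".join(p) for p in prefixes]


# Toy example: one of four rollouts on a near-saturated problem fails.
rollouts = [
    (["step_a", "step_b", "step_c", "answer_ok"], True),
    (["step_a", "step_b", "step_c", "answer_ok"], True),
    (["step_x", "step_y", "step_z", "answer_bad"], False),
    (["step_a", "step_b", "step_c", "answer_ok"], True),
]
prefixes = collect_failure_prefixes(rollouts, prefix_frac=0.5)
prompts = conditioned_prompts("Solve problem P.", prefixes)
```

In an actual RLVR loop these conditioned prompts would replace the plain problem for a share of rollouts, and (per the paper's iterative variant) the prefix pool would be refreshed as the model improves.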

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Addressing Performance Saturation for LLM RL via Precise Entropy Curve Control

    cs.LG 2026-04 unverdicted novelty 6.0

    Entrocraft uses rejection sampling to enforce custom entropy curves in LLM RL, sustaining longer training, better generalization, and higher output diversity than prior regularization approaches.

  2. Addressing Performance Saturation for LLM RL via Precise Entropy Curve Control

    cs.LG 2026-04 unverdicted novelty 6.0

    Entrocraft uses rejection sampling to enforce precise entropy schedules in LLM RL by biasing advantages, enabling longer training, better generalization, and higher performance than baselines.