pith · machine review for the scientific record

arxiv: 2508.05170 · v3 · submitted 2025-08-07 · 💻 cs.SE · cs.AI · cs.CL · cs.LG

Recognition: unknown

ReCode: Reinforcing Code Generation with Reasoning-Process Rewards

Lishui Fan, Yu Zhang, Mouxiang Chen, Zhongxin Liu

Authors on Pith: no claims yet
classification 💻 cs.SE · cs.AI · cs.CL · cs.LG
keywords reward · reasoning · code · reasoning-process · generation · model · quality · recode
Original abstract

In practice, rigorous reasoning is often a key driver of correct code, yet Reinforcement Learning (RL) for code generation typically neglects optimizing reasoning quality. Bringing process-level supervision into RL is appealing, but it faces two challenges. First, training reliable reward models to assess reasoning quality is bottlenecked by the scarcity of fine-grained preference data. Second, naively incorporating such neural rewards can invite reward hacking. This work proposes ReCode (Reasoning-Reinforced Code Generation), a novel RL training framework comprising: (1) Contrastive Reasoning-Process Reward Learning (CRPL), which trains a reward model on synthesized optimized and degraded reasoning variants to assess the quality of the reasoning process; and (2) Consistency-Gated GRPO (CG-GRPO), which integrates the reasoning-process reward model into RL by gating neural reasoning-process rewards with strict execution outcomes, using execution correctness as a hard gate to mitigate reward hacking. Additionally, to assess the reward model's discriminative capability over reasoning-process quality, we introduce LiveCodeBench-RewardBench (LCB-RB), a new benchmark comprising preference pairs of superior and inferior reasoning processes tailored for code generation. Experimental results across HumanEval(+), MBPP(+), LiveCodeBench, and BigCodeBench show that a 7B model trained with ReCode outperforms the base version by 16.1% and reaches performance comparable to GPT-4-Turbo. We further demonstrate the generalizability of ReCode by extending it to the math domain.
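The core idea of CG-GRPO's consistency gate can be sketched in a few lines: the neural reasoning-process reward only contributes when the generated code actually passes its tests. The function name and the exact combination rule below are illustrative assumptions, not the paper's implementation.

```python
def consistency_gated_reward(passes_tests: bool, reasoning_score: float) -> float:
    """Gate a neural reasoning-process reward with a strict execution outcome.

    passes_tests:    whether the generated code passes all unit tests
                     (the hard execution gate).
    reasoning_score: score in [0, 1] from the reasoning-process reward model.

    The hard gate mitigates reward hacking: the reward model can only add
    credit on top of code that is verifiably correct.
    """
    if not passes_tests:
        # Failed execution: no credit, however plausible the reasoning looks.
        return 0.0
    # Correct code: base outcome reward plus a reasoning-quality bonus.
    return 1.0 + reasoning_score
```

In this sketch, samples with incorrect code receive zero reward regardless of their reasoning score, so the policy cannot be nudged toward well-worded but wrong solutions.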

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. LASER: A Data-Centric Method for Low-Cost and Efficient SQL Rewriting based on SQL-GRPO

    cs.DB 2026-04 unverdicted novelty 7.0

    LASER generates complex slow-query training data with MCTS and aligns small models via SQL-GRPO to deliver efficient, low-cost SQL rewriting that outperforms rules and large models.

  2. Reinforcement Learning with Negative Tests as Completeness Signal for Formal Specification Synthesis

    cs.SE 2026-04 unverdicted novelty 5.0

    SpecRL uses the fraction of negative tests rejected by candidate specifications as a reward signal in RL training to produce stronger and more verifiable formal specifications than prior methods.