pith. machine review for the scientific record.

arxiv: 2211.14275 · v1 · submitted 2022-11-25 · 💻 cs.LG · cs.AI · cs.CL

Recognition: unknown

Solving math word problems with process- and outcome-based feedback

Antonia Creswell, Francis Song, Geoffrey Irving, Irina Higgins, Jonathan Uesato, Lisa Wang, Nate Kushman, Noah Siegel, Ramana Kumar

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · cs.CL
keywords: reasoning, approaches, outcome-based, supervision, error, final-answer, models, process-based
read the original abstract

Recent work has shown that asking language models to generate reasoning steps improves performance on many reasoning tasks. When moving beyond prompting, this raises the question of how we should supervise such models: outcome-based approaches which supervise the final result, or process-based approaches which supervise the reasoning process itself? Differences between these approaches might naturally be expected not just in final-answer errors but also in reasoning errors, which can be difficult to detect and are problematic in many real-world domains such as education. We run the first comprehensive comparison between process- and outcome-based approaches trained on a natural language task, GSM8K. We find that pure outcome-based supervision produces similar final-answer error rates with less label supervision. However, for correct reasoning steps we find it necessary to use process-based supervision or supervision from learned reward models that emulate process-based feedback. In total, we improve the previous best results from 16.8% $\to$ 12.7% final-answer error and 14.0% $\to$ 3.4% reasoning error among final-answer-correct solutions.
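The distinction the abstract draws can be made concrete with a minimal sketch. The label schemes below are illustrative assumptions, not the paper's exact training setup: outcome-based supervision assigns one label from the final answer alone, while process-based supervision labels each reasoning step, so it can flag a flawed derivation even when the final answer happens to be right.

```python
def outcome_reward(predicted_answer: str, gold_answer: str) -> int:
    """Outcome-based: a single label for the whole solution,
    derived only from whether the final answer matches."""
    return 1 if predicted_answer.strip() == gold_answer.strip() else 0

def process_rewards(steps: list[str], step_labels: list[bool]) -> list[int]:
    """Process-based: one label per reasoning step (e.g. human or
    reward-model judgments of step correctness)."""
    return [1 if ok else 0 for ok in step_labels]

# Hypothetical solution: the final answer is right, but step 2 is wrong.
steps = ["18 / 2 = 9", "9 * 3 = 28", "28 - 1 = 27"]
labels = [True, False, True]

print(outcome_reward("27", "27"))      # outcome supervision sees no error
print(process_rewards(steps, labels))  # process supervision flags step 2
```

This is why the abstract reports final-answer error and reasoning error separately: the two signals can disagree on the same solution.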

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 56 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. MedPRMBench: A Fine-grained Benchmark for Process Reward Models in Medical Reasoning

    cs.CL 2026-04 unverdicted novelty 8.0

    MedPRMBench is the first fine-grained benchmark for process reward models in medical reasoning, featuring 6500 questions, 13000 chains, 113910 step labels, and a baseline that improves downstream QA accuracy by 3.2-6....

  2. AgentLens: Revealing The Lucky Pass Problem in SWE-Agent Evaluation

    cs.SE 2026-05 conditional novelty 7.0

    10.7% of passing SWE-agent trajectories are Lucky Passes with chaotic behaviors, and a quality score based on process references changes model rankings across eight backends.

  3. Unmasking On-Policy Distillation: Where It Helps, Where It Hurts, and Why

    cs.LG 2026-05 unverdicted novelty 7.0

    Distillation signals align better with ideal updates on incorrect student rollouts than correct ones, with optimal teacher context depending on student capacity and task.

  4. The Last Word Often Wins: A Format Confound in Chain-of-Thought Corruption Studies

    cs.LG 2026-05 accept novelty 7.0

    Corruption studies on CoT chains detect the position of explicit answer statements rather than computational steps, as evidenced by format ablations collapsing suffix sensitivity 19x and models following conflicting a...

  5. Unsupervised Process Reward Models

    cs.LG 2026-05 unverdicted novelty 7.0

    Unsupervised PRMs derived from LLM probabilities achieve up to 15% better error detection than LLM judges and match supervised PRMs in verification and RL tasks.

  6. Distributional Process Reward Models: Calibrated Prediction of Future Rewards via Conditional Optimal Transport

    cs.LG 2026-05 unverdicted novelty 7.0

    Conditional optimal transport calibrates PRMs by learning monotonic conditional quantile functions over success probabilities conditioned on hidden states, yielding improved calibration and downstream Best-of-N perfor...

  7. Post Reasoning: Improving the Performance of Non-Thinking Models at No Cost

    cs.AI 2026-05 conditional novelty 7.0

    Post-Reasoning boosts LLM accuracy by reversing the usual answer-after-reasoning order, delivering mean relative gains of 17.37% across 117 model-benchmark pairs with zero extra cost.

  8. Logic-Regularized Verifier Elicits Reasoning from LLMs

    cs.CL 2026-05 unverdicted novelty 7.0

    LOVER creates an unsupervised logic-regularized verifier that reaches 95% of supervised verifier performance on reasoning tasks across 10 datasets.

  9. Maximizing Rollout Informativeness under a Fixed Budget: A Submodular View of Tree Search for Tool-Use Agentic Reinforcement Learning

    stat.ML 2026-05 unverdicted novelty 7.0

    InfoTree casts intermediate state selection in tree search as monotone submodular maximization under fixed rollout budgets, yielding closed-form UUCB terms and lifting mixed-outcome ratios while outperforming flat GRP...

  10. Correct Is Not Enough: Training Reasoning Planners with Executor-Grounded Rewards

    cs.AI 2026-05 unverdicted novelty 7.0

    TraceLift trains reasoning planners with executor-grounded rewards that multiply a rubric-based reasoning quality score by measured uplift on a frozen executor, outperforming execution-only training on math and code b...

  11. Generalized Distributional Alignment Games for Unbiased Answer-Level Fine-Tuning

    cs.LG 2026-05 unverdicted novelty 7.0

    Generalized Bregman alignment games plus U-statistics and optimal minimax polynomial estimators remove Jensen bias and achieve optimal statistical rates for unbiased answer-level fine-tuning.

  12. Decoding-Time Debiasing via Process Reward Models: From Controlled Fill-in to Open-Ended Generation

    cs.CL 2026-05 unverdicted novelty 7.0

    Decoding-time use of process reward models for bias mitigation raises fairness scores by up to 0.40 on a bilingual benchmark while preserving fluency across four LLMs and extends to open-ended generation with low overhead.

  13. AgentEval: DAG-Structured Step-Level Evaluation for Agentic Workflows with Error Propagation Tracking

    cs.SE 2026-04 conditional novelty 7.0

    AgentEval evaluates agentic workflows via DAGs with step metrics, a 21-category failure taxonomy, and error propagation tracking, yielding 2.17x higher failure recall than end-to-end methods and strong human agreement.

  14. Fine-Tuning Small Reasoning Models for Quantum Field Theory

    cs.LG 2026-04 unverdicted novelty 7.0

    Small 7B reasoning models were fine-tuned on synthetic and curated QFT problems using RL and SFT, yielding performance gains, error analysis, and public release of data and traces.

  15. Navigating the Conceptual Multiverse

    cs.HC 2026-04 unverdicted novelty 7.0

    The conceptual multiverse system with a verification framework for decision structures helps users in philosophy, AI alignment, and poetry build clearer working maps of open-ended problems by making implicit LLM choic...

  16. Does RL Expand the Capability Boundary of LLM Agents? A PASS@(k,T) Analysis

    cs.LG 2026-04 unverdicted novelty 7.0

    RL expands the capability boundary of LLM agents on compositional tool-use tasks, shown by non-converging pass curves at large k with increasing T, while SFT regresses it and the effect is absent on simpler tasks.

  17. AI Achieves a Perfect LSAT Score

    cs.AI 2026-04 unverdicted novelty 7.0

    Language models achieve a perfect LSAT score, with experiments showing that internal thinking phases and a fine-tuned process reward model are key to high performance on logical reasoning questions.

  18. Structural Evaluation Metrics for SVG Generation via Leave-One-Out Analysis

    cs.LG 2026-04 unverdicted novelty 7.0

    Element-level leave-one-out analysis yields per-element quality scores and four structural metrics (purity, coverage, compactness, locality) that quantify SVG modularity and enable artifact detection.

  19. Generate, Filter, Control, Replay: A Comprehensive Survey of Rollout Strategies for LLM Reinforcement Learning

    cs.LG 2026-04 unverdicted novelty 7.0

    This survey introduces the Generate-Filter-Control-Replay (GFCR) taxonomy to structure rollout pipelines for RL-based post-training of reasoning LLMs.

  20. Sampling for Quality: Training-Free Reward-Guided LLM Decoding via Sequential Monte Carlo

    cs.LG 2026-04 unverdicted novelty 7.0

    Sequential Monte Carlo sampling from a reward-augmented sequence distribution improves LLM performance on HumanEval by up to 54.9% and MATH500 by up to 8.8%, outperforming standard sampling and GRPO.

  21. WMF-AM: Probing LLM Working Memory via Depth-Parameterized Cumulative State Tracking

    cs.AI 2026-03 unverdicted novelty 7.0

    WMF-AM is a depth-parameterized benchmark that measures LLMs' cumulative state tracking ability without scratchpads, validated on 28 models across arithmetic and non-arithmetic tasks with ablations confirming the construct.

  22. Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs

    cs.CL 2024-12 unverdicted novelty 7.0

    o1-like models overthink easy tasks; self-training reduces compute use without accuracy loss on GSM8K, MATH500, GPQA, and AIME.

  23. GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

    cs.LG 2024-10 accept novelty 7.0

    LLMs display high variance and major accuracy drops on GSM-Symbolic variants of grade-school math problems, indicating they replicate training patterns rather than execute logical reasoning.

  24. Let's Verify Step by Step

    cs.LG 2023-05 accept novelty 7.0

    Process supervision significantly outperforms outcome supervision for training models on the MATH dataset, achieving 78% accuracy on a representative test subset with active learning and a released 800k step-label dataset.

  25. Learning from Failures: Correction-Oriented Policy Optimization with Verifiable Rewards

    cs.CL 2026-05 unverdicted novelty 6.0

    CIPO jointly optimizes standard RLVR rewards with correction samples derived from the model's own failed attempts, yielding better reasoning and self-correction on math and code benchmarks.

  26. When Reasoning Traces Become Performative: Step-Level Evidence that Chain-of-Thought Is an Imperfect Oversight Channel

    cs.AI 2026-05 unverdicted novelty 6.0

    CoT traces align with internal answer commitment in only 61.9% of steps on average, dominated by confabulated continuations after commitment has stabilized.

  27. Verifiable Process Rewards for Agentic Reasoning

    cs.AI 2026-05 unverdicted novelty 6.0

    Verifiable Process Rewards (VPR) converts symbolic oracles into dense turn-level supervision for reinforcement learning in agentic reasoning, outperforming outcome-only rewards and transferring to general benchmarks.

  28. Rubric-Grounded RL: Structured Judge Rewards for Generalizable Reasoning

    cs.AI 2026-05 unverdicted novelty 6.0

    Rubric-grounded RL with LLM judges on document-derived criteria raises Llama-3.1-8B normalized reward to 71.7% on held-out rubrics and improves performance on GSM8K, MATH, and GPQA benchmarks.

  29. Distributional Process Reward Models: Calibrated Prediction of Future Rewards via Conditional Optimal Transport

    cs.LG 2026-05 unverdicted novelty 6.0

    Conditional optimal transport is used to turn raw PRM outputs into monotonic quantile functions that improve calibration and downstream Best-of-N performance on MATH-500 and AIME.

  30. RLearner-LLM: Balancing Logical Grounding and Fluency in Large Language Models via Hybrid Direct Preference Optimization

    cs.CL 2026-05 unverdicted novelty 6.0

    RLearner-LLM's Hybrid-DPO fuses DeBERTa NLI and LLM verifier scores to deliver up to 6x higher NLI entailment than standard SFT while preserving answer coverage across academic domains.

  31. RLearner-LLM: Balancing Logical Grounding and Fluency in Large Language Models via Hybrid Direct Preference Optimization

    cs.CL 2026-05 unverdicted novelty 6.0

    RLearner-LLM achieves up to 6x gains in NLI entailment over standard fine-tuning by using an automated hybrid DPO pipeline that balances logic and fluency across multiple model sizes and domains.

  32. Correct Is Not Enough: Training Reasoning Planners with Executor-Grounded Rewards

    cs.AI 2026-05 unverdicted novelty 6.0

    TraceLift trains reasoning planners using rewards that credit traces for both rubric quality and actual performance gains on a frozen executor, outperforming final-answer-only training on math and code tasks.

  33. DGPO: Distribution Guided Policy Optimization for Fine Grained Credit Assignment

    cs.LG 2026-05 unverdicted novelty 6.0

    DGPO is a critic-free RL framework that uses bounded Hellinger distance and entropy-gated advantage redistribution to enable fine-grained token-level credit assignment in long CoT generations for LLM alignment, report...

  34. DGPO: Distribution Guided Policy Optimization for Fine Grained Credit Assignment

    cs.LG 2026-05 unverdicted novelty 6.0

    DGPO reinterprets distribution deviation as a guiding signal in a critic-free policy optimization framework to enable fine-grained credit assignment for LLM chain-of-thought reasoning.

  35. Controllable and Verifiable Process Data Synthesis for Process Reward Models

    cs.AI 2026-05 unverdicted novelty 6.0

    A controllable synthesis method creates prefix-invalid yet trajectory-consistent process supervision data for training and evaluating process reward models by injecting verifiable errors into symbolic reasoning chains.

  36. Distilling Long-CoT Reasoning through Collaborative Step-wise Multi-Teacher Decoding

    cs.AI 2026-05 unverdicted novelty 6.0

    CoRD uses collaborative multi-teacher step-wise decoding with perplexity-guided beam search to generate higher-quality Long-CoT data that lets smaller models reach near-teacher performance with less supervision.

  37. Adaptive Test-Time Compute Allocation with Evolving In-Context Demonstrations

    cs.AI 2026-04 unverdicted novelty 6.0

    An adaptive test-time framework uses a warm-up phase on the test set to build evolving in-context examples, then concentrates compute on unresolved queries to outperform static baselines on math, coding, and reasoning...

  38. TPS-CalcBench: A Benchmark and Diagnostic Evaluation Framework for LLM Analytical Calculation Competence in Hypersonic Thermal Protection System Engineering

    cs.AI 2026-04 unverdicted novelty 6.0

    TPS-CalcBench is a new benchmark and evaluation framework that tests LLMs on analytical calculations in hypersonic aerodynamics and gas dynamics, using dual-track scoring and interventions to detect physically invalid...

  39. Process Reward Models Meet Planning: Generating Precise and Scalable Datasets for Step-Level Rewards

    cs.CL 2026-04 unverdicted novelty 6.0

    PDDL planning problems are used to generate about one million precise reasoning steps for training Process Reward Models, and adding this data to existing datasets improves LLM performance on both mathematical and non...

  40. Adaptive Test-Time Compute Allocation for Reasoning LLMs via Constrained Policy Optimization

    cs.LG 2026-04 unverdicted novelty 6.0

    A Lagrangian-relaxation plus imitation-learning pipeline adaptively allocates test-time compute to LLMs, outperforming uniform baselines by up to 12.8% relative accuracy on MATH while staying within a fixed average budget.

  41. The Past Is Not Past: Memory-Enhanced Dynamic Reward Shaping

    cs.LG 2026-04 unverdicted novelty 6.0

    MEDS improves LLM RL performance by up to 4.13 pass@1 and 4.37 pass@128 points by dynamically penalizing rollouts matching prevalent historical error clusters identified via memory-stored representations and density c...

  42. PRISM-MCTS: Learning from Reasoning Trajectories with Metacognitive Reflection

    cs.AI 2026-04 unverdicted novelty 6.0

    PRISM-MCTS improves MCTS-based reasoning efficiency by maintaining a shared memory of heuristics and fallacies reinforced by a process reward model, halving required trajectories on GPQA while outperforming prior methods.

  43. Relative Density Ratio Optimization for Stable and Statistically Consistent Model Alignment

    cs.LG 2026-04 unverdicted novelty 6.0

    Relative density ratio optimization stabilizes direct density ratio estimation for language model alignment while preserving statistical consistency without assuming a Bradley-Terry preference model.

  44. Process Reinforcement through Implicit Rewards

    cs.LG 2025-02 conditional novelty 6.0

    PRIME enables online process reward model updates in LLM RL using implicit rewards from rollouts and outcome labels, yielding 15.1% average gains on reasoning benchmarks and surpassing a stronger instruct model with 1...

  45. Improve Mathematical Reasoning in Language Models by Automated Process Supervision

    cs.CL 2024-06 conditional novelty 6.0

    OmegaPRM automates collection of 1.5 million process supervision labels via binary-search MCTS, raising Gemini Pro math accuracy from 51% to 69.4% on MATH500 and Gemma2 27B from 42.3% to 58.2%.

  46. LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

    cs.SE 2024-03 unverdicted novelty 6.0

    LiveCodeBench collects 400 recent contest problems to create a contamination-free benchmark evaluating LLMs on code generation and related capabilities like self-repair and execution.

  47. Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations

    cs.AI 2023-12 conditional novelty 6.0

    Math-Shepherd is an automatically trained process reward model that scores solution steps to verify and reinforce LLMs, lifting Mistral-7B from 77.9% to 89.1% on GSM8K and 28.6% to 43.5% on MATH.

  48. Reinforced Self-Training (ReST) for Language Modeling

    cs.CL 2023-08 unverdicted novelty 6.0

    ReST improves LLM translation quality on benchmarks via offline RL on self-generated data, achieving gains in a compute-efficient way compared to typical RLHF.

  49. RLearner-LLM: Balancing Logical Grounding and Fluency in Large Language Models via Hybrid Direct Preference Optimization

    cs.CL 2026-05 unverdicted novelty 5.0

    Hybrid-DPO combining NLI and verifier scores delivers up to 6x NLI improvement over SFT baselines across multiple LLMs and domains while preserving answer coverage and inference speed.

  50. SCPRM: A Schema-aware Cumulative Process Reward Model for Knowledge Graph Question Answering

    cs.AI 2026-05 unverdicted novelty 5.0

    SCPRM adds prefix conditioning and schema distance to process reward models so that Monte Carlo Tree Search can explore knowledge-graph reasoning paths with both cumulative and future guidance, yielding a 1.18% averag...

  51. Enhancing LLM-based Search Agents via Contribution Weighted Group Relative Policy Optimization

    cs.LG 2026-04 unverdicted novelty 5.0

    CW-GRPO weights GRPO advantages with per-round contribution scores from an LLM judge, improving search agent performance by 5.0% on Qwen3-8B and 6.3% on Qwen3-1.7B over standard GRPO.

  52. Rethinking Token-Level Credit Assignment in RLVR: A Polarity-Entropy Analysis

    cs.LG 2026-04 unverdicted novelty 5.0

    Token credit in RLVR is upper-bounded by entropy, with reasoning gains concentrated in high-entropy tokens, motivating Entropy-Aware Policy Optimization that outperforms baselines.

  53. Your Model Diversity, Not Method, Determines Reasoning Strategy

    cs.AI 2026-04 unverdicted novelty 5.0

    The optimal reasoning strategy for LLMs depends on the model's diversity profile rather than the exploration method itself.

  54. Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models

    cs.CL 2025-03 accept novelty 5.0

    A survey organizing techniques to achieve efficient reasoning in LLMs by shortening chain-of-thought outputs.

  55. From System 1 to System 2: A Survey of Reasoning Large Language Models

    cs.AI 2025-02 accept novelty 3.0

    The survey organizes the shift of LLMs toward deliberate System 2 reasoning, covering model construction techniques, performance on math and coding benchmarks, and future research directions.

  56. Fin-PRM: A Domain-Specialized Process Reward Model for Financial Reasoning in Large Language Models

    cs.CL 2025-08