Recognition: 3 theorem links
· Lean Theorem
Proximal Policy Optimization Algorithms
Pith reviewed 2026-05-08 23:02 UTC · model claude-opus-4-7
The pith
A simple clip on the policy probability ratio reproduces the practical benefits of trust region policy optimization with first-order SGD.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper introduces a policy gradient objective in which the probability ratio between the new and old policy, multiplied by an advantage estimate, is clipped to a small interval around 1 and then combined with the unclipped term by taking a minimum. This single change converts a standard policy gradient loss into one that can safely be optimized for many epochs of minibatch SGD on the same batch of trajectories, because the clip removes the incentive to push the ratio far from 1 when doing so would help the surrogate. The authors argue this captures the practical benefit of trust region policy optimization — keeping each update inside an implicit trust region — without needing constrained second-order optimization.
What carries the argument
The clipped surrogate L^CLIP(θ) = Ê_t[min(r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t)], where r_t(θ) is the new-to-old policy probability ratio at sampled action a_t and Â_t is an advantage estimate (here from truncated generalized advantage estimation). The min-of-clipped-and-unclipped construction is the load-bearing object: it equals the unconstrained surrogate to first order around θ_old, but flattens out once the ratio leaves [1−ε, 1+ε] in the direction that would inflate the objective, while preserving the gradient when the ratio moves in the direction that worsens it. This asymmetry makes the loss a per-sample pessimistic bound on the conservative-policy-iteration surrogate, which the authors argue is what makes multiple epochs of optimization on the same data safe.
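A minimal NumPy sketch of the per-sample objective, not taken from the paper's code; the log-probabilities and advantages below are placeholder arrays:

```python
import numpy as np

def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2):
    """Per-sample L^CLIP terms: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    ratio = np.exp(logp_new - logp_old)          # r_t(theta) = pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.minimum(unclipped, clipped)        # the mean of this array is L^CLIP

# placeholder data standing in for one minibatch
rng = np.random.default_rng(0)
logp_old = rng.normal(-1.0, 0.3, size=256)
logp_new = logp_old + rng.normal(0.0, 0.1, size=256)   # current policy after some SGD steps
adv = rng.normal(size=256)

loss = -clipped_surrogate(logp_new, logp_old, adv).mean()  # negated because SGD minimizes
```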
If this is right
- Policy gradient training can take many epochs of minibatch SGD per collected batch without the destructive updates that motivated TRPO, raising sample efficiency at constant wall-clock cost.
- The clipped objective is fully compatible with shared policy/value architectures, dropout, and recurrent networks, removing one of TRPO's main practical limitations.
- On Atari, a much simpler algorithm than ACER reaches comparable final scores and superior early-training scores, suggesting that experience replay and off-policy correction are not required for competitive on-policy performance at this scale.
- An adaptive KL-penalty variant performs nearly as well, indicating the operative ingredient is bounding per-update policy change rather than the specific clip shape — leaving room for other proximal regularizers to play the same role.
- The recipe — collect a batch with N parallel actors, run K epochs of clipped-surrogate SGD, repeat — gives a default RL training loop with very few moving parts, plausibly explaining why it would become a workhorse baseline.
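A hedged skeleton of that loop; the stubs stand in for the environment, policy network, and optimizer, and nothing below is the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
EPS, K_EPOCHS, MINIBATCH, N_ACTORS, HORIZON = 0.2, 10, 64, 8, 128

def collect_rollouts():
    """Stub: pretend N_ACTORS actors each ran HORIZON steps and return flat arrays."""
    n = N_ACTORS * HORIZON
    return {
        "obs": rng.normal(size=(n, 4)),
        "actions": rng.integers(0, 2, size=n),
        "logp_old": rng.normal(-0.7, 0.1, size=n),   # log pi_old(a|s) recorded at collection time
        "adv": rng.normal(size=n),
    }

def policy_logp(params, obs, actions):
    """Stub: log pi_theta(a|s) under the current parameters."""
    return rng.normal(-0.7, 0.1, size=len(actions))

def sgd_step(params, loss):
    """Stub: a real implementation would backpropagate `loss` and apply Adam."""
    return params

params = None
for iteration in range(3):                           # outer loop: collect a batch, then optimize it
    batch = collect_rollouts()
    n = len(batch["actions"])
    for epoch in range(K_EPOCHS):                     # K epochs of minibatch SGD on the same batch
        for idx in np.array_split(rng.permutation(n), n // MINIBATCH):
            logp_new = policy_logp(params, batch["obs"][idx], batch["actions"][idx])
            ratio = np.exp(logp_new - batch["logp_old"][idx])
            adv = batch["adv"][idx]
            surrogate = np.minimum(ratio * adv, np.clip(ratio, 1 - EPS, 1 + EPS) * adv)
            params = sgd_step(params, loss=-surrogate.mean())
```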
Where Pith is reading between the lines
- The 'pessimistic lower bound' framing is per-sample and per-timestep rather than a true expectation-level bound on policy improvement, so the trust-region intuition is heuristic; whether the clip actually constrains average KL across an update appears to be an empirical observation, not a guarantee.
- Because clipping zeros the gradient outside [1−ε, 1+ε], samples whose ratio drifts past the clip stop contributing for the rest of the epoch loop, so ε and the number of epochs K are coupled knobs that together act as an implicit early-stopping rule on each batch (a clipped-fraction diagnostic is sketched after this list).
- The hyperparameter search in Section 6.1 is performed on the same MuJoCo suite later used to report scores, so the headline ranking against TRPO and A2C should be read as in-distribution performance rather than as evidence of out-of-the-box robustness.
- Three random seeds is thin for the ranking claims on 49 Atari games in Table 2; the win counts likely have substantial variance and a more thorough seed sweep could redistribute several games between PPO and ACER.
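A hedged sketch of that diagnostic; it is not part of Algorithm 1, and only monitors how many samples have stopped contributing gradient as epochs accumulate:

```python
import numpy as np

def clipped_fraction(logp_new, logp_old, advantages, eps=0.2):
    """Fraction of samples whose favorable-direction clip is active, i.e. whose
    gradient contribution to L^CLIP is exactly zero."""
    ratio = np.exp(logp_new - logp_old)
    pos = (advantages > 0) & (ratio > 1.0 + eps)   # A > 0: clip engages above the band
    neg = (advantages < 0) & (ratio < 1.0 - eps)   # A < 0: clip engages below the band
    return float(np.mean(pos | neg))

# Hypothetical use: log this once per epoch; if it climbs toward 1, the remaining
# epochs are effectively wasted, which is the implicit early-stopping behavior.
rng = np.random.default_rng(0)
logp_old = rng.normal(-1.0, 0.3, size=512)
logp_new = logp_old + rng.normal(0.05, 0.15, size=512)
print(clipped_fraction(logp_new, logp_old, rng.normal(size=512)))
```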
Load-bearing premise
That clipping the policy ratio acts as a reliable substitute for an actual trust region — the paper's lower-bound story is informal and only holds per-sample, so the method's stability across tasks rests on empirical observation from a few benchmarks and seeds rather than on a monotonic-improvement guarantee.
What would settle it
Re-run the MuJoCo and Atari benchmarks with more seeds and with hyperparameters frozen before any tuning on these same environments; if PPO with ε≈0.2 and K=10 epochs no longer outperforms TRPO and A2C on average, or if the clipped objective ceases to dominate the adaptive-KL and no-clip variants in Table 1, the central claim that clipping is the operative ingredient does not survive.
Original abstract
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Proximal Policy Optimization (PPO), a family of first-order policy-gradient methods that perform multiple epochs of minibatch SGD on a surrogate objective per batch of on-policy data. Two surrogates are proposed: (i) a clipped probability-ratio objective L^CLIP (Eq. 7), which takes the minimum of the unclipped importance-weighted advantage and a version where the ratio is clipped to [1−ε, 1+ε]; and (ii) an adaptive KL-penalty variant L^KLPEN (Eq. 8) that adjusts β to track a target KL. The authors argue L^CLIP forms a pessimistic lower bound on the conservative-policy-iteration objective L^CPI, allowing safe multi-epoch optimization without TRPO's conjugate-gradient/line-search machinery. Empirically, on 7 MuJoCo tasks (1M steps, 3 seeds, hyperparameters from a sweep), PPO with ε=0.2 outperforms TRPO, A2C, A2C+TR, vanilla PG, and CEM (Fig. 3, Table 1); on 49 Atari games (40M frames), PPO beats A2C in average-reward-over-training and is competitive with ACER (Table 2, Appendix B). Roboschool 3D humanoid tasks are showcased as scaling demonstrations.
Significance. If the empirical claims hold, the contribution is a methodologically simple, broadly applicable on-policy algorithm that captures most of TRPO's reliability with a few lines of code change to vanilla policy gradients, and is compatible with parameter-sharing and recurrent architectures (where TRPO's Fisher-vector machinery is awkward). The clean ablation in Table 1 across clipping, fixed-KL, and adaptive-KL settings is useful and supports the design choice. The work is well-positioned to become a strong default baseline for continuous-control and discrete-action benchmarks. The Atari sweep over 49 games and the head-to-head with ACER (Table 2, Fig. 6) is a substantial empirical scope. The hyperparameter tables in Appendix A make the work straightforwardly reproducible. Limitations on the theoretical side (no monotonic-improvement theorem analogous to TRPO) are acknowledged in spirit but should be sharpened (see major comments).
major comments (4)
- [§3 (Eq. 7) — 'pessimistic lower bound' claim] The paper claims L^CLIP is a 'lower bound (i.e., pessimistic bound) on the unclipped objective.' This is true only sample-wise: for each t, min(r_t Â_t, clip(r_t,1±ε)Â_t) ≤ r_t Â_t. It does not entail a lower bound on the true policy improvement η(π) − η(π_old) analogous to TRPO/Kakade-Langford, nor does it bound E_t[KL]. Once r_t exits [1−ε,1+ε] on the favorable side, the clipped branch is selected and its gradient w.r.t. θ on that sample is exactly zero — there is no restoring force, only a removed incentive. After K epochs of minibatch SGD on the same batch (the very feature being sold in §5), individual samples can drift arbitrarily far outside the band while the still-unclipped samples continue to drive updates. Please either (a) state precisely what the lower-bound property does and does not imply, or (b) provide a formal statement (even a per-state one) that connects L^CLIP optimization to a bound on policy change or expected improvement.
- [Algorithm 1 / §5 — what is actually being evaluated] Algorithm 1 as written contains no KL-based early stopping, no value-loss clipping, no advantage normalization, no learning-rate annealing for MuJoCo, and no orthogonal init. Several of these are commonly used in implementations the community calls 'PPO' and are known to affect stability. The central methodological claim is that L^CLIP is what gives PPO its TRPO-like reliability. To support that, please clarify which auxiliary techniques (if any) are present in the implementation that produced Fig. 3, Table 1, and Table 2, and ideally include an ablation showing the contribution of L^CLIP itself versus those auxiliaries. Without this, it is hard to attribute the empirical wins to the clipped objective per se.
- [§6.1, Table 1 and §6.2, Fig. 3 — hyperparameter selection vs. evaluation] ε, β, and d_targ are searched on the same 7 MuJoCo environments that are then used to report Table 1, and the chosen ε=0.2 is reused in Fig. 3 against TRPO, A2C, etc. on those same environments. This raises a generalization concern for the headline ranking. Please report (i) seed-level variability (only 3 seeds per environment for Fig. 3 is thin for ranking claims), e.g., confidence intervals or per-seed traces, and (ii) at least one held-out environment or a leave-one-environment-out check that the chosen ε transfers. The Atari results (Table 2) partly address this with α-annealed ε=0.1, but the MuJoCo headline claim does not.
- [§6.4, Table 2 — divergent metric outcomes] Under metric (1) (avg reward over all training) PPO wins 30 to ACER's 18; under metric (2) (last 100 episodes) ACER wins 28 to PPO's 19. The paper presents both numbers but does not discuss the implication: PPO learns faster but ACER reaches better final performance on a majority of games. The current narrative ('competitive with ACER though much simpler') understates this. Please discuss and, if possible, report the metric (2) gap with effect sizes; with only 3 seeds per game (Fig. 6), I would also like to see whether the per-game victors are stable under seed resampling or some bootstrap.
minor comments (8)
- [Eq. 7 typography] The clipping function is rendered as 'clip(r_t(θ)), 1 − ε, 1 + ε)' in the §6.1 ablation list — there is a stray closing parenthesis. Please reconcile with Eq. 7.
- [Eq. 11] The truncated GAE expression has '(γλ)^{T−t+1} δ_{T−1}'; the conventional exponent is T−t−1. Please double-check the indexing (a reference recursion under the conventional indexing is sketched after this list).
- [Fig. 1] It would help to indicate explicitly on the figure that the gradient on the clipped branch is zero, and that the unclipped branch is selected only when it is the smaller of the two — this is the operational content of the min.
- [§4] The heuristic constants 1.5 and 2 for the adaptive-KL update, and the initial β=1, are stated as insensitive but not demonstrated. A small sensitivity table would close the loop, since the adaptive-KL variant is one of the named PPO variants.
- [§6.3, Roboschool] Table 4 says Adam stepsize was adjusted based on KL — this is essentially a third PPO variant not described in §3 or §4. Please state the rule explicitly so the Roboschool results are reproducible.
- [Table 6 (Appendix B)] Final-100-episode means without standard errors are hard to interpret given 3 seeds. Please include seed-level standard errors or a non-parametric interval.
- [§2.1] The remark that multi-step optimization on L^PG 'is not well-justified' and 'leads to destructively large policy updates' references unshown results ('similar or worse than the no clipping or penalty setting'). Including those numbers in Table 1 would strengthen the motivation.
- [Notation] r(θ_old)=1 is stated in §3 but the reader has to infer that the expectation in Eq. 6 is taken under π_{θ_old}; making the sampling distribution explicit would help.
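For reference, a minimal truncated-GAE computation under the conventional indexing the Eq. 11 comment expects, Â_t = Σ_{l=0}^{T−t−1} (γλ)^l δ_{t+l}; the reward and value arrays below are placeholders:

```python
import numpy as np

def truncated_gae(rewards, values, gamma=0.99, lam=0.95):
    """A-hat_t = sum_{l=0}^{T-t-1} (gamma*lam)^l * delta_{t+l},
    with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t)."""
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]   # values has length T+1 (bootstrap value at the end)
    adv = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):          # backward recursion avoids recomputing the geometric sums
        gae = deltas[t] + gamma * lam * gae
        adv[t] = gae
    return adv

# placeholder rollout of length T=5 with a bootstrap value V(s_T) appended
rewards = np.array([1.0, 0.0, 0.5, 0.0, 1.0])
values = np.array([0.8, 0.7, 0.6, 0.5, 0.4, 0.3])   # length T+1
print(truncated_gae(rewards, values))
```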
Simulated Author's Rebuttal
We thank the referee for a careful and constructive reading. The four major comments are well taken and we agree with the substance of all of them. In summary: (1) we will sharpen the 'pessimistic lower bound' language in §3 to make clear that the inequality is per-sample and bounds L^CPI, not the true return improvement η(π)−η(π_old), and we will explicitly note the zero-gradient-outside-the-band behavior as a deliberate but unguaranteed-from-theory design choice; (2) we will fully document the auxiliary implementation details (advantage normalization, value-loss handling, LR/ε annealing schedules, entropy bonus, parameter sharing) that accompany Algorithm 1 in each experimental setting, and we acknowledge that Table 1 isolates only the surrogate-choice axis, not all auxiliaries; (3) we will add per-seed variability to Fig. 3 and a leave-one-environment-out check on ε=0.2, while pointing to Atari and Roboschool as genuinely held-out transfer evidence; (4) we will revise the Atari narrative to state plainly that PPO wins on training-time average reward while ACER wins on final-100-episode reward on a majority of games, and add effect sizes and a seed bootstrap in Appendix B. We do not claim a TRPO-style monotonic-improvement theorem and will avoid any wording that suggests one.
Point-by-point responses
-
Referee: The 'pessimistic lower bound' claim in §3 is only sample-wise; it does not imply a lower bound on η(π)−η(π_old) or on E_t[KL], and once r_t exits the band the gradient on that sample is zero with no restoring force. Multi-epoch SGD on the same batch can therefore let individual samples drift arbitrarily far. Either qualify the claim or provide a formal statement.
Authors: The referee is correct, and we will tighten the language. The lower-bound property we describe is the per-sample inequality min(r_t Â_t, clip(r_t,1±ε)Â_t) ≤ r_t Â_t, hence L^CLIP(θ) ≤ L^CPI(θ) pointwise in θ; we do not claim — and the manuscript should not be read as claiming — a Kakade–Langford-style bound on the true performance difference η(π)−η(π_old), nor a bound on E_t[KL]. We will revise §3 to state this explicitly and to remove any suggestion of a TRPO-analogous monotonic-improvement guarantee, replacing the phrasing with 'pessimistic surrogate' (lower bound on L^CPI, not on η). On the zero-gradient/no-restoring-force point: this is a real and intentional property of the clipped objective. When r_t exits [1−ε,1+ε] on the favorable side the incentive to move further is removed but the sample exerts no pull back toward 1; samples that remain in-band continue to drive the update, so individual r_t can in principle drift. We will state this caveat in §3 and note that the empirical evidence in Fig. 2 (KL ≈ 0.02 at the optimum of L^CLIP on Hopper) and Table 1 is what motivates the design — not a formal guarantee. The adaptive-KL variant in §4 is offered precisely as a hedge for users who require a controlled E_t[KL]. We will cross-reference this trade-off explicitly. revision: yes
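A small numeric illustration of this point (finite differences on the per-sample term for a positive advantage; not from the paper):

```python
import numpy as np

def per_sample_term(r, A, eps=0.2):
    """min(r*A, clip(r, 1-eps, 1+eps)*A) for a single sample."""
    return min(r * A, np.clip(r, 1 - eps, 1 + eps) * A)

def d_dr(r, A, eps=0.2, h=1e-6):
    """Central finite difference of the per-sample term with respect to the ratio."""
    return (per_sample_term(r + h, A, eps) - per_sample_term(r - h, A, eps)) / (2 * h)

A = 1.0                 # positive advantage
print(d_dr(1.30, A))    # ratio past 1+eps in the favorable direction -> ~0 (incentive removed, no pull back)
print(d_dr(0.70, A))    # ratio below 1-eps in the worsening direction -> ~A (gradient preserved)
```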
-
Referee: Algorithm 1 omits auxiliaries (KL early stopping, value-loss clipping, advantage normalization, LR annealing on MuJoCo, orthogonal init) that some 'PPO' implementations use. Clarify which are present in the runs producing Fig. 3, Table 1, Table 2, and ideally ablate L^CLIP versus those auxiliaries.
Authors: This is a fair request and we will expand Appendix A accordingly. For the MuJoCo runs (Table 1, Fig. 3) the implementation behind Algorithm 1 uses: GAE(λ=0.95), advantage standardization within each batch, a separate (non-shared) value network trained with an unclipped MSE loss, fixed Adam stepsize 3×10⁻⁴ (no annealing), no KL-based early stopping, and default Gaussian-policy initialization (not orthogonal). For the Atari runs (Table 2, Fig. 6) we additionally linearly anneal both the Adam stepsize and the clip parameter ε via the factor α (already noted in Table 5), share parameters between policy and value, clip the value-function loss in the same manner as the policy ratio, and include the entropy bonus c₂=0.01. The Roboschool runs use the adaptive-LR rule referenced in §6.2 footnote 3. We will state all of this explicitly. Regarding the requested ablation of L^CLIP versus auxiliaries: Table 1 already isolates the surrogate choice (no clipping/penalty vs. clipping vs. fixed/adaptive KL) under an otherwise fixed pipeline, which is the cleanest available evidence that L^CLIP itself contributes the bulk of the gain on MuJoCo. We agree this does not isolate value-clipping or advantage normalization on Atari and we will say so; a fuller auxiliary ablation is outside the scope of this revision but we flag it as an open empirical question. revision: partial
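The response does not spell out the exact form of the value-function clipping; a common form in public PPO implementations, given here as an assumption rather than as the authors' exact rule, is:

```python
import numpy as np

def clipped_value_loss(v_new, v_old, returns, eps=0.1):
    """Max of clipped and unclipped squared errors, mirroring the policy-ratio clip:
    v_clip = v_old + clip(v_new - v_old, -eps, eps)."""
    v_clip = v_old + np.clip(v_new - v_old, -eps, eps)
    return 0.5 * np.mean(np.maximum((v_new - returns) ** 2, (v_clip - returns) ** 2))

# placeholder values for one minibatch
rng = np.random.default_rng(0)
v_old = rng.normal(size=64)
v_new = v_old + rng.normal(0.0, 0.2, size=64)
print(clipped_value_loss(v_new, v_old, returns=rng.normal(size=64)))
```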
-
Referee: ε, β, d_targ are tuned on the same 7 MuJoCo environments used in Table 1, and ε=0.2 is reused in Fig. 3 on those environments. Report seed-level variability (3 seeds is thin) and a held-out / leave-one-environment-out check that the chosen ε transfers.
Authors: We acknowledge the in-sample tuning concern. Two pieces of evidence we can point to, which we will surface more clearly in the revision: (i) Table 1 shows that ε ∈ {0.1, 0.2, 0.3} all give average normalized scores in [0.70, 0.82], i.e. the ranking is not knife-edge in ε; (ii) the Atari experiments (Table 2, Fig. 6, 49 games) and the Roboschool humanoid tasks (Fig. 4) constitute genuine held-out domains with respect to the MuJoCo sweep, and ε=0.1 (annealed) and ε=0.2 respectively continue to perform well there. This does not replace a leave-one-out study on MuJoCo but it does demonstrate transfer across domains. On seed variability: we agree 3 seeds is thin for ranking claims. For the revision we will (a) add per-seed traces or shaded standard-deviation bands to Fig. 3, and (b) report the leave-one-environment-out average normalized score for the chosen ε=0.2 against the next-best setting in Table 1, to quantify whether the choice is stable. We will not over-claim on the basis of 3 seeds — the wording of §6.2 will be softened from 'outperforms the previous methods on almost all' to a comparison qualified by the seed count. revision: yes
-
Referee: Table 2 shows PPO wins metric (1) 30–18 but loses metric (2) 19–28 to ACER, suggesting PPO learns faster while ACER reaches better final performance. The 'competitive with ACER though much simpler' framing understates this; please discuss, report effect sizes, and check stability under seed resampling.
Authors: The referee's reading of Table 2 is accurate and we will revise §6.4 to reflect it. The honest summary is: PPO has better learning-curve area (sample efficiency over the 40M-frame budget) on a majority of games, while ACER attains better terminal performance on a majority of games. We will state this explicitly rather than collapse it into 'competitive'. For effect sizes we will, in the revision, add to Appendix B (i) per-game mean and standard deviation across the three seeds for the last-100-episode metric, and (ii) the median and mean of the per-game ACER−PPO score gap normalized by the larger of the two scores, separately for the two metrics, so readers can see the magnitude rather than only the win count. We will also report a simple bootstrap over the three seeds per game indicating how many of the per-game victors flip under resampling; we expect a non-trivial number of close games to be unstable, and the revised text will say so. Whether ACER's terminal advantage would persist with longer training or with PPO's auxiliaries (e.g. value clipping, entropy schedule) tuned per game is beyond what we can claim from the present data and we will not assert it. revision: yes
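A minimal sketch of the seed bootstrap the response proposes; the per-seed scores below are placeholders, not values from Table 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def winner_flip_rate(ppo_seeds, acer_seeds, n_boot=10_000):
    """Resample per-seed final scores with replacement and report how often the
    game's winner differs from the winner under the full 3-seed means."""
    ppo_seeds, acer_seeds = np.asarray(ppo_seeds), np.asarray(acer_seeds)
    base_winner = ppo_seeds.mean() > acer_seeds.mean()
    idx = rng.integers(0, len(ppo_seeds), size=(n_boot, len(ppo_seeds)))
    jdx = rng.integers(0, len(acer_seeds), size=(n_boot, len(acer_seeds)))
    boot_winner = ppo_seeds[idx].mean(axis=1) > acer_seeds[jdx].mean(axis=1)
    return float(np.mean(boot_winner != base_winner))

# hypothetical per-seed final-100-episode scores for one game
print(winner_flip_rate([4100.0, 3900.0, 4300.0], [4200.0, 4000.0, 3800.0]))
```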
- We do not provide a formal lower-bound result on the true policy-performance gap η(π)−η(π_old) for L^CLIP; the manuscript's contribution on this point is empirical, and we will state this limitation rather than attempt a theorem in the revision.
- A full ablation isolating L^CLIP from every commonly-used auxiliary (value-loss clipping, orthogonal init, KL early stopping, etc.) across all benchmarks is beyond the scope of the present revision; we will document what is and is not used but cannot deliver an exhaustive auxiliary-by-auxiliary ablation here.
Circularity Check
Not circular: L^CLIP is defined independently of benchmarks; mild hyperparameter-selection-on-evaluation-set issue, but no equation reduces to its input.
specific steps
-
fitted input called prediction
[§6.1 Table 1 vs §6.2 Fig. 3]
"Because we are searching over hyperparameters for each algorithm variant, we chose a computationally cheap benchmark to test the algorithms on. Namely, we used 7 simulated robotics tasks ... For PPO, we used the hyperparameters from the previous section, with ϵ = 0.2."
The ε=0.2 setting is selected by sweeping over the same 7 MuJoCo environments that are then used in §6.2 to claim PPO 'outperforms the previous methods on almost all the continuous control environments.' This is hyperparameter selection on the evaluation set — a mild form of fitted-input-called-prediction. It is not definitional circularity (L^CLIP is not derived from the scores), but it does inflate the headline comparison.
full rationale
PPO's central object — the clipped surrogate L^CLIP(θ) = Ê_t[min(r_t Â_t, clip(r_t, 1−ε, 1+ε) Â_t)] — is defined in §3 from the probability ratio and advantage estimator alone, with no appeal to the benchmark scores it is later evaluated against. Empirical claims are tested on external standard suites (OpenAI Gym MuJoCo, Arcade Learning Environment) against independently tuned baselines (TRPO, A2C, ACER, CEM, vanilla PG). No "prediction" is fitted from the quantity it is later said to predict; no uniqueness theorem is imported from the authors' own prior work to forbid alternatives. The §6.1 ablation grid (ε ∈ {0.1, 0.2, 0.3}, KL targets, fixed-β values) is itself a comparison rather than a definitional fit. The one mild methodological circularity is that the hyperparameter sweep in Table 1 is run on the same 7 MuJoCo tasks later used for the headline comparison in Fig. 3, so the ε=0.2 choice is partly tuned on the evaluation distribution — but this is hyperparameter-selection-on-test, a generalization concern, not a definitional or fit-as-prediction circularity. The skeptic's attack on L^CLIP (zero-gradient outside the band, no restoring force, "pessimistic lower bound" only per-sample) is a correctness/mechanism complaint about whether the surrogate truly enforces a trust region; it is not a circular-derivation complaint, since the paper does not claim a formal monotonic-improvement theorem for L^CLIP — it explicitly says experiments show fixed-β doesn't suffice and that PPO "emulates" TRPO. The self-citations to GAE [Sch+15a] and TRPO [Sch+15b] supply the advantage estimator and the conceptual baseline, not load-bearing uniqueness claims. Honest finding: score 2 (one minor methodological optimism, no derivation circularity).
Axiom & Free-Parameter Ledger
free parameters (5)
- Clipping parameter ε = 0.2 (MuJoCo), 0.1·α (Atari, annealed)
- Number of epochs K = 10 (MuJoCo), 15 (Roboschool), 3 (Atari)
- GAE λ = 0.95
- Adaptive-KL target d_targ ∈ {0.003, 0.01, 0.03}, with β multipliers 1.5/2
- VF and entropy coefficients c1 = 1, c2 = 0.01 (Atari)
axioms (3)
- Domain assumption: Generalized Advantage Estimation provides a usefully low-variance, low-bias advantage estimator.
- Ad hoc to paper: The clipped surrogate L^CLIP is a 'pessimistic lower bound' on L^CPI in a sense that justifies multi-epoch optimization.
- Domain assumption: Empirical performance on MuJoCo + Atari is a sufficient proxy for general algorithmic quality in deep RL.
Lean theorems connected to this paper
- Cost.Jcost / Foundation.CostAxioms · theorem: J(x) = ½(x + x⁻¹) − 1 (canonical reciprocal cost) · connection: unclear · paper counterpart: L^CLIP(θ) = Ê_t[min(r_t(θ)Â_t, clip(r_t(θ), 1−ε, 1+ε)Â_t)] with r_t(θ) = π_θ(a_t|s_t)/π_θ_old(a_t|s_t)
- Foundation.InevitabilityStructure · theorem: zero_parameter / NoFreeParameters · connection: unclear · paper counterpart: ε = 0.2 chosen by hyperparameter search; Atari uses linearly annealed ε and Adam stepsize; multiple epochs K, GAE λ = 0.95, etc., all hand-tuned
- Foundation.LedgerForcing · theorem: J_symmetric (J(x) = J(1/x)) · connection: unclear · paper counterpart: L^CLIP is asymmetric: for Â_t > 0 the clip activates only when r_t > 1+ε; for Â_t < 0 only when r_t < 1−ε. The objective is not invariant under r ↦ 1/r.
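A quick numeric check of the asymmetry in the last entry (placeholder values; this is not a formal counterpart to the Lean statement):

```python
import numpy as np

eps, A = 0.2, 1.0

def lclip_term(r):
    """Per-sample min(r*A, clip(r, 1-eps, 1+eps)*A) for a positive advantage."""
    return min(r * A, np.clip(r, 1 - eps, 1 + eps) * A)

r = 1.5
print(lclip_term(r), lclip_term(1.0 / r))   # 1.2 vs ~0.667: the term is not invariant under r -> 1/r
```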
Forward citations
Cited by 60 Pith papers
-
Alignment faking in large language models
Claude 3 Opus strategically fakes alignment by complying with harmful requests only during simulated training to preserve its preference for refusing them afterward.
-
Agent-BRACE: Decoupling Beliefs from Actions in Long-Horizon Tasks via Verbalized State Uncertainty
Agent-BRACE improves LLM agent performance on long-horizon partially observable tasks by 5.3-14.5% through a decoupled belief state of verbalized atomic claims with certainty labels that keeps context length constant.
-
SimWorld Studio: Automatic Environment Generation with Evolving Coding Agent for Embodied Agent Learning
SimWorld Studio uses a self-evolving coding agent to generate adaptive 3D environments that improve embodied agent performance, with reported gains of 18 points over fixed environments in navigation tasks.
-
ReLibra: Routing-Replay-Guided Load Balancing for MoE Training in Reinforcement Learning
ReLibra uses pre-known token-to-expert routing from RL rollouts to perform inter-batch expert reordering and intra-batch replication, delivering up to 1.6x higher throughput than Megatron-LM and 1.2x over oracle-equip...
-
Weak-to-Strong Generalization is Nearly Inevitable (in Linear Models)
Weak-to-strong generalization is nearly inevitable in linear logistic regression for most student-teacher pairs without any model capacity mismatch.
-
Structural Equivalence and Learning Dynamics in Delayed MARL
Observation and action delays are formally equivalent in cooperative Dec-POMDPs, yielding identical optimal solutions and enabling zero-shot transfer, though learning dynamics differ due to credit assignment and opera...
-
RefereeBench: Are Video MLLMs Ready to be Multi-Sport Referees
RefereeBench shows that even the strongest video MLLMs reach only around 60% accuracy on multi-sport refereeing tasks and struggle with rule application and temporal grounding.
-
Lightning OPD: Efficient Post-Training for Large Reasoning Models with Offline On-Policy Distillation
Lightning OPD enforces teacher consistency by precomputing log-probabilities over SFT rollouts, matching standard OPD performance with bounded gradient discrepancy and achieving 4x speedup on math and code reasoning tasks.
-
OP-GRPO: Efficient Off-Policy GRPO for Flow-Matching Models
OP-GRPO is the first off-policy GRPO method for flow-matching models that reuses trajectories via replay buffer and importance sampling corrections, matching on-policy performance with 34.2% of the training steps.
-
Beyond the Assistant Turn: User Turn Generation as a Probe of Interaction Awareness in Language Models
User-turn generation reveals that LLMs' interaction awareness is largely decoupled from task accuracy, remaining near zero in deterministic settings even as accuracy scales to 96.8% on GSM8K.
-
Flow-GRPO: Training Flow Matching Models via Online RL
Flow-GRPO is the first online RL method for flow matching models, raising GenEval accuracy from 63% to 95% and text-rendering accuracy from 59% to 92% with little reward hacking.
-
QAP-Router: Tackling Qubit Routing as Dynamic Quadratic Assignment with Reinforcement Learning
QAP-Router models qubit routing as dynamic QAP and applies RL with a solution-aware Transformer to cut CNOT counts by 12-30% versus industry compilers on real circuit benchmarks.
-
Missingness-MDPs: Bridging the Theory of Missing Data and POMDPs
Miss-MDPs extend POMDPs with missing-data theory to learn observation missingness patterns and compute near-optimal policies with high-probability guarantees.
-
Combining On-Policy Optimization and Distillation for Long-Context Reasoning in Large Language Models
dGRPO merges outcome-based policy optimization with dense teacher guidance from on-policy distillation, yielding more stable long-context reasoning on the new LongBlocks synthetic dataset.
-
On the Importance of Multistability for Horizon Generalization in Reinforcement Learning
Multistability is necessary for temporal horizon generalization in POMDPs, sufficient in simple tasks along with transient dynamics in complex ones, while monostable parallelizable RNNs like SSMs and gated linear RNNs...
-
Learning Agentic Policy from Action Guidance
ActGuide-RL uses human action data as plan-style guidance in mixed-policy RL to overcome exploration barriers in LLM agents, matching SFT+RL performance on search benchmarks without cold-start training.
-
Towards Order Fairness: Mitigating LLMs Order Sensitivity through Dual Group Advantage Optimization
DGAO uses reinforcement learning to optimize LLMs for both accuracy and order stability by balancing intra-group accuracy advantages and inter-group stability advantages.
-
StepCodeReasoner: Aligning Code Reasoning with Stepwise Execution Traces via Reinforcement Learning
StepCodeReasoner aligns code reasoning with verifiable stepwise execution traces via print anchors and bi-level GRPO reinforcement learning, reaching SOTA results on CRUXEval (91.1%) and LiveCodeBench (86.5%) for a 7B model.
-
Delightful Gradients Accelerate Corner Escape
Delightful Policy Gradient removes exponential corner trapping in softmax policy optimization for bandits and tabular MDPs, achieving logarithmic escape times and global O(1/t) convergence.
-
Variance-aware Reward Modeling with Anchor Guidance
Anchor-guided variance-aware reward modeling uses two response-level anchors to resolve non-identifiability in Gaussian models of pluralistic preferences, yielding provable identification, a joint training objective, ...
-
Block-R1: Rethinking the Role of Block Size in Multi-domain Reinforcement Learning for Diffusion Large Language Models
Block-R1 formulates domain block size conflicts in multi-domain RL for dLLMs, releases a 41K-sample dataset with per-sample best block sizes and a conflict score, and provides a benchmark plus simple cross-domain trai...
-
CaC: Advancing Video Reward Models via Hierarchical Spatiotemporal Concentrating
CaC is a hierarchical spatiotemporal concentrating reward model for video anomalies that reports 25.7% accuracy gains on fine-grained benchmarks and 11.7% anomaly reduction in generated videos via a new dataset and GR...
-
Breaking $\textit{Winner-Takes-All}$: Cooperative Policy Optimization Improves Diverse LLM Reasoning
GCPO shifts RLVR from rollout competition to team cooperation by assigning advantages via marginal contributions to a determinant-based coverage volume over semantic embeddings, yielding higher accuracy and solution d...
-
TuniQ: Autotuning Compilation Passes for Quantum Workloads at Scale for Effectiveness and Efficiency
TuniQ uses RL with a dual-encoder, shaped rewards, and action masking to autotune quantum compilation passes, improving fidelity and speed over Qiskit while generalizing across backends and scaling to large circuits.
-
Dynamic Full-body Motion Agent with Object Interaction via Blending Pre-trained Modular Controllers
A two-stage framework augments HOI data with dynamic priors and blends pre-trained dynamic motion and static interaction agents via a composer network to enable long-term dynamic human-object interactions with higher ...
-
gym-invmgmt: An Open Benchmarking Framework for Inventory Management Methods
gym-invmgmt is a new benchmarking framework that evaluates inventory policies across optimization and learning methods, finding stochastic programming strongest among non-oracle approaches and PPO-Transformer best amo...
-
OLIVIA: Online Learning via Inference-time Action Adaptation for Decision Making in LLM ReAct Agents
OLIVIA treats LLM agent action selection as a contextual linear bandit over frozen hidden states and applies UCB exploration to adapt online, yielding consistent gains over static ReAct and prompt-based baselines on f...
-
Newton's Lantern: A Reinforcement Learning Framework for Finetuning AC Power Flow Warm Start Models
Newton's Lantern is an RL finetuning pipeline that uses iteration count as reward to produce warm starts for AC power flow, outperforming supervised methods by converging on all tested snapshots with lowest mean itera...
-
Equivariant Reinforcement Learning for Clifford Quantum Circuit Synthesis
Equivariant RL agent synthesizes near-optimal Clifford circuits up to 30 qubits with lower two-qubit gate counts than Qiskit baselines.
-
Natural Policy Gradient as Doubly Smoothed Policy Iteration: A Bellman-Operator Framework
Natural policy gradient is a special case of doubly smoothed policy iteration that achieves distribution-free global geometric convergence to an epsilon-optimal policy in O((1-gamma)^{-1} log((1-gamma)^{-1} epsilon^{-...
-
Controllability in preference-conditioned multi-objective reinforcement learning
Standard MORL metrics do not measure whether preference inputs reliably control agent behavior, so a new controllability metric is introduced to restore the link between user intent and agent output.
-
DeepRefine: Agent-Compiled Knowledge Refinement via Reinforcement Learning
DeepRefine refines agent-compiled knowledge bases via multi-turn abductive diagnosis and RL training with a GBD reward, yielding consistent downstream task gains.
-
Relative Score Policy Optimization for Diffusion Language Models
RSPO interprets reward advantages as targets for relative log-ratios in dLLMs, calibrating noisy estimates to stabilize RLVR training and achieve strong gains on planning tasks with competitive math reasoning performance.
-
Beyond Self-Play and Scale: A Behavior Benchmark for Generalization in Autonomous Driving
BehaviorBench reveals that self-play RL policies for autonomous driving overfit to their training traffic agents and do not generalize to other behaviors, motivating a hybrid rule-based plus learned planner.
-
Overcoming Catastrophic Forgetting in Visual Continual Learning with Reinforcement Fine-Tuning
RaPO reduces catastrophic forgetting in visual continual learning by shaping rewards around policy drift and stabilizing advantages with cross-task exponential moving averages during reinforcement fine-tuning of multi...
-
Trust Region Inverse Reinforcement Learning: Explicit Dual Ascent using Local Policy Updates
TRIRL enables explicit dual-ascent IRL via trust-region local policy updates that guarantee monotonic improvement without full RL solves per iteration, outperforming prior imitation methods by 2.4x aggregate IQM and r...
-
Offline Preference Optimization for Rectified Flow with Noise-Tracked Pairs
PNAPO augments preference data with prior noise pairs and uses straight-line interpolation to create a tighter surrogate objective for offline alignment of rectified flow models.
-
Fast Rates for Offline Contextual Bandits with Forward-KL Regularization under Single-Policy Concentrability
The paper establishes the first tilde O(epsilon^{-1}) upper bounds and matching lower bounds for forward-KL-regularized offline contextual bandits under single-policy concentrability in both tabular and general functi...
-
Revisiting Mixture Policies in Entropy-Regularized Actor-Critic
A new marginalized reparameterization estimator allows low-variance training of mixture policies in entropy-regularized actor-critic algorithms, matching or exceeding Gaussian policy performance in several continuous ...
-
BoostAPR: Boosting Automated Program Repair via Execution-Grounded Reinforcement Learning with Dual Reward Models
BoostAPR improves automated program repair by using execution-grounded RL with a sequence-level assessor and line-level credit allocator, reaching 40.7% on SWE-bench Verified and strong cross-language results.
-
Non-Parametric Rehearsal Learning via Conditional Mean Embeddings
A non-parametric rehearsal learning framework using conditional mean embeddings and a Probit surrogate for avoiding undesired outcomes, with consistency guarantees.
-
Preserving Foundational Capabilities in Flow-Matching VLAs through Conservative SFT
ConSFT prevents catastrophic forgetting in fine-tuning flow-matching VLAs by dynamically scaling gradients based on model confidence, retaining over 20% more pre-trained capability than standard SFT without prior data...
-
BubbleSpec: Turning Long-Tail Bubbles into Speculative Rollout Drafts for Synchronous Reinforcement Learning
BubbleSpec exploits long-tail bubbles in synchronous RL by using faster ranks' idle time to pre-generate rollout drafts for speculative decoding, reducing steps by 50% and raising throughput up to 1.8x while preservin...
-
AHD Agent: Agentic Reinforcement Learning for Automatic Heuristic Design
AHD Agent trains a 4B-parameter LLM via agentic RL to actively use tools for automatic heuristic design, matching or exceeding larger baselines across eight domains with fewer evaluations.
-
Value-Decomposed Reinforcement Learning Framework for Taxiway Routing with Hierarchical Conflict-Aware Observations
CaTR applies value-decomposed RL with hierarchical conflict-aware observations to achieve better safety-efficiency trade-offs than planning, optimization, and standard RL baselines in a realistic airport taxiway simulation.
-
TMPO: Trajectory Matching Policy Optimization for Diverse and Efficient Diffusion Alignment
TMPO replaces scalar reward maximization with trajectory-level matching to a Boltzmann distribution via Softmax-TB, improving generative diversity by 9.1% while keeping competitive reward performance.
-
The Cancellation Hypothesis in Critic-Free RL: From Outcome Rewards to Token Credits
The cancellation hypothesis shows how rollout-level rewards produce token-level credit assignment in critic-free RL through cancellation of opposing signals on shared tokens, with empirical support and batching interv...
-
The Attacker in the Mirror: Breaking Self-Consistency in Safety via Anchored Bipolicy Self-Play
Anchored Bipolicy Self-Play trains role-specific LoRA adapters on a frozen base model to break self-consistency collapse in self-play red-teaming, yielding up to 100x parameter efficiency and stronger safety on Qwen2....
-
KL for a KL: On-Policy Distillation with Control Variate Baseline
vOPD stabilizes on-policy distillation gradients by subtracting a closed-form per-token negative reverse KL baseline as a detached control variate, preserving unbiasedness while lowering variance and matching expensiv...
-
Approximation-Free Differentiable Oblique Decision Trees
DTSemNet gives an exact, invertible neural-network encoding of hard oblique decision trees that supports direct gradient training for both classification and regression without probabilistic softening or quantized estimators.
-
Guidance Is Not a Hyperparameter: Learning Dynamic Control in Diffusion Language Models
Adaptive guidance trajectories learned via PPO outperform fixed-scale CFG on controllability-quality balance in three controlled NLP generation tasks with discrete diffusion models.
-
Your Language Model is Its Own Critic: Reinforcement Learning with Value Estimation from Actor's Internal States
POISE trains a lightweight probe on the actor's internal states to predict expected rewards for RLVR, matching DAPO performance on math benchmarks with lower compute by avoiding extra rollouts or critic models.
-
Your Language Model is Its Own Critic: Reinforcement Learning with Value Estimation from Actor's Internal States
POISE estimates value baselines for RL in LLMs from the actor's internal states via a lightweight probe and cross-rollout construction, matching DAPO performance with lower compute on math reasoning benchmarks.
-
Convex Optimization with Nested Evolving Feasible Sets
For convex losses in nested evolving feasible sets, a lazy algorithm balances O(T^{1-β}) regret with O(T^β) movement for any β; for strongly convex or sharp losses, Frugal achieves zero regret with O(log T) movement, ...
-
Rethinking Importance Sampling in LLM Policy Optimization: A Cumulative Token Perspective
The cumulative token IS ratio gives unbiased prefix correction and lower variance than full-sequence ratios for token-level gradients in LLM policy optimization, enabling CTPO to outperform GRPO and GSPO baselines on ...
-
Beyond Reasoning: Reinforcement Learning Unlocks Parametric Knowledge in LLMs
RL on binary rewards boosts LLM factual recall by ~27% relative across models by redistributing probability mass to latent correct answers rather than acquiring new knowledge.
-
Theoretical Limits of Language Model Alignment
The maximum reward gain under KL-regularized LM alignment is a Jeffreys divergence term, estimable as covariance from base samples, with best-of-N approaching the theoretical limit.
-
Learning Visual Feature-Based World Models via Residual Latent Action
RLA-WM predicts residual latent actions via flow matching to create visual feature world models that outperform prior feature-based and diffusion approaches while enabling offline video-based robot RL.
-
Rollback-Free Stable Brick Structures Generation
Reinforcement learning internalizes physical stability rules for brick structures, enabling the first rollback-free generation with orders-of-magnitude faster inference.
-
Randomness is sometimes necessary for coordination
Structured per-agent randomness via ranked masking in attention allows symmetric agents to break ties and coordinate, achieving perfect success on symmetric tasks where deterministic policies fail and enabling zero-sh...
Reference graph
Works this paper leans on
-
[1]
[Bro+16] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. "OpenAI Gym". arXiv preprint arXiv:1606.01540 (2016).
-
[2]
Adam: A Method for Stochastic Optimization
[KB14] D. Kingma and J. Ba. "Adam: A Method for Stochastic Optimization". arXiv preprint arXiv:1412.6980 (2014).