pith. machine review for the scientific record.

Journal of Machine Learning Research

4 Pith papers cite this work. Polarity classification is still indexing.


fields: cs.LG (4)

years: 2026 (4)

verdicts: unverdicted (4)

representative citing papers

Delightful Gradients Accelerate Corner Escape

cs.LG · 2026-05-12 · unverdicted · novelty 7.0

Delightful Policy Gradient removes exponential corner trapping in softmax policy optimization for bandits and tabular MDPs, achieving logarithmic escape times and global O(1/t) convergence.
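The corner-trapping phenomenon this paper targets can be illustrated with vanilla softmax policy gradient on a two-armed bandit (a hypothetical toy sketch, not the paper's Delightful Policy Gradient): when the policy is initialized near a suboptimal corner of the simplex, the exact gradient is proportional to the probability of the good arm, so early progress is exponentially slow in the initial logit gap.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rewards = np.array([1.0, 0.0])   # arm 0 is optimal
theta = np.array([-3.0, 3.0])    # start near the suboptimal corner, pi ~ (0.002, 0.998)
lr = 0.1

for t in range(5000):
    pi = softmax(theta)
    # exact bandit policy gradient: d/dtheta_i E[r] = pi_i * (r_i - pi . r)
    theta += lr * pi * (rewards - pi @ rewards)

pi = softmax(theta)
# the policy does escape, but the early updates are O(pi_0), hence very slow
```

Shrinking the initial logit gap shortens the escape time dramatically, which is the exponential-trapping behavior the cited paper claims to remove.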

Actor-Critic Algorithm for Dynamic Expectile and CVaR

cs.LG · 2026-05-08 · unverdicted · novelty 6.0

Constructs a model-free, off-policy actor-critic algorithm for dynamic expectile and CVaR objectives, combining a surrogate policy gradient (without transition perturbation) with elicitability-based value learning, and empirically outperforms baselines in risk-averse domains.
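For context on the risk measure involved, a minimal empirical CVaR computation (an illustrative sketch only, not the paper's dynamic or elicitability-based machinery): CVaR at level alpha is the mean loss in the tail at or beyond the alpha-quantile (the value-at-risk).

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Mean of the losses at or beyond the empirical alpha-quantile (VaR)."""
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return tail.mean()

losses = np.arange(1.0, 101.0)      # losses 1..100
print(empirical_cvar(losses, 0.9))  # mean of the worst 10% tail -> 95.5
```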

citing papers explorer

Showing 4 of 4 citing papers.

  • Delightful Gradients Accelerate Corner Escape cs.LG · 2026-05-12 · unverdicted · none · ref 4

    Delightful Policy Gradient removes exponential corner trapping in softmax policy optimization for bandits and tabular MDPs, achieving logarithmic escape times and global O(1/t) convergence.

  • Policy Gradient Methods for Non-Markovian Reinforcement Learning cs.LG · 2026-05-11 · unverdicted · none · ref 48

    Introduces the Agent State-Markov Policy Gradient (ASMPG) algorithm and a policy gradient theorem for non-Markovian decision processes by jointly optimizing agent state dynamics and control policy.

  • Rethinking Ratio-Based Trust Regions for Policy Optimization in Multi-Agent Reinforcement Learning cs.LG · 2026-05-09 · unverdicted · none · ref 21

    MARS replaces additive clipping and soft penalties in multi-agent trust-region methods with a symmetric geometric barrier, matching or exceeding MAPPO and MASPO performance across 47 tasks in eight environments.

  • Actor-Critic Algorithm for Dynamic Expectile and CVaR cs.LG · 2026-05-08 · unverdicted · none · ref 25

    Constructs a model-free, off-policy actor-critic algorithm for dynamic expectile and CVaR objectives, combining a surrogate policy gradient (without transition perturbation) with elicitability-based value learning, and empirically outperforms baselines in risk-averse domains.