pith. machine review for the scientific record.

arxiv: 1502.03919 · v2 · submitted 2015-02-13 · 💻 cs.AI · cs.LG · stat.ML

Recognition: unknown

Policy Gradient for Coherent Risk Measures

Authors on Pith: no claims yet
classification 💻 cs.AI · cs.LG · stat.ML
keywords: risk measures · approach · gradient · policy · coherent · cost · dynamic
0 comments
read the original abstract

Several authors have recently developed risk-sensitive policy gradient methods that augment the standard expected cost minimization problem with a measure of variability in cost. These studies have focused on specific risk measures, such as the variance or conditional value at risk (CVaR). In this work, we extend the policy gradient method to the whole class of coherent risk measures, which is widely accepted in finance and operations research, among other fields. We consider both static and time-consistent dynamic risk measures. For static risk measures, our approach is in the spirit of policy gradient algorithms and combines a standard sampling approach with convex programming. For dynamic risk measures, our approach is actor-critic style and involves explicit approximation of the value function. Most importantly, our contribution presents a unified approach to risk-sensitive reinforcement learning that generalizes and extends previous results.
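The abstract mentions that, for static risk measures, the method combines a standard sampling approach with convex programming. The paper's actual algorithm is not reproduced here; as a toy illustration of sampling-based risk-sensitive policy gradients, the sketch below estimates a score-function gradient of CVaR (one coherent risk measure) in a one-step bandit. The policy parameterization, the cost distributions, and all function names are invented for illustration only.

```python
import math
import random


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def sample_trajectory(theta, rng):
    """One-step 'bandit' MDP: action ~ Bernoulli(sigmoid(theta)).

    Returns the sampled cost and the score d/dtheta log pi(a|theta).
    Cost distributions are made up: action 1 is cheaper on average
    but has a heavier tail.
    """
    p = sigmoid(theta)
    a = 1 if rng.random() < p else 0
    cost = rng.gauss(0.8, 0.5) if a == 1 else rng.gauss(1.0, 0.1)
    score = (1.0 - p) if a == 1 else -p  # d/dtheta log Bernoulli(p)
    return cost, score


def cvar_policy_gradient(theta, alpha=0.1, n=5000, seed=0):
    """Crude sample-based gradient estimate of CVaR_alpha of the cost.

    Uses the empirical (1 - alpha)-quantile as VaR and averages
    score-weighted excess costs over the alpha-tail.
    """
    rng = random.Random(seed)
    samples = [sample_trajectory(theta, rng) for _ in range(n)]
    costs = sorted(c for c, _ in samples)
    var = costs[int((1.0 - alpha) * n)]  # empirical value at risk
    tail = [(c, s) for c, s in samples if c >= var]
    grad = sum((c - var) * s for c, s in tail) / (alpha * n)
    return grad, var
```

A usage call such as `cvar_policy_gradient(0.0)` returns a gradient estimate one could feed to a stochastic ascent/descent step; the paper's contribution is the general recipe for arbitrary coherent risk measures, of which this CVaR special case is only the simplest instance.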

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Reinforcement Learning for Exponential Utility: Algorithms and Convergence in Discounted MDPs

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Derives contraction-based Q-value extensions for exponential utility and proves almost-sure convergence of two-timescale and one-timescale model-free algorithms in discounted MDPs.