pith. machine review for the scientific record.

Proof. From Theorem 3.3, the learned energy satisfies
\[
E_\phi(a,s) = -\frac{Q^*(s,a)}{\alpha} + c(s), \tag{28}
\]
where $c(s)$ is a state-dependent constant arising from integration.
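The origin of the constant $c(s)$ can be made explicit. Assuming the standard maximum-entropy identity $\pi^*(a\mid s) \propto \exp(Q^*(s,a)/\alpha)$ and an energy-based policy $\pi_\phi(a\mid s) \propto \exp(-E_\phi(a,s))$ (the theorem statement itself is not shown in this excerpt, so this is a reconstruction under those conventional definitions):

```latex
% Max-ent expert and energy-based policy, with per-state normalizers:
\pi^*(a \mid s) = \frac{\exp\big(Q^*(s,a)/\alpha\big)}{Z(s)},
\qquad
\pi_\phi(a \mid s) = \frac{\exp\big(-E_\phi(a,s)\big)}{Z_\phi(s)}.
% Equating log-densities when the policies match:
-E_\phi(a,s) - \log Z_\phi(s) = \frac{Q^*(s,a)}{\alpha} - \log Z(s),
% which rearranges to Eq. (28):
E_\phi(a,s) = -\frac{Q^*(s,a)}{\alpha}
  + \underbrace{\log\frac{Z(s)}{Z_\phi(s)}}_{c(s)}.
```

The normalizers depend only on $s$, which is why $c(s)$ is constant in the action.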

1 Pith paper cites this work. Polarity classification is still indexing.


fields: cs.RO (1)

years: 2026 (1)

verdicts: unverdicted (1)

representative citing papers

Recovering Hidden Reward in Diffusion-Based Policies

cs.RO · 2026-05-01 · unverdicted · novelty 6.0

EnergyFlow shows that denoising score matching on diffusion policies recovers the gradient of the expert's soft Q-function under maximum-entropy optimality, enabling non-adversarial reward extraction and improved policy generalization.
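The key structural fact behind this claim is that the state-dependent constant $c(s)$ in Eq. (28) vanishes under an action gradient, so the score of the energy-based policy equals $\nabla_a Q^*(s,a)/\alpha$. A minimal numeric sketch of that identity, using a hypothetical quadratic soft Q-function and finite differences (none of these functions are from the cited paper):

```python
import numpy as np

# Toy check of Eq. (28): if E_phi(a,s) = -Q*(s,a)/alpha + c(s), then the
# additive c(s) drops out of the action-score, so the policy score
# -dE/da equals (dQ*/da)/alpha pointwise. Q*, alpha, and c below are
# illustrative assumptions, not the paper's actual models.

alpha = 0.5

def q_star(s, a):
    # hypothetical soft Q-function, quadratic in the action
    return -(a - np.sin(s)) ** 2

def energy(s, a):
    # energy satisfying Eq. (28) with an arbitrary state-dependent offset
    c = 3.0 * s
    return -q_star(s, a) / alpha + c

def grad_a(f, s, a, eps=1e-5):
    # central finite difference in the action variable
    return (f(s, a + eps) - f(s, a - eps)) / (2 * eps)

s, a = 0.7, 0.2
score_from_energy = -grad_a(energy, s, a)      # what score matching sees
score_from_q = grad_a(q_star, s, a) / alpha    # scaled Q-gradient
print(abs(score_from_energy - score_from_q) < 1e-6)  # True
```

Because `c` contributes nothing to the action derivative, reward (up to the unidentifiable per-state offset) is recoverable from the score alone, which is the non-adversarial extraction route the abstract describes.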

citing papers explorer

Showing 1 of 1 citing paper.

  • Recovering Hidden Reward in Diffusion-Based Policies cs.RO · 2026-05-01 · unverdicted · none · ref 16
