pith. machine review for the scientific record.

arxiv: 2602.06138 · v2 · submitted 2026-02-05 · 💻 cs.LG

Recognition: unknown

Flow Matching for Offline Reinforcement Learning with Discrete Actions

Fairoz Nower Khan, Haibo Yang, Nabuat Zaman Nahim, Peizhong Ju, Ruiquan Huang

classification 💻 cs.LG
keywords: action, flow matching, offline, discrete, settings, spaces, continuous
Abstract

Generative policies based on diffusion models and flow matching have shown strong promise for offline reinforcement learning (RL), but their applicability remains largely confined to continuous action spaces. To address a broader range of offline RL settings, we extend flow matching to a general framework that supports discrete action spaces with multiple objectives. Specifically, we replace continuous flows with continuous-time Markov chains, trained using a Q-weighted flow matching objective. We then extend our design to multi-agent settings, mitigating the exponential growth of joint action spaces via a factorized conditional path. We theoretically show that, under idealized conditions, optimizing this objective recovers the optimal policy. Extensive experiments further demonstrate that our method performs robustly across diverse settings and benchmarks, including high-dimensional control, multi-agent games, and dynamically changing preferences over multiple objectives, while outperforming traditional offline RL methods in practical multi-modal decision-making scenarios. Our discrete framework can also be applied to continuous-control problems through action quantization, providing a flexible trade-off between representational complexity and performance.
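The core recipe the abstract describes — corrupt discrete actions along a conditional path, then train a denoiser with a Q-weighted flow matching objective — can be illustrated with a toy sketch. The snippet below is an illustrative assumption, not the paper's implementation: it uses a uniform-mixture corruption path (noise at t=0, clean action at t=1) as a stand-in for a CTMC path, a linear-softmax denoiser, and exp(Q)-weighted cross-entropy so high-value transitions dominate the training signal. All names (`corrupt_action`, `q_weighted_loss`, the toy Q-values) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 4

def corrupt_action(a, t, rng):
    """Mixture conditional path (stand-in for a CTMC path): keep the data
    action with probability t, otherwise draw uniformly at random."""
    return a if rng.random() < t else int(rng.integers(N_ACTIONS))

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def q_weighted_loss(logits, target, q_value, temperature=1.0):
    """Cross-entropy on the clean action, reweighted by exp(Q / temperature)
    so that high-value dataset transitions get a larger training signal."""
    p = softmax(logits)
    return -np.exp(q_value / temperature) * np.log(p[target] + 1e-12)

# Toy denoiser: logits = W @ phi(a_t, t), trained by SGD on the
# gradient of the Q-weighted loss above.
def features(a_t, t):
    phi = np.zeros(N_ACTIONS + 1)
    phi[a_t] = 1.0           # one-hot of the corrupted action
    phi[-1] = t              # the flow time
    return phi

W = np.zeros((N_ACTIONS, N_ACTIONS + 1))
data = [(1, 2.0), (3, 0.1)]  # (action, Q-value): action 1 is high-value
for step in range(2000):
    a, q = data[rng.integers(len(data))]
    t = rng.random()
    a_t = corrupt_action(a, t, rng)
    phi = features(a_t, t)
    p = softmax(W @ phi)
    # Gradient of q_weighted_loss w.r.t. W: weight * (p - onehot(a)) phi^T.
    grad = np.exp(q) * np.outer(p - np.eye(N_ACTIONS)[a], phi)
    W -= 0.05 * grad

# "Sampling" from pure noise (t = 0): the Q-weighted denoiser should
# concentrate its mass on the high-value action.
p0 = softmax(W @ features(0, 0.0))
```

The Q-weighting is what distinguishes this from plain behavior cloning with discrete flow matching: with uniform weights the denoiser would recover the dataset's action distribution, whereas the exponential weight tilts it toward high-return actions, mirroring the abstract's claim that (under idealized conditions) the objective recovers the optimal policy.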

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Discrete MeanFlow: One-Step Generation via Conditional Transition Kernels

    cs.LG 2026-05 unverdicted novelty 7.0

    Discrete MeanFlow parameterizes CTMC conditional transition kernels with a boundary-by-construction design to enable exact one-step generation in discrete state spaces.

  2. Discrete Flow Matching for Offline-to-Online Reinforcement Learning

    cs.LG 2026-05 unverdicted novelty 6.0

    DRIFT enables stable offline-to-online fine-tuning of CTMC policies in discrete RL via advantage-weighted discrete flow matching, path-space regularization, and candidate-set approximation.