pith. machine review for the scientific record.

arxiv: 1712.09381 · v4 · submitted 2017-12-26 · 💻 cs.AI · cs.DC · cs.LG

Recognition: unknown

RLlib: Abstractions for Distributed Reinforcement Learning

Authors on Pith: no claims yet
classification 💻 cs.AI · cs.DC · cs.LG
keywords rllib · algorithms · computation · distributed · learning · primitives · reinforcement · abstractions
Original abstract

Reinforcement learning (RL) algorithms involve the deep nesting of highly irregular computation patterns, each of which typically exhibits opportunities for distributed computation. We argue for distributing RL components in a composable way by adapting algorithms for top-down hierarchical control, thereby encapsulating parallelism and resource requirements within short-running compute tasks. We demonstrate the benefits of this principle through RLlib: a library that provides scalable software primitives for RL. These primitives enable a broad range of algorithms to be implemented with high performance, scalability, and substantial code reuse. RLlib is available at https://rllib.io/.
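The abstract's core idea, top-down hierarchical control with parallelism encapsulated inside short-running tasks, can be illustrated with a minimal sketch. This is not RLlib's actual API; the `Trainer` and `RolloutComponent` names, and the use of a thread pool in place of a distributed task system, are assumptions made purely for illustration.

```python
# Hypothetical sketch (not RLlib's real API): a top-level control loop
# delegates short-running rollout tasks to a component that manages its
# own parallelism, so callers compose components without knowing how
# each one is distributed.
from concurrent.futures import ThreadPoolExecutor
import random


def rollout(seed, horizon=5):
    """Short-running task: simulate one episode, return its total reward."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(horizon))


class RolloutComponent:
    """Encapsulates its own parallelism and resource requirements."""

    def __init__(self, num_workers=4):
        self.pool = ThreadPoolExecutor(max_workers=num_workers)

    def sample(self, num_episodes):
        # Fan out short-running rollout tasks, then gather the results;
        # the caller never sees the worker pool.
        return list(self.pool.map(rollout, range(num_episodes)))


class Trainer:
    """Top-down hierarchical control: the trainer invokes components
    sequentially, and each component parallelizes internally."""

    def __init__(self):
        self.rollouts = RolloutComponent()
        self.mean_return = 0.0

    def step(self):
        returns = self.rollouts.sample(num_episodes=8)
        self.mean_return = sum(returns) / len(returns)
        return self.mean_return


trainer = Trainer()
result = trainer.step()
```

The point of the structure is composability: swapping the thread pool for remote tasks (as a distributed framework would) changes only the inside of `RolloutComponent`, not the trainer's control logic.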

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. RAMP: Hybrid DRL for Online Learning of Numeric Action Models

    cs.AI · 2026-04 · unverdicted · novelty 5.0

    RAMP learns numeric action models online via a DRL-planning feedback loop and outperforms PPO on IPC numeric domains in solvability and plan quality.