pith. machine review for the scientific record.

arxiv: 1906.04737 · v1 · submitted 2019-06-11 · 💻 cs.LG · cs.AI · cs.MA · stat.ML

Recognition: unknown

Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning

Authors on Pith no claims yet
classification 💻 cs.LG · cs.AI · cs.MA · stat.ML
keywords learning · multi-agent · reinforcement · agents · deep · decision-making · non-stationarity · problems
0 comments
read the original abstract

Recent developments in deep reinforcement learning are concerned with creating decision-making agents which can perform well in various complex domains. A particular approach which has received increasing attention is multi-agent reinforcement learning, in which multiple agents learn concurrently to coordinate their actions. In such multi-agent environments, additional learning problems arise due to the continually changing decision-making policies of agents. This paper surveys recent works that address the non-stationarity problem in multi-agent deep reinforcement learning. The surveyed methods range from modifications in the training procedure, such as centralized training, to learning representations of the opponent's policy, meta-learning, communication, and decentralized learning. The survey concludes with a list of open problems and possible lines of future research.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Plasticity-Enhanced Multi-Agent Mixture of Experts for Dynamic Objective Adaptation in UAVs-Assisted Emergency Communication Networks

    cs.MA 2026-04 unverdicted novelty 7.0

    PE-MAMoE combines sparsely gated mixture-of-experts actors with a non-parametric phase controller in MAPPO to maintain plasticity under dynamic user mobility and traffic, yielding 26.3% higher normalized IQM return in...

  2. ERPPO: Entropy Regularization-based Proximal Policy Optimization

    cs.LG 2026-05 unverdicted novelty 5.0

    ERPPO adds a DSA-based ambiguity estimator to MAPPO and switches between L1 and L2 entropy regularization to improve exploration and stability in non-stationary multi-dimensional observations.