pith. machine review for the scientific record.

arxiv: 1811.07029 · v1 · submitted 2018-11-13 · 💻 cs.LG · cs.AI · cs.MA · stat.ML

Recognition: unknown

Modelling the Dynamic Joint Policy of Teammates with Attention Multi-agent DDPG

Authors on Pith: no claims yet
classification 💻 cs.LG cs.AI cs.MA stat.ML
keywords teammates · policies · att-maddpg · attention · model · policy · challenge · dynamic
0 comments
read the original abstract

Modelling and exploiting teammates' policies in cooperative multi-agent systems have long been of interest, and a major challenge, for the reinforcement learning (RL) community. The interest lies in the fact that if an agent knows its teammates' policies, it can adjust its own policy accordingly to achieve proper cooperation; the challenge is that the agents' policies change continuously because they are learning concurrently, which makes it difficult to model the teammates' dynamic policies accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, to model the teammates' policies, the agent needs access to the observations and actions of its teammates; ATT-MADDPG adopts a centralized critic to collect this information. Second, to process the collected information effectively, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure that explicitly models the dynamic joint policy of the teammates, ensuring that the collected information is processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and real-world packet routing tasks. Experimental results show that it not only outperforms state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better scalability and robustness.
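
The abstract's two designs (a centralized critic, and attention over the teammates' actions) can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the authors' exact architecture: the class name, layer sizes, and the use of K softly-weighted Q-heads are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class AttentionCentralizedCritic(nn.Module):
    """Centralized critic with K attention-weighted Q-heads (simplified sketch)."""
    def __init__(self, obs_dim, act_dim, n_agents, k_heads=4, hidden=64):
        super().__init__()
        # Encode the joint observation together with the agent's own action.
        self.encoder = nn.Linear(obs_dim * n_agents + act_dim, hidden)
        # K parallel Q-heads, each a candidate value under one "mode"
        # of the teammates' joint behaviour.
        self.q_heads = nn.Linear(hidden, k_heads)
        # Attention weights computed from the teammates' current actions,
        # acting as a soft model of their (changing) joint policy.
        self.attn = nn.Linear(act_dim * (n_agents - 1), k_heads)

    def forward(self, joint_obs, own_action, teammate_actions):
        h = torch.relu(self.encoder(torch.cat([joint_obs, own_action], dim=-1)))
        q = self.q_heads(h)                                     # (batch, K)
        w = torch.softmax(self.attn(teammate_actions), dim=-1)  # (batch, K)
        # Q-value is the attention-weighted combination of the K heads.
        return (w * q).sum(dim=-1, keepdim=True)                # (batch, 1)
```

As in standard MADDPG-style training, a critic of this kind is used only during learning; each actor still acts from its local observation at execution time.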

This paper has not been read by Pith yet.

discussion (0)

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Policy Optimization in Hybrid Discrete-Continuous Action Spaces via Mixed Gradients

    cs.LG 2026-05 unverdicted novelty 7.0

    HPO enables unbiased policy optimization in hybrid action spaces by mixing differentiable simulation gradients with score-function estimates, outperforming PPO as continuous dimensions increase (a hedged sketch of the mixed-gradient idea follows this list).

  2. Bridging MARL to SARL: An Order-Independent Multi-Agent Transformer via Latent Consensus

    cs.LG 2026-04 conditional novelty 6.0

    CMAT uses a transformer decoder to produce a high-level consensus vector in latent space, enabling simultaneous order-independent actions by all agents and optimization via single-agent PPO (see the consensus sketch after this list), with superior results on S...
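
The mixed-gradient idea from item 1, in a minimal PyTorch sketch. This is a generic combination of a pathwise (reparameterized) gradient for continuous dimensions with a score-function (REINFORCE) term for a discrete choice, applied to a toy differentiable reward; it is not HPO's algorithm, and all names and the reward function are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
mu = torch.zeros(2, requires_grad=True)      # mean of the continuous action
logits = torch.zeros(3, requires_grad=True)  # logits of the discrete action

def reward(cont, disc):
    # Toy reward, differentiable in the continuous action.
    return -(cont - disc.float()).pow(2).sum()

# Pathwise term: reparameterize the continuous action so gradients
# flow from the reward back into mu.
cont = mu + 0.1 * torch.randn(2)

# Score-function term: the reward is not differentiable with respect to a
# discrete sample, so weight the sample's log-probability by the detached return.
dist = torch.distributions.Categorical(logits=logits)
disc = dist.sample()
r = reward(cont, disc)
loss = -(r + dist.log_prob(disc) * r.detach())
loss.backward()
print(mu.grad, logits.grad)  # pathwise gradient + score-function gradient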
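For item 2's consensus mechanism, a rough sketch under stated assumptions: mean pooling stands in for the paper's transformer decoder, and all layer sizes and names are hypothetical. The point it illustrates is that a permutation-invariant shared vector lets every agent act simultaneously from its own observation plus the consensus, so the team can be trained as a single policy (e.g. with single-agent PPO).

```python
import torch
import torch.nn as nn

class ConsensusPolicy(nn.Module):
    """Order-independent multi-agent policy via a shared consensus vector (sketch)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hidden)
        # Mean pooling over agents makes the consensus independent of the
        # order in which the agents' observations are listed.
        self.consensus = nn.Linear(hidden, hidden)
        self.actor = nn.Linear(obs_dim + hidden, act_dim)

    def forward(self, obs):                            # obs: (n_agents, obs_dim)
        z = torch.relu(self.embed(obs)).mean(dim=0)    # permutation-invariant pool
        c = torch.tanh(self.consensus(z))              # shared consensus vector
        # Each agent conditions on its own observation plus the consensus,
        # so all actions are emitted simultaneously.
        inp = torch.cat([obs, c.expand(obs.size(0), -1)], dim=-1)
        return self.actor(inp)                         # action logits per agent
```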