pith. machine review for the scientific record.

arxiv: 1810.09206 · v1 · submitted 2018-10-22 · 💻 cs.MA · cs.AI

Recognition: unknown

Multi-Agent Actor-Critic with Generative Cooperative Policy Network

Authors on Pith: no claims yet
classification: 💻 cs.MA, cs.AI
keywords: policy, agents, decentralized, network, action, collaborative, cooperative, during
Original abstract

We propose an efficient multi-agent reinforcement learning approach to derive equilibrium strategies for multiple agents participating in a Markov game. We focus mainly on obtaining decentralized policies that let agents maximize the performance of a collaborative task, which is similar to solving a decentralized Markov decision process. We propose two different policy networks: (1) a decentralized greedy policy network, used to generate greedy actions during both training and execution, and (2) a generative cooperative policy network (GCPN), used to generate action samples that help other agents improve their objectives during training. We show that the samples generated by GCPN enable other agents to explore the policy space more effectively and thus reach a better policy for achieving the collaborative task.
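The two-policy scheme in the abstract can be illustrated with a minimal sketch. This is a toy rendering of the idea, not the authors' implementation: the class and function names (`Agent`, `training_actions`, `execution_actions`) are hypothetical, and the tabular greedy policy and fixed-probability "GCPN" stand in for the neural networks used in the paper.

```python
import random

class Agent:
    """Toy agent holding both policies described in the abstract
    (hypothetical structure, not the authors' code)."""
    def __init__(self, n_actions, seed):
        self.rng = random.Random(seed)
        # greedy policy: pick the argmax of these action values
        self.q = [0.0] * n_actions
        # generative cooperative policy: a stochastic action distribution
        self.gcpn_probs = [1.0 / n_actions] * n_actions

    def greedy_action(self):
        return max(range(len(self.q)), key=self.q.__getitem__)

    def gcpn_sample(self):
        # sample an exploratory action from the generative policy
        return self.rng.choices(range(len(self.gcpn_probs)),
                                weights=self.gcpn_probs)[0]

def training_actions(agents, learner_idx):
    """During training, the learning agent acts greedily while the other
    agents' actions are drawn from their GCPNs, broadening the joint-action
    samples the learner sees."""
    return [a.greedy_action() if i == learner_idx else a.gcpn_sample()
            for i, a in enumerate(agents)]

def execution_actions(agents):
    """At execution time every agent uses only its decentralized greedy policy."""
    return [a.greedy_action() for a in agents]
```

The point of the split is visible in the two helpers: the stochastic GCPN samples appear only in the training-time joint action, so exploration of other agents' behavior never leaks into decentralized execution.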

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Policy Optimization in Hybrid Discrete-Continuous Action Spaces via Mixed Gradients

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    HPO enables unbiased policy optimization in hybrid action spaces by mixing differentiable simulation gradients with score-function estimates, outperforming PPO as continuous dimensions increase.