pith. machine review for the scientific record.

arxiv: 2508.01929 · v2 · submitted 2025-08-03 · 🧮 math.OC · cs.MA · math.PR

Recognition: unknown

Distributed games with jumps: An α-potential game approach

Authors on Pith: no claims yet
classification 🧮 math.OC · cs.MA · math.PR
keywords alpha · game · games · potential · nash · distributed · equilibria · framework
0 comments
read the original abstract

Motivated by game-theoretic models of crowd motion dynamics, this paper analyzes a broad class of distributed games with jump diffusions within the recently developed $\alpha$-potential game framework. We demonstrate that analyzing the $\alpha$-Nash equilibria reduces to solving a finite-dimensional control problem. Beyond the viscosity and verification characterizations for the general games, we examine explicitly and in detail how spatial population distributions and interaction rules influence the structure of $\alpha$-Nash equilibria in these distributed settings. For crowd motion network games, we show that $\alpha = 0$ for all symmetric interaction networks; for asymmetric networks, we quantify the precise polynomial and logarithmic decays of $\alpha$ in terms of the number of players, the degree of the network, and the decay rate of interaction asymmetry. We also exploit the $\alpha$-potential game framework to analyze an $N$-player portfolio selection game under a mean-variance criterion. We show that this portfolio game constitutes a potential game and explicitly construct its Nash equilibrium. Our analysis allows for heterogeneous preference parameters, going beyond the mean-field interactions considered in the existing game literature. Our theoretical results are supported by numerical implementations using policy gradient-based algorithms, demonstrating the computational advantages of the $\alpha$-potential game framework in computing Nash equilibria for general dynamic games.
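The central quantity in the abstract can be illustrated in a much simpler setting than the paper's jump-diffusion games. As a minimal sketch (not the paper's construction): in a finite static game, a function $\Phi$ is an $\alpha$-potential if every unilateral deviation changes the deviating player's payoff and $\Phi$ by nearly the same amount, and $\alpha$ measures the worst-case gap. The function below computes the smallest such $\alpha$ for a candidate potential; the dictionary-based game representation and all names are illustrative assumptions, not the paper's API.

```python
import itertools

def alpha_of_candidate_potential(payoffs, phi, num_actions):
    """Smallest alpha making phi an alpha-potential for a finite game.

    payoffs: list of dicts, payoffs[i][s] = player i's payoff at joint
             action profile s (a tuple of action indices).
    phi:     dict, candidate potential value at each profile s.
    num_actions: list, number of actions available to each player.
    """
    n = len(payoffs)
    alpha = 0.0
    for s in itertools.product(*(range(m) for m in num_actions)):
        for i in range(n):
            for a in range(num_actions[i]):
                s2 = s[:i] + (a,) + s[i + 1:]
                # Unilateral deviation by player i: compare the change in
                # i's payoff with the change in the candidate potential.
                du = payoffs[i][s2] - payoffs[i][s]
                dphi = phi[s2] - phi[s]
                alpha = max(alpha, abs(du - dphi))
    return alpha

# Prisoner's dilemma (0 = cooperate, 1 = defect), which admits an exact
# potential, so the computed alpha is 0 (a potential game in the strict sense).
u1 = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
u2 = {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1}
phi = {(0, 0): 0, (0, 1): 2, (1, 0): 2, (1, 1): 3}

print(alpha_of_candidate_potential([u1, u2], phi, [2, 2]))  # 0.0
```

In this vocabulary, the paper's results bound how fast the best achievable $\alpha$ shrinks as the number of players grows and as interaction asymmetry decays; a profile that maximizes an $\alpha$-potential is an $\alpha$-Nash equilibrium, which is what makes the reduction to a single control problem useful.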

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. NePPO: Near-Potential Policy Optimization for General-Sum Multi-Agent Reinforcement Learning

    cs.LG 2026-03 unverdicted novelty 7.0

    NePPO learns a player-independent potential function via a novel objective whose minimization yields an approximate Nash equilibrium for general-sum multi-agent games.