pith. machine review for the scientific record.

arxiv: 2605.05492 · v1 · submitted 2026-05-06 · 💻 cs.LG

Recognition: unknown

MEMOA: Massive Mixtures of Online Agents via Mean-Field Decentralized Nash Equilibria

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 16:43 UTC · model grok-4.3

classification 💻 cs.LG
keywords decentralized policy · mean-field · Nash equilibrium · federated learning · online learning · minimax regret · multi-agent systems · agent mixtures

The pith

A closed-form decentralized policy for large AI agent populations minimizes the worst agent's regret and converges to the centralized optimum.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper derives a unique optimal decentralized policy in closed form for large populations of online AI agents. Optimality is defined by a minimax criterion that minimizes the highest online cost incurred by the weakest agent in the group. This policy uses only each agent's local state and a summary of the population average. In the limit of many agents, the decentralized policy approaches the performance of the optimal centralized policy, which cannot be computed directly at scale. This would matter because it enables federated learning of massive agent ensembles while avoiding prohibitive communication and computation costs.

Core claim

We derive the unique optimal decentralized policy in closed form. Optimality is characterized through a worst-client/minimax criterion: minimizing the under-performer regret, namely the maximal online cost incurred by the weakest agent in the ensemble. We further prove that the resulting decentralized policy asymptotically converges, in the large-population limit, to the Nash-optimal centralized policy, whose direct computation is not scalable. We use an online weighting mechanism to optimize the server-computed mixture of client predictions, thereby improving the mean prediction in addition to the previously optimized weakest-client prediction.
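The minimax criterion in this claim reduces to a simple statistic: track each client's cumulative online cost against its best fixed comparator in hindsight, and score the ensemble by the largest gap. A minimal sketch (the function name and inputs are ours, not the paper's notation):

```python
import numpy as np

def worst_client_regret(cum_costs, best_fixed_costs):
    """Worst-client (minimax) regret: the largest gap, over clients,
    between realized cumulative online cost and the cost of the best
    fixed comparator in hindsight."""
    per_client_regret = np.asarray(cum_costs) - np.asarray(best_fixed_costs)
    return float(per_client_regret.max())

# three clients: realized cumulative costs vs. best-in-hindsight costs
print(worst_client_regret([3.0, 5.0, 4.0], [2.0, 1.0, 3.5]))  # → 4.0
```

On these toy numbers the middle client dominates the score even though the average regret is modest — exactly the under-performer a worst-client criterion is built to expose.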

What carries the argument

The worst-client/minimax regret criterion that characterizes optimality for the closed-form decentralized policy in the mean-field limit.

If this is right

  • The policy scales computationally to arbitrarily large agent populations since each agent uses only local data and the population average.
  • An online weighting step improves both the weakest-agent prediction and the overall average prediction.
  • Direct computation of the centralized Nash policy becomes unnecessary once the population is large enough for the limit to hold.
  • Numerical experiments confirm that the policy outperforms simple greedy decentralized alternatives while exhibiting the predicted convergence behavior.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same minimax focus on underperformers could be tested in other distributed optimization settings where full coordination is costly.
  • Edge-device networks with limited bandwidth might adopt this approach to train models collaboratively with minimal data sharing.
  • If the closed-form expression holds under mild heterogeneity, it could be adapted to non-stationary environments where agent goals shift over time.

Load-bearing premise

That the average behavior of the agent population sufficiently represents agent interactions when the group is large, and that minimizing the regret of the weakest agent fully defines optimality without extra coordination or uniformity assumptions.

What would settle it

A simulation with increasing agent populations in which the maximum regret under the derived decentralized policy fails to approach the regret of the centralized policy would disprove the asymptotic convergence claim.
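Such a check is cheap in a toy model. Below, agents minimize a quadratic cost coupling them to the population average; the mean-field policy ignores each agent's own 1/N weight in that average, while the exact Nash response accounts for it. All formulas here are ours, derived for this toy rather than taken from the paper; the point is only the shape of the experiment — the gap should shrink like O(1/N), and a gap that fails to shrink would falsify convergence.

```python
import numpy as np

def mean_field_policy(a, c):
    """Decentralized play: blend the local target a_i with the
    mean-field summary (a toy stand-in for the paper's policy)."""
    return (1 - c) * a + c * a.mean()

def centralized_nash(a, c, N):
    """Exact Nash response when each agent accounts for its own
    1/N weight inside the population average."""
    k = c * (1 - 1 / N)
    return ((1 - c) * a + k * a.mean()) / ((1 - c) + k)

rng = np.random.default_rng(1)
gaps = []
for N in (10, 100, 1000):
    a = rng.normal(size=N)
    gap = np.max(np.abs(mean_field_policy(a, 0.5) - centralized_nash(a, 0.5, N)))
    gaps.append(float(gap))
# gaps shrink roughly like O(1/N), mirroring the claimed convergence
```

Here the worst-agent discrepancy is measured with a max rather than an average, which is the distinction the referee's uniformity objection turns on.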

Figures

Figures reproduced from arXiv: 2605.05492 by Anastasis Kratsios, David B. Emerson, Fatemeh Tavakoli, Xuwei Yang.

Figure 1. Aggregated predictions compared with targets for RFN models.
read the original abstract

In the modern age of large-scale AI, federated learning has become an increasingly important tool for training large populations of AI agents; however, its computational and communication costs can rapidly fail to scale with the number of agents. This is precisely where decentralized agentic strategies shine: each agent acts autonomously, using only its own state together with a minimal summary of the ensemble, namely the mean-field. We derive the unique optimal decentralized policy in closed form. Optimality is characterized through a worst-client/minimax criterion: minimizing the under-performer regret, namely the maximal online cost incurred by the weakest agent in the ensemble. We further prove that the resulting decentralized policy asymptotically converges, in the large-population limit, to the Nash-optimal centralized policy, whose direct computation is not scalable. We use an online weighting mechanism to optimize the server-computed mixture of client predictions, thereby improving the mean prediction in addition to the previously optimized weakest-client prediction. Numerical experiments verify our theoretical guarantees and demonstrate that our decentralized policy typically outperforms natural greedy decentralized baselines.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes MEMOA, a decentralized framework for large-scale agent ensembles that uses mean-field approximations to derive a unique closed-form optimal policy. Optimality is defined via a worst-client/minimax criterion that minimizes the maximum online cost incurred by the weakest agent. The paper further claims an asymptotic convergence proof showing that the decentralized policy approaches the centralized Nash-optimal policy in the large-population limit, together with an online weighting mechanism for optimizing server-computed mixtures of client predictions. Numerical experiments are presented to support the theoretical guarantees.

Significance. If the closed-form derivation and convergence proof hold with the required uniformity, the work would offer a scalable, theoretically grounded alternative to federated learning for massive decentralized AI systems, with explicit control over worst-case agent performance. The parameter-free character of the policy and the focus on minimax regret are potentially high-impact contributions for heterogeneous agent populations.

major comments (2)
  1. [Proof of asymptotic convergence to centralized Nash policy] The central convergence claim (stated in the abstract and presumably proved in the main theoretical section): the mean-field limit is asserted to preserve optimality under the worst-client/minimax criterion. Standard mean-field theory controls convergence of the empirical measure or average cost, but the optimality criterion is defined on the supremum over agents. Without explicit uniformity, tail bounds, or homogeneity assumptions on the cost functions, the limit need not control the identity or value of the worst agent; the convergence may therefore hold only in an average sense that does not imply the claimed minimax optimality. This is load-bearing for the main result.
  2. [Online weighting mechanism] Derivation of the closed-form decentralized policy (abstract and § on optimal policy): the worst-client regret is minimized via the mean-field, yet the online weighting mechanism for the mixture is introduced separately to improve mean prediction. It is unclear whether this weighting step remains fully decentralized and parameter-free or introduces implicit coordination that could alter the minimax characterization.
minor comments (2)
  1. [Abstract] The abstract asserts 'unique optimal decentralized policy' and 'asymptotic convergence' but does not state the key assumptions (e.g., on cost-function regularity or agent homogeneity) required for the mean-field limit to control the supremum.
  2. [Experiments] Numerical experiments section: the description of baselines, population sizes, and quantitative metrics for verifying both the minimax regret and the convergence rate should be expanded for reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the thoughtful and detailed report. The comments raise important points about the uniformity in our convergence analysis and the decentralization properties of the weighting mechanism. We respond to each major comment below and indicate the revisions we will make.

read point-by-point responses
  1. Referee: [Proof of asymptotic convergence to centralized Nash policy] The central convergence claim (stated in the abstract and presumably proved in the main theoretical section): the mean-field limit is asserted to preserve optimality under the worst-client/minimax criterion. Standard mean-field theory controls convergence of the empirical measure or average cost, but the optimality criterion is defined on the supremum over agents. Without explicit uniformity, tail bounds, or homogeneity assumptions on the cost functions, the limit need not control the identity or value of the worst agent; the convergence may therefore hold only in an average sense that does not imply the claimed minimax optimality. This is load-bearing for the main result.

    Authors: We appreciate the referee highlighting the need for explicit uniformity control. Our proof establishes this via bounded Lipschitz costs and a uniform law of large numbers for the empirical measure, combined with McDiarmid-type concentration to bound the deviation of the worst-agent cost from its mean-field counterpart with high probability, uniformly over the population. This ensures the minimax optimality is preserved in the limit. We will revise the main theoretical section to state the uniformity assumptions and tail bounds explicitly (moving supporting lemmas from the appendix if needed) rather than assuming they are implicit. revision: yes
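The uniformity question can be made concrete with a quick numerical check. If each agent's cost deviation from its mean-field value is sub-Gaussian (modeled below as standard normal — a demo assumption, not the paper's bounded-Lipschitz setting), the worst deviation among N agents grows only like sqrt(log N), which is the kind of control the authors' concentration argument needs:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_worst_deviation(N, trials=200):
    """Average, over trials, of the worst agent's |cost deviation|
    when deviations are i.i.d. standard normal (demo assumption)."""
    return float(np.abs(rng.normal(size=(trials, N))).max(axis=1).mean())

worst_100 = mean_worst_deviation(100)
worst_10k = mean_worst_deviation(10_000)
# sqrt(log N) envelope that a sub-Gaussian maximum obeys
env_100 = np.sqrt(2 * np.log(2 * 100))
env_10k = np.sqrt(2 * np.log(2 * 10_000))
```

Growing the population 100-fold only nudges the worst deviation up by a sub-doubling factor, so a tail bound of this shape is enough for the supremum, not just the average, to track the mean-field limit.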

  2. Referee: [Online weighting mechanism] Derivation of the closed-form decentralized policy (abstract and § on optimal policy): the worst-client regret is minimized via the mean-field, yet the online weighting mechanism for the mixture is introduced separately to improve mean prediction. It is unclear whether this weighting step remains fully decentralized and parameter-free or introduces implicit coordination that could alter the minimax characterization.

    Authors: The weighting mechanism is computed server-side from aggregate statistics only and does not change the information available to individual agents. Each agent still selects its action using solely its local state and the mean-field summary; no agent receives private information about others. The weights are derived in a parameter-free manner from online regret minimization on the mixture and serve to improve average performance without affecting the closed-form minimax policy for the worst agent. We will add a clarifying subsection on the information structure in the revised manuscript to eliminate any ambiguity regarding coordination. revision: yes
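The server-side weighting the authors describe can be illustrated with a standard exponential-weights (Hedge) update over client predictions. This is a generic sketch of that family of mechanisms, with our own learning rate and squared loss rather than the paper's exact rule; note the weights depend only on aggregate per-client losses, matching the claimed information structure:

```python
import numpy as np

def hedge_mixture(client_preds, targets, eta=0.5):
    """Server-side exponential-weights mixture over client predictions.
    Weights update from per-client losses only (aggregate statistics),
    so no client observes another's data. A generic Hedge sketch, not
    the paper's exact mechanism."""
    n_rounds, n_clients = client_preds.shape
    log_w = np.zeros(n_clients)
    mixed = np.empty(n_rounds)
    for t in range(n_rounds):
        w = np.exp(log_w - log_w.max())  # normalize in log space for stability
        w /= w.sum()
        mixed[t] = w @ client_preds[t]   # server's mean prediction this round
        log_w -= eta * (client_preds[t] - targets[t]) ** 2
    return mixed

# toy ensemble: client 0 predicts perfectly, client 1 carries a constant bias
targets = np.linspace(0.0, 1.0, 100)
preds = np.stack([targets, targets + 1.0], axis=1)
mixed = hedge_mixture(preds, targets)
mse_mixed = float(np.mean((mixed - targets) ** 2))
mse_uniform = float(np.mean((preds.mean(axis=1) - targets) ** 2))
```

With one accurate client and one biased client, the weighted mixture's squared error quickly tracks the accurate client rather than the uniform average, improving the mean prediction without touching any individual agent's policy.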

Circularity Check

0 steps flagged

Derivation self-contained via mean-field Nash analysis with no reduction to inputs

full rationale

The paper derives the decentralized policy in closed form from the mean-field limit and a minimax regret criterion, then proves asymptotic convergence to the centralized Nash equilibrium. No quoted step reduces a claimed prediction or uniqueness result to a fitted parameter, self-citation chain, or definitional tautology; the online weighting mechanism is presented as an additional optimization layer rather than a re-labeling of fitted quantities. The central claims rest on standard mean-field convergence arguments and the explicit worst-client criterion without importing uniqueness from prior self-work or smuggling ansatzes. This is the normal case of an independent derivation.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract provides no explicit free parameters, axioms, or invented entities; the mean-field model and large-population limit are invoked but not detailed as assumptions or derivations.

pith-pipeline@v0.9.0 · 5494 in / 1289 out tokens · 72085 ms · 2026-05-08T16:43:28.718343+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

48 extracted references · 10 canonical work pages · 3 internal anchors

  1. L. Bai et al. How do AI agents spend your money? Analyzing and predicting token consumption in agentic coding tasks. arXiv preprint arXiv:2604.22750, 2026.

  2. Bank of Canada. Historical noon and closing rates, Feb 2025.

  3. Tamer Başar and Geert Jan Olsder. Dynamic Noncooperative Game Theory. SIAM: Society for Industrial and Applied Mathematics, Philadelphia, PA, 2nd edition, 1999.

  4. Richard Bellman. The theory of dynamic programming. Bulletin of the American Mathematical Society, 60(6):503–515, 1954.

  5. Kate Donahue and Jon Kleinberg. Model-sharing games: Analyzing federated learning under voluntary participation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.

  6. Kate Donahue and Jon Kleinberg. Optimality and stability in federated learning: A game-theoretic approach. In Advances in Neural Information Processing Systems, 2021.

  7. Ahmad Faiz, Sotaro Kaneda, Ruhan Wang, Rita Osi, Prateek Sharma, Fan Chen, and Lei Jiang. LLMCarbon: Modeling the end-to-end carbon footprint of large language models. In International Conference on Learning Representations, 2024.

  8. Lyudmila Grigoryeva and Juan-Pablo Ortega. Echo state networks are universal. Neural Networks, 108:495–508, 2018.

  9. Geoffrey R. Grimmett and David R. Stirzaker. Probability and Random Processes. Oxford University Press, May 2001.

  10. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer-Verlag, New York, 2001.

  11. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

  12. Guang-Bin Huang, Lei Chen, and Chee-Kheong Siew. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4):879–892, 2006.

  13. Minyi Huang, Peter E. Caines, and Roland P. Malhamé. Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ε-Nash equilibria. IEEE Transactions on Automatic Control, 52(9):1560–1571, 2007.

  14. Minyi Huang, Roland P. Malhamé, and Peter E. Caines. Large population stochastic dynamic games: Closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems, 6(3):221–252, 2006.

  15. Minyi Huang and Xuwei Yang. Linear quadratic mean field game: Decentralized O(1/N) Nash equilibria. Journal of Systems Science and Complexity, 34(5):2003–2035, 2021.

  16. Minyi Huang and Mengjie Zhou. Linear quadratic mean field games: Asymptotic solvability and relation to the fixed point approach. IEEE Transactions on Automatic Control, 65(4):1397–1412, 2020.

  17. Xuancheng Huang, Sebastian Jaimungal, and Mojtaba Nourian. Mean-field game strategies for optimal execution. Applied Mathematical Finance, 26(2):153–185, 2019.

  18. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.

  19. Jean-Michel Lasry and Pierre-Louis Lions. Jeux à champ moyen. I – Le cas stationnaire. Comptes Rendus Mathématique, 343(9):619–625, 2006.

  20. Jean-Michel Lasry and Pierre-Louis Lions. Jeux à champ moyen. II – Horizon fini et contrôle optimal. Comptes Rendus Mathématique, 343(10):679–684, 2006.

  21. Jean-Michel Lasry and Pierre-Louis Lions. Mean field games. Japanese Journal of Mathematics, 2(1):229–260, 2007.

  22. François Le Gall. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC '14), pages 296–303, New York, NY, USA, 2014. Association for Computing Machinery.

  23. Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 6357–.

  24. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems, volume 2, pages 429–450, 2020.

  25. X. Liang and P. E. Caines. Decentralized open-loop strategies of linear quadratic mean field games. In IFAC-PapersOnLine, volume 56, pages 11464–11469. Elsevier, 2023.

  26. Xinting Liao, Chaochao Chen, Weiming Liu, Pengyang Zhou, Huabin Zhu, Shuheng Shen, Weiqiang Wang, Mengling Hu, Yanchao Tan, and Xiaolin Zheng. Joint local relational augmentation and global Nash equilibrium for federated learning with non-iid data. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23), pages 1536–1545, New York, ...

  27. T. Liu et al. Budget-aware tool-use enables effective agent scaling. arXiv preprint arXiv:2511.17006, 2025.

  28. Ziyu Liu, Shengyuan Hu, Zhiwei Steven Wu, and Virginia Smith. On privacy and personalization in cross-silo federated learning. In Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22), Red Hook, NY, USA, 2022. Curran Associates Inc.

  29. Alexandra Sasha Luccioni, Yacine Jernite, and Emma Strubell. Power hungry processing: Watts driving the cost of AI deployment? In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), pages 85–99, Rio de Janeiro, Brazil, 2024.

  30. Alexandra Sasha Luccioni, Sylvain Viguier, and Anne-Laure Ligozat. Estimating the carbon footprint of BLOOM, a 176B parameter language model. Journal of Machine Learning Research, 24(253):1–15, 2023.

  31. Arash Mehrjou. Federated learning as a mean-field game. arXiv preprint arXiv:2107.03770, 2021.

  32. Aniket Murhekar, Milind Tambe, Kai Wang, and Yevgeniy Vorobeychik. Incentives in federated learning: Equilibria, dynamics, and mechanisms for welfare maximization. In Advances in Neural Information Processing Systems, 2023.

  33. Aniket Murhekar, Zhuowen Yuan, Bhaskar Ray Chaudhury, Bo Li, and Ruta Mehta. Incentives in federated learning: Equilibria, dynamics, and mechanisms for welfare maximization. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 17811–17831. Curran Associates, Inc., 2023.

  34. NVIDIA Corporation. NVIDIA RTX A4000 Graphics Card. https://www.nvidia.com/en-us/products/workstations/rtx-a4000/, 2026. Accessed: 2026-05-02.

  35. Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. Advances in Neural Information Processing Systems, 21, 2008.

  36. Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, and Vijay Gadepally. From words to watts: Benchmarking the energy costs of large language model inference. In 2023 IEEE High Performance Extreme Computing Conference, pages 1–9, 2023.

  37. Nils Schaetti. EchoTorch: Reservoir computing with PyTorch. https://github.com/nschaetti/EchoTorch, 2018.

  38. Hamed Shiri, Jihong Park, and Mehdi Bennis. Communication-efficient massive UAV online path control: Federated learning meets mean-field game theory. IEEE Transactions on Communications, 2020.

  39. Sun et al. Privacy as commodity: MFG-RegretNet for large-scale privacy trading in federated learning. arXiv preprint arXiv:2603.28329, 2026.

  40. Sun, Wu, and Li. Reputation-aware incentive mechanism of federated learning: A mean field game approach. arXiv preprint, 2024.

  41. Yuan-An Xiao, Pengfei Gao, Chao Peng, and Yingfei Xiong. Reducing cost of LLM agents with trajectory reduction. arXiv preprint arXiv:2509.23586, 2025.

  42. Wanyun Xie, Thomas Pethick, Ali Ramezani-Kebrya, and Volkan Cevher. Mixed Nash for robust federated learning. Transactions on Machine Learning Research, 2024.

  43. Zhenhui Xu and Tielong Shen. Decentralized ε-Nash strategy for linear quadratic mean field games using a successive approximation approach. Asian Journal of Control, 26(2):565–574, 2024.

  44. Xuwei Yang, Anastasis Kratsios, Florian Krach, Matheus Grasselli, and Aurelien Lucchi. Synchronizing pretrained kernel regressors with applications to American option pricing. Frontiers of Mathematical Finance, 8:23–77, March 2026.

  45. Xuwei Yang, Fatemeh Tavakoli, David B. Emerson, and Anastasis Kratsios. Online federation for mixtures of proprietary agents with black-box encoders. arXiv preprint arXiv:2505.00216, 2025.

  46. Taeho Yoon, Sayak Ray Chowdhury, and Nicolas Loizou. Multiplayer federated learning: Reaching equilibrium with communication-efficient algorithms. arXiv preprint arXiv:2501.08263, 2025.

  47. Yuan and Wang. A game-theoretic framework for privacy-aware client sampling in federated learning. arXiv preprint arXiv:2412.05636, 2024.

  48. Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient Transformer for long sequence time-series forecasting. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), Virtual Conference, volume 35, pages 11106–11115. AAAI Press, 2021.