pith. machine review for the scientific record.

arxiv: 2604.16533 · v1 · submitted 2026-04-16 · 💻 cs.LG · cs.AI

Recognition: unknown

G-PARC: Graph-Physics Aware Recurrent Convolutional Neural Networks for Spatiotemporal Dynamics on Unstructured Meshes

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 11:48 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords graph neural networks · physics-informed learning · spatiotemporal prediction · unstructured meshes · moving least squares · nonlinear dynamics · recurrent networks · partial differential equations

The pith

G-PARC embeds moving least squares approximations of differential operators into graph networks to model spatiotemporal dynamics on unstructured meshes.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes G-PARC to overcome the grid restrictions of pixel-based physics-aware networks and the limitations of current graph methods in extreme nonlinear regimes. It approximates spatial derivatives of governing PDEs via moving least squares kernels placed directly on unstructured graph nodes and inserts these operators into the recurrent network graph. The design eliminates the conventional encoder-processor-decoder stack, yielding 2-3 times fewer parameters while maintaining or improving accuracy. Demonstrations cover generalization to nonuniform space-time grids, handling of deforming meshes, and superior results on fluvial hydrology, shock waves, and elastoplastic problems. This integration of explicit physics with graph flexibility extends simulation capabilities to complex, evolving domains.

Core claim

G-PARC replaces the traditional encoder-processor-decoder framework of graph neural networks with analytically computed differential operators derived from the governing partial differential equations using moving least squares kernels on unstructured graphs. This embedding enables superior performance in predicting nonlinear spatiotemporal dynamics across benchmarks involving fluvial hydrology, planar shock waves, and elastoplastic dynamics, with 2-3 times fewer parameters, while generalizing to nonuniform discretizations and handling moving meshes for structural deformation.

What carries the argument

Moving least squares kernels that approximate spatial derivatives on unstructured graphs, which are then embedded directly into the recurrent convolutional network to enforce physical constraints.
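
A minimal numpy sketch of one such derivative estimate at a single node, assuming a Gaussian weight kernel and the quadratic basis of the paper's Figure 6 (the function name, kernel choice, and support radius here are illustrative stand-ins, not the authors' implementation):

import numpy as np

def mls_derivatives(x_i, s_i, x_nbrs, s_nbrs, radius):
    """MLS estimate of first and second derivatives of a field s at node i.

    x_i    : (2,) coordinates of the center node
    s_i    : field value at the center node
    x_nbrs : (k, 2) coordinates of the k >= 5 graph neighbors of node i
    s_nbrs : (k,) field values at those neighbors
    radius : support radius of the (assumed) Gaussian weight kernel
    """
    d = x_nbrs - x_i                                # relative positions
    dx, dy = d[:, 0], d[:, 1]
    # One quadratic-basis row per neighbor, from the Taylor expansion
    # s_j - s_i ~ dx*s_x + dy*s_y + 0.5*dx^2*s_xx + dx*dy*s_xy + 0.5*dy^2*s_yy
    H = np.stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2], axis=1)
    # Square root of a Gaussian weight, so lstsq solves the weighted problem
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / radius**2)
    c, *_ = np.linalg.lstsq(H * w[:, None], w * (s_nbrs - s_i), rcond=None)
    return c[:2], c[2] + c[4]                       # gradient, Laplacian

Because the stencil is determined by geometry and a fixed kernel, it contributes no trainable parameters; in G-PARC only the source-term GNN and the fusion MLP are learned.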

If this is right

  • The model can simulate physical processes on deforming domains without remeshing overhead.
  • It reduces computational cost in terms of model size for physics-informed predictions.
  • Performance holds across different spatial and temporal resolutions without retraining.
  • It extends physics-aware learning to domains where Cartesian grids are impractical.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • This approach might allow hybrid models where some operators are analytic and others learned for multi-scale problems.
  • Applications in real-time engineering simulations could benefit from the parameter efficiency.
  • Testing on higher-dimensional or coupled systems would reveal scalability limits.

Load-bearing premise

Moving least squares kernels provide sufficiently accurate approximations to spatial derivatives even in extreme nonlinear regimes on evolving unstructured meshes.

What would settle it

Observing large approximation errors or numerical instability in G-PARC predictions for a benchmark like planar shock waves on a highly deforming mesh would falsify the reliability of the MLS embedding.

Figures

Figures reproduced from arXiv: 2604.16533 by Andrew Davis, H.S. Udaykumar, Jack T. Beerman, Mehdi Taghizadeh, Negin Alemazkoor, Stephen S. Baek, Tyler J. Abele, Xinfeng Gao, Zoë J. Gray.

Figure 1
Figure 1: G-PARC architecture with MLS differential operators. A GNN approximates source terms R, while domain-specific MLS operators compute physics features ∇, ∇² with zero learned parameters. The fusion module concatenates [s, ∇, ∇², R] and approximates ds/dt via an MLP, which is then advanced by a numerical integrator (Euler/Heun/RK4) and fed back autoregressively. view at source ↗
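
The loop this caption describes is concrete enough to sketch in PyTorch; the gnn, fusion_mlp, and mls callables are stand-ins with assumed interfaces, and Heun is one of the three integrators the figure names:

import torch

def dsdt(s, coords, gnn, fusion_mlp, mls):
    grad, lap = mls(s, coords)      # parameter-free physics features (∇, ∇²)
    R = gnn(s, coords)              # learned source term
    # Fusion module: concatenate [s, ∇, ∇², R] and map to ds/dt with an MLP
    return fusion_mlp(torch.cat([s, grad, lap, R], dim=-1))

def rollout(s0, coords, dt, n_steps, gnn, fusion_mlp, mls):
    # Autoregressive rollout with a Heun (second-order) step; Euler or RK4
    # would slot in here identically.
    s, traj = s0, [s0]
    for _ in range(n_steps):
        k1 = dsdt(s, coords, gnn, fusion_mlp, mls)
        k2 = dsdt(s + dt * k1, coords, gnn, fusion_mlp, mls)
        s = s + 0.5 * dt * (k1 + k2)
        traj.append(s)
    return torch.stack(traj)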
Figure 2
Figure 2: Top three best performing models for Iowa River Flooding Model Prediction. The models are predicting the water depth of the Iowa River utilizing a rollout prediction during a flood at three different time steps (t = 1 hr, 3.3 hrs, and 6.3 hrs). The top row is the ground truth, followed by G-PARC, G-PARC (w/o MLS), and MGKAN. All three models have a good prediction at the t = 1 hr mark, however, all models predict mo… view at source ↗
Figure 3
Figure 3: Top three best performing models for Planar Shockwave Model Prediction. The models are predicting x-momentum ρu, with ρ being fluid density (kg/m³) and u being velocity (m/s), meaning we measure ρu in terms of kg/(m²·s). We utilize a rollout prediction for all models at three different timesteps (t = 2.65 × 10⁻⁵ s, t = 8.82 × 10⁻⁵ s, and t = 2.00 × 10⁻⁴ s). Additionally, the pressure of the system is set t… view at source ↗
Figure 4
Figure 4: Planar Shockwave Model Prediction for PARCv2. PARCv2 predicts the total energy E (J/m³), which represents the sum of internal and kinetic energy per unit volume. PARCv2 fails to produce a meaningful prediction. view at source ↗
Figure 5
Figure 5: Top three best performing models for elastoplastic dataset: G-PARC, G-PARC (w/o MLS), and MGKAN. view at source ↗
Figure 6
Figure 6: Overview of the MLS differential operator construction used in G-PARC. (a) An irregular mesh with non-uniform node spacing, where classical finite difference stencils are invalid. (b) For each node vᵢ, the relative position of each neighbor vⱼ is encoded as a quadratic basis vector Hᵢⱼ, assembling one row per neighbor. (c) Differential operators are computed directly from local geometry. (d) The resultin… view at source ↗
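
Panels (b)-(d) pin down the construction closely enough for a sketch of its precomputed form; the unweighted pseudoinverse below is an assumed implementation shortcut, not necessarily the authors' exact kernel:

import numpy as np

def operator_weights(x_i, x_nbrs):
    """Fixed stencil weights for d/dx, d/dy, and the Laplacian at one node,
    computed from local geometry alone, hence zero learned parameters."""
    d = x_nbrs - x_i
    dx, dy = d[:, 0], d[:, 1]
    # One quadratic-basis row H_ij per neighbor (panel b)
    H = np.stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2], axis=1)
    P = np.linalg.pinv(H)            # (5, k): neighbor differences -> derivatives
    return P[0], P[1], P[2] + P[4]   # w_dx, w_dy, w_laplacian

Each returned vector becomes one row of a sparse operator matrix (e.g. the x-derivative at node i is w_dx @ (s_nbrs - s_i)); on a deforming mesh the rows are simply rebuilt from the updated coordinates.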
Figure 7
Figure 7: Schematic of a typical planar shock wave problem. x_D denotes the location of the diaphragm. Rupture of the diaphragm produces an expansion fan (left), a contact surface, and a shock wave (right). The numerical solution is computed with the in-house CFD solver Chord [24], a structured-grid fourth-order finite volume method. The finite volume method solves the integral form of the compressible Euler equation… view at source ↗
Figure 8
Figure 8: Parameter space distribution across the 400 training, 25 validation, and 75 test cases in the p_L–ρ_L domain. The split shows that training covers the full interior grid, validation is restricted to a narrow mid-range pressure band, and test cases are deliberately concentrated at the extremes to evaluate out-of-distribution extrapolation. view at source ↗
Figure 9
Figure 9: Top three best performing models for White River Flooding Model Prediction: G-PARC, G-PARC (w/o MLS), and MGKAN. view at source ↗
Figure 10
Figure 10: Worst two baseline models for Iowa River Flooding Model Prediction: MGNET and GraphSAGE. view at source ↗
Figure 11
Figure 11: Predicted density fields for the top three models (G-PARC, G-PARC (w/o MLS), and MGKAN) on the same representative test case as … view at source ↗
Figure 12
Figure 12: Predicted total energy fields for the top three models (G-PARC, G-PARC (w/o MLS), and MGKAN) on the same representative test case as … view at source ↗
Figure 13
Figure 13: Predicted x-momentum fields for the two worst-performing models (MGNET and GraphSAGE) on the same representative test case as … view at source ↗
Figure 14
Figure 14: Worst two performing models for elastoplastic dataset: MGNET and GraphSAGE. view at source ↗
read the original abstract

Physics-aware recurrent convolutional networks (PARC) have demonstrated strong performance in predicting nonlinear spatiotemporal dynamics by embedding differential operators directly into the computational graph of a neural network. However, pixel-based convolutions are restricted to static, uniform Cartesian grids, making them ill-suited to following evolving localized structures in an efficient manner. Graph neural networks (GNNs) naturally handle irregular spatial discretizations, but existing graph-based physics-aware deep learning (PADL) methods have difficulty handling extreme nonlinear regimes. To address these limitations, we propose Graph PARC (G-PARC), which uses moving least squares (MLS) kernels to approximate spatial derivatives on unstructured graphs, and embeds the derivatives of governing partial differential equations into the network's computational graph. G-PARC achieves better accuracy with 2-3x fewer parameters than MeshGraphNet, MeshGraphKAN, and GraphSAGE, replacing the traditional encoder-processor-decoder framework with analytically computed differential operators. We demonstrate that G-PARC (1) generalizes across nonuniform spatial and temporal discretizations; (2) handles moving meshes required for structural deformation; and (3) outperforms existing graph-based PADL methods on nonlinear benchmarks including fluvial hydrology, planar shock waves, and elastoplastic dynamics. By embedding explicit physical operators within the flexibility of GNNs, G-PARC enables accurate modeling of extreme nonlinear phenomena on complex computational domains, moving PADL beyond idealized Cartesian grids.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes G-PARC, a graph-physics aware recurrent convolutional network that approximates spatial derivatives of governing PDEs via moving least squares (MLS) kernels on unstructured graphs and embeds these operators directly into the network computational graph. It replaces the conventional encoder-processor-decoder stack of graph PADL models with these analytically computed differential operators, claiming 2-3x fewer parameters and higher accuracy than MeshGraphNet, MeshGraphKAN, and GraphSAGE while generalizing across nonuniform spatial/temporal discretizations and handling moving meshes on nonlinear benchmarks (fluvial hydrology, planar shock waves, elastoplastic dynamics).

Significance. If the central claims hold, the work would advance physics-aware deep learning by extending explicit differential-operator embedding beyond Cartesian grids to unstructured and deforming domains, potentially improving parameter efficiency and generalization in extreme nonlinear regimes. The explicit use of MLS-based operators rather than learned surrogates is a methodological strength that could enhance interpretability.

major comments (2)
  1. [Abstract] Abstract: the claims of superior accuracy, 2-3x parameter reduction, and generalization rest on unshown quantitative results; no error metrics, ablation studies, or direct comparison tables are supplied, rendering the central empirical claim unevaluable.
  2. [Method (MLS embedding)] Section describing MLS embedding and recurrent update (likely §3): the assertion that analytically embedded MLS operators supply faithful spatial derivatives (gradient, divergence, etc.) on moving unstructured meshes in discontinuous regimes is load-bearing for the claim that performance gains derive from physics embedding rather than GNN capacity, yet no isolated MLS truncation/consistency error, condition-number analysis, or stability test on the shock-wave or elastoplastic test meshes is reported.
minor comments (2)
  1. [Figures] Figure captions and mesh-visualization panels should explicitly label MLS support radii and show how kernels adapt under mesh motion to aid reproducibility.
  2. [Notation] Notation for the MLS weight function and the recurrent convolutional update rule could be unified with standard GNN message-passing notation for clarity.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed review. We address each major comment below and describe the revisions we will make to strengthen the manuscript.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claims of superior accuracy, 2-3x parameter reduction, and generalization rest on unshown quantitative results; no error metrics, ablation studies, or direct comparison tables are supplied, rendering the central empirical claim unevaluable.

    Authors: We agree that the abstract would be strengthened by including concrete quantitative metrics rather than summary statements. The manuscript body (Section 4) contains the supporting tables and figures with L2 error metrics, parameter counts, ablation results, and direct comparisons against MeshGraphNet, MeshGraphKAN, and GraphSAGE across the three benchmarks. To address the referee's concern directly, we will revise the abstract to report key numerical results (e.g., specific error reductions and the observed 2-3x parameter savings) while preserving its length constraints. revision: yes

  2. Referee: [Method (MLS embedding)] Section describing MLS embedding and recurrent update (likely §3): the assertion that analytically embedded MLS operators supply faithful spatial derivatives (gradient, divergence, etc.) on moving unstructured meshes in discontinuous regimes is load-bearing for the claim that performance gains derive from physics embedding rather than GNN capacity, yet no isolated MLS truncation/consistency error, condition-number analysis, or stability test on the shock-wave or elastoplastic test meshes is reported.

    Authors: The referee correctly notes that isolated verification of the MLS operators is important for attributing gains to the physics embedding. Section 3 presents the MLS formulation, kernel choice, and consistency conditions for derivative approximation on graphs, together with the recurrent update that incorporates these operators. While the end-to-end accuracy on the discontinuous shock-wave and large-deformation elastoplastic benchmarks provides indirect evidence of operator fidelity, we acknowledge the value of dedicated diagnostics. In the revision we will add a new subsection (or appendix) containing MLS truncation-error and consistency plots, condition-number statistics, and stability checks performed on representative meshes extracted from the shock-wave and elastoplastic test cases. revision: yes
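
To give the promised diagnostics a concrete shape, a minimal isolated test might look like the sketch below; the analytic test field, mesh inputs, and function name are placeholders rather than the authors' protocol.

import numpy as np

def mls_diagnostics(coords, neighbor_lists):
    """Gradient truncation error on s(x, y) = sin(x)cos(y), whose exact
    gradient is known, plus the condition number of each local basis."""
    x, y = coords[:, 0], coords[:, 1]
    s = np.sin(x) * np.cos(y)
    grad_true = np.stack([np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)], axis=1)
    errors, conds = [], []
    for i, nbrs in enumerate(neighbor_lists):
        d = coords[nbrs] - coords[i]
        dx, dy = d[:, 0], d[:, 1]
        H = np.stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2], axis=1)
        conds.append(np.linalg.cond(H))   # large values flag degenerate stencils
        c, *_ = np.linalg.lstsq(H, s[nbrs] - s[i], rcond=None)
        errors.append(np.linalg.norm(c[:2] - grad_true[i]))
    return np.asarray(errors), np.asarray(conds)

Run on representative meshes from the shock-wave and elastoplastic cases, before and after deformation, such a test separates operator fidelity from network capacity, which is the attribution question the referee raises.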

Circularity Check

0 steps flagged

No circularity detected in G-PARC derivation chain

full rationale

The paper's core construction embeds standard moving least squares (MLS) kernels for spatial derivative approximation on unstructured graphs directly into a recurrent GNN architecture, replacing the encoder-processor-decoder stack with these analytically computed operators. This integration draws from established numerical methods for PDE discretization and existing GNN frameworks without any self-definitional loops, fitted inputs renamed as predictions, or load-bearing self-citations that reduce the central claims to prior unverified results by the same authors. Performance is evaluated via direct comparisons on external benchmarks (fluvial hydrology, shock waves, elastoplastic dynamics), and the method remains falsifiable through isolated MLS error quantification or mesh-motion stability tests outside the fitted network weights. No steps reduce by construction to the model's own inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No free parameters, axioms, or invented entities can be identified from the abstract; the method description is high-level and does not expose specific fitted quantities or new postulated objects.

pith-pipeline@v0.9.0 · 5603 in / 1135 out tokens · 28285 ms · 2026-05-10T11:48:32.865861+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

24 extracted references · 7 canonical work pages · 2 internal anchors

  1. [1]

PARC: Physics-aware recurrent convolutional neural networks to assimilate meso-scale reactive mechanics of energetic materials. Science Advances, 9(17):eadd6868, 2023

    Phong CH Nguyen, Yen-Thi Nguyen, Joseph B Choi, Pradeep K Seshadri, HS Udaykumar, and Stephen S Baek. PARC: Physics-aware recurrent convolutional neural networks to assimilate meso-scale reactive mechanics of energetic materials. Science Advances, 9(17):eadd6868, 2023

  2. [2]

PARCv2: Physics-aware recurrent convolutional neural networks for spatiotemporal dynamics modeling

    Phong CH Nguyen, Xinlun Cheng, Shahab Azarfar, Pradeep Seshadri, Yen T Nguyen, Munho Kim, Sanghun Choi, HS Udaykumar, and Stephen Baek. PARCv2: Physics-aware recurrent convolutional neural networks for spatiotemporal dynamics modeling. arXiv preprint arXiv:2402.12503, 2024

  3. [3]

Physics-aware recurrent convolutional neural networks for modeling multiphase compressible flows. International Journal of Multiphase Flow, 177:104877, 2024

    Xinlun Cheng, Phong CH Nguyen, Pradeep K Seshadri, Mayank Verma, Zoë J Gray, Jack T Beerman, HS Udaykumar, and Stephen S Baek. Physics-aware recurrent convolutional neural networks for modeling multiphase compressible flows. International Journal of Multiphase Flow, 177:104877, 2024

  4. [4]

Multi-resolution physics-aware recurrent convolutional neural network for complex flows. APL Machine Learning, 3(4), 2025

    Xinlun Cheng, Joseph Choi, HS Udaykumar, and Stephen Baek. Multi-resolution physics-aware recurrent convolutional neural network for complex flows. APL Machine Learning, 3(4), 2025

  5. [5]

Reduced order modeling of energetic materials using physics-aware recurrent convolutional neural networks in a latent space (LatentPARC). arXiv preprint arXiv:2509.12401, 2025

    Zoë J Gray, Joseph B Choi, Youngsoo Choi, H Keo Springer, HS Udaykumar, and Stephen S Baek. Reduced order modeling of energetic materials using physics-aware recurrent convolutional neural networks in a latent space (LatentPARC). arXiv preprint arXiv:2509.12401, 2025

  6. [6]

Size is not the solution: Deformable convolutions for effective physics aware deep learning. arXiv preprint arXiv:2601.11657, 2026

    Jack T Beerman, Shobhan Roy, HS Udaykumar, and Stephen S Baek. Size is not the solution: Deformable convolutions for effective physics aware deep learning. arXiv preprint arXiv:2601.11657, 2026

  7. [7]

    Learning mesh-based simulation with graph networks

Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter Battaglia. Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, 2020

  8. [8]

    KAN: Kolmogorov-Arnold Networks

Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljačić, Thomas Y Hou, and Max Tegmark. KAN: Kolmogorov-Arnold networks. arXiv preprint arXiv:2404.19756, 2024

  9. [9]

Interpretable physics-informed graph neural networks for flood forecasting. Computer-Aided Civil and Infrastructure Engineering, 2025

    Mehdi Taghizadeh, Zanko Zandsalimi, Mohammad Amin Nabian, Majid Shafiee-Jood, and Negin Alemazkoor. Interpretable physics-informed graph neural networks for flood forecasting. Computer-Aided Civil and Infrastructure Engineering, 2025

  10. [10]

Combining physics-informed graph neural network and finite difference for solving forward and inverse spatiotemporal PDEs. Computer Physics Communications, 308:109462, 2025

    Hao Zhang, Longxiang Jiang, Xinkun Chu, Yong Wen, Luxiong Li, Jianbo Liu, Yonghao Xiao, and Liyuan Wang. Combining physics-informed graph neural network and finite difference for solving forward and inverse spatiotemporal PDEs. Computer Physics Communications, 308:109462, 2025

  11. [11]

    Fourier Neural Operator for Parametric Partial Differential Equations

    Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020

  12. [12]

Geometry-informed neural operator for large-scale 3D PDEs. Advances in Neural Information Processing Systems, 36:35836–35854, 2023

    Zongyi Li, Nikola Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, et al. Geometry-informed neural operator for large-scale 3D PDEs. Advances in Neural Information Processing Systems, 36:35836–35854, 2023

  13. [13]

Characterizing possible failure modes in physics-informed neural networks

    Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 26548–26560. Curran Associates, Inc., 2021

  14. [14]

Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055–A3081, 2021

    Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055–A3081, 2021

  15. [15]

    Spectral neural operators

Vladimir Sergeevich Fanaskov and Ivan V Oseledets. Spectral neural operators. In Doklady Mathematics, volume 108, pages S226–S232. Springer, 2023

  16. [16]

    Are neural operators really neural operators? frame theory meets operator learning

Francesca Bartolucci, Emmanuel de Bézenac, Bogdan Raonić, Roberto Molinaro, Siddhartha Mishra, and Rima Alaifari. Are neural operators really neural operators? Frame theory meets operator learning. SAM Research Report, 2023, 2023

  17. [17]

    Physics-learning ai datamodel (plaid) datasets: a collection of physics simulations for machine learning

Fabien Casenave, Xavier Roynard, Brian Staber, William Piat, Michele Alessandro Bucci, Nissrine Akkari, Abbas Kabalan, Xuan Minh Vuong Nguyen, Luca Saverio, Raphaël Carpintero Perez, et al. Physics-learning AI datamodel (PLAID) datasets: a collection of physics simulations for machine learning. arXiv preprint arXiv:2505.02974, 2025

  18. [18]

    Inductive representation learning on large graphs

Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017

  19. [19]

HEC-RAS: River Analysis System

    U.S. Army Corps of Engineers, Hydrologic Engineering Center. HEC-RAS: River Analysis System. U.S. Army Corps of Engineers, Davis, CA, 2025. Version 6.6

  20. [20]

USGS National Water Information System surface-water data

    U.S. Geological Survey. USGS National Water Information System surface-water data, 2023

  21. [21]

    OpenRadioss: Open-source finite element solver for dynamic event analysis

    OpenRadioss Community. OpenRadioss: Open-source finite element solver for dynamic event analysis. https://openradioss.org/, 2022. Accessed: March 2026

  22. [22]

Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems

    Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995

  23. [23]

    NVIDIA PhysicsNeMo: An open-source framework for physics-informed machine learning

    NVIDIA. NVIDIA PhysicsNeMo: An open-source framework for physics-informed machine learning. https://developer.nvidia.com/physicsnemo, 2024. Accessed: March 2026

  24. [24]

A high-order finite-volume method for combustion

    Xinfeng Gao, Landon D Owen, and Stephen M Guzik. A high-order finite-volume method for combustion. In 54th AIAA Aerospace Sciences Meeting, page 1808, 2016