pith. machine review for the scientific record. sign in

arxiv: 2604.18953 · v2 · submitted 2026-04-21 · 💻 cs.LG

Recognition: unknown

FlowForge: A Staged Local Rollout Engine for Flow-Field Prediction

David L. S. Hung, Fengnian Zhao, Xiaowen Zhang, Ziming Zhou

Authors on Pith no claims yet

Pith reviewed 2026-05-10 03:11 UTC · model grok-4.3

classification 💻 cs.LG
keywords flow-field predictionstaged rolloutlocal predictorCFD surrogaterobustness to noisemulti-step rolloutlocality-preserving schedule
0
0 comments X

The pith

FlowForge predicts flow fields by compiling locality-preserving stages and running them with a shared lightweight local predictor.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents FlowForge as a method that rewrites flow-field predictions stage by stage rather than in one global pass. Each stage updates spatial sites using only bounded local context from prior stages, executed by the same simple predictor. This design is intended to match or exceed the accuracy of larger models while handling noisy or incomplete inputs more reliably and keeping per-step computation lighter. A sympathetic reader would care because existing deep-learning surrogates for fluid simulations often become unstable over multiple steps or too slow for repeated use. If the approach holds, it would allow longer, more dependable rollouts in engineering workflows without scaling model size.

Core claim

FlowForge rewrites spatial sites stage by stage so that each update conditions only on bounded local context exposed by earlier stages. It compiles a locality-preserving update schedule from the spatial sites and executes that schedule with a shared lightweight local predictor. The resulting compile-execute design aligns inference with short-range physical dependence, keeps latency predictable, and limits error amplification from global mixing.

What carries the argument

The staged local rollout engine that compiles a locality-preserving update schedule and executes it with a shared lightweight local predictor.

Load-bearing premise

That updates based on bounded local context alone can be executed without losing global physical consistency or creating artifacts in complex multi-scale flows.

What would settle it

Long multi-step rollouts on a complex flow benchmark that show faster growth in pointwise error or visible physical violations compared with a strong global baseline.

Figures

Figures reproduced from arXiv: 2604.18953 by David L. S. Hung, Fengnian Zhao, Xiaowen Zhang, Ziming Zhou.

Figure 1
Figure 1. Figure 1: From global one-step prediction to locality-preserving rollout. (a) Global predictors update all sites simultaneously. (b) Short-horizon dynamics exhibit bounded spatial dependence. (c) FLOWFORGE performs a staged local rollout aligned with this structure. Over short horizons, however, physical influence is predominantly local: each spatial site depends almost only on a bounded neighborhood. Single-pass gl… view at source ↗
Figure 2
Figure 2. Figure 2: FLOWFORGE workflow. Offline, we compile a rollout plan and lower it into index tables. Online, the executor overwrites a working buffer stage by stage using a shared local predictor Gθ, producing Ubt+1. and then commits the results by writing them back to the working buffer: yi ← ybi , i ∈ Ss. (4) We use a single shared local predictor Gθ across all stages: the update rule is stationary, and stages differ … view at source ↗
Figure 3
Figure 3. Figure 3: Streamline prediction visualizations for the FB-Gravity dataset. Original +Global Noise +Global Masking +Edge Noise +Edge Masking 0.0 0.2 0.4 0.6 0.8 1.0 Average Normalized Loss 0.11 0.09 0.21 0.13 0.60 0.27 0.27 0.45 0.34 1.00 0.16 0.15 0.36 0.12 0.76 0.17 0.15 0.36 0.20 0.72 Prediction Robustness with Perturbed Inputs FlowForge (Ours) DeepONet FNO U-Net [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Quantitative evaluation of model robustness under input perturbations. RMSE losses are max-scaled normalized relative to the worst-performing case per dataset. Bar heights indicate the average normalized loss, and error bars represent the standard deviation. U-Net produces overly uniform streamlines with widespread artifacts, likely from amplifying local uncertainty across the domain. Overall, this case st… view at source ↗
Figure 5
Figure 5. Figure 5: Robustness case study under global block masking corruption. 10 4 10 5 Flow field size (# grids) 0.01 0.02 0.03 0.05 0.07 0.09 0.11 0.13 0.15 Latency / grid [µs] DeepONet U-Net FNO FlowForge [PITH_FULL_IMAGE:figures/full_fig_p009_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Latency comparison across datasets of increasing resolution. Lower is better. In contrast, FLOWFORGE produces a reconstruction that is visually close to the ground truth, with almost no visible trace of the mask and errors confined to a narrow band surrounding the occluded blocks. 4.3 Latency Analysis We report inference latency for predicting 100 future frames, normalized by the number of spatial grid poi… view at source ↗
Figure 7
Figure 7. Figure 7: Multi-step autoregressive rollout on CFDBench-Cylinder. [PITH_FULL_IMAGE:figures/full_fig_p022_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Evolution of Nearest Preceding Neighbor Distance ( [PITH_FULL_IMAGE:figures/full_fig_p022_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Complete robustness summaries across all four CFDBench datasets. Each panel aggregates performance under global and edge-localized perturbations across the noise, masking, and boundary-corruption protocols described in Section 4.2. FLOWFORGE remains consistently more stable than the baselines throughout the suite. 23 [PITH_FULL_IMAGE:figures/full_fig_p023_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Qualitative results on CFDBench-Dam. Visualization of the fluid phase fraction and vorticity error. While FNO tends to over-smooth tight vortex cores, FLOWFORGE accurately captures the sharp interface and rotational dynamics of the water column as it impacts the obstacle. R1 R2 Global Reference (Ground Truth) Zoom-in R1 Ground Truth Ours FNO U-Net Zoom-in R2 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Absolute Error (RMS… view at source ↗
Figure 11
Figure 11. Figure 11: Qualitative results on BubbleML FB-VelScale. [PITH_FULL_IMAGE:figures/full_fig_p025_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: Robustness to Global Block Masking (Cavity, [PITH_FULL_IMAGE:figures/full_fig_p026_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Robustness to Localized Gaussian Noise (Cylinder, [PITH_FULL_IMAGE:figures/full_fig_p026_13.png] view at source ↗
read the original abstract

Deep learning surrogates for CFD flow-field prediction often rely on large, complex models, which can be slow and fragile when data are noisy or incomplete. We introduce FlowForge, a staged local rollout engine that predicts future flow fields by compiling a locality-preserving update schedule and executing it with a shared lightweight local predictor. Rather than producing the next frame in a single global pass, FlowForge rewrites spatial sites stage by stage so that each update conditions only on bounded local context exposed by earlier stages. This compile-execute design aligns inference with short-range physical dependence, keeps latency predictable, and limits error amplification from global mixing. Across PDEBench, CFDBench, and BubbleML, FlowForge matches or improves upon strong baselines in pointwise accuracy, delivers consistently better robustness to noise and missing observations, and maintains stable multi-step rollout behavior while reducing per-step latency.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces FlowForge, a staged local rollout engine for CFD flow-field prediction. It compiles a locality-preserving update schedule from spatial sites and executes it with a shared lightweight local predictor, so that each update conditions only on bounded local context from prior stages. The central claim is that this design matches or exceeds strong baselines on PDEBench, CFDBench, and BubbleML in pointwise accuracy, delivers better robustness to noise and missing observations, maintains stable multi-step rollouts, and reduces per-step latency.

Significance. If the performance and stability claims hold under detailed scrutiny, the compile-execute locality approach could offer a practical alternative to large global models for surrogate CFD, with advantages in predictable latency and reduced error amplification. The emphasis on aligning inference with short-range physical dependence is a clear conceptual strength.

major comments (3)
  1. [Abstract and §3] Abstract and §3 (Method): the central claim that a shared lightweight local predictor plus compile-time locality schedule produces stable multi-step rollouts requires that all relevant non-local dependencies (pressure projection in incompressible NS, long-range correlations in turbulence) are captured inside the bounded neighborhood at each stage. No description is given of how the predictor is trained (e.g., with PDE residuals, divergence penalties, or conservation constraints), so pointwise accuracy on clean data does not guarantee global consistency.
  2. [§4] §4 (Experiments): the reported improvements in robustness and multi-step stability across PDEBench, CFDBench, and BubbleML are stated without quantitative tables, ablation studies on neighborhood size, error accumulation plots, or analysis of invariant drift (e.g., divergence error over time). This absence makes it impossible to verify that local staged updates do not introduce artifacts invisible to pointwise MSE.
  3. [§3.2] §3.2 (Update schedule): the claim that the locality-preserving schedule limits error amplification is load-bearing for the latency and stability advantages, yet no formal argument or empirical test is supplied showing that the schedule preserves global physical consistency when the local predictor is applied repeatedly.
minor comments (2)
  1. [Abstract] Abstract: the phrase 'compile-execute design' is used without a one-sentence illustration of how the schedule is generated from spatial sites.
  2. [§3] Notation: the distinction between 'stage' and 'step' in the rollout description should be defined explicitly on first use to avoid ambiguity in multi-step experiments.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the detailed and constructive review. The comments highlight important areas for clarification and additional validation, which we will address through targeted revisions to strengthen the presentation of the method, experiments, and supporting analysis.

read point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3 (Method): the central claim that a shared lightweight local predictor plus compile-time locality schedule produces stable multi-step rollouts requires that all relevant non-local dependencies (pressure projection in incompressible NS, long-range correlations in turbulence) are captured inside the bounded neighborhood at each stage. No description is given of how the predictor is trained (e.g., with PDE residuals, divergence penalties, or conservation constraints), so pointwise accuracy on clean data does not guarantee global consistency.

    Authors: We agree that the manuscript would benefit from expanded details on training and information propagation. The local predictor is trained via supervised regression on ground-truth local patches extracted from the simulation datasets using an MSE loss; no explicit PDE residuals or conservation penalties are included, as the approach is purely data-driven. The staged schedule propagates information across the domain by design, as each stage exposes updated values to neighboring sites in subsequent stages, allowing non-local effects (such as pressure influences) to be captured through sequential local conditioning. In the revision we will add a dedicated paragraph in §3 describing the training procedure, loss function, and data preparation, together with a short discussion and illustrative diagram showing how multi-stage updates enable effective long-range dependence without global mixing. revision: yes

  2. Referee: [§4] §4 (Experiments): the reported improvements in robustness and multi-step stability across PDEBench, CFDBench, and BubbleML are stated without quantitative tables, ablation studies on neighborhood size, error accumulation plots, or analysis of invariant drift (e.g., divergence error over time). This absence makes it impossible to verify that local staged updates do not introduce artifacts invisible to pointwise MSE.

    Authors: We acknowledge that the current experimental section would be strengthened by more granular quantitative evidence. While comparative pointwise results are presented, we will augment §4 with full numerical tables reporting all metrics, neighborhood-size ablations, multi-step error-accumulation curves, and invariant-drift analysis (divergence error for incompressible cases and mass conservation for BubbleML). These additions will directly address the concern about potential hidden artifacts and allow readers to assess robustness and stability beyond aggregate accuracy figures. revision: yes

  3. Referee: [§3.2] §3.2 (Update schedule): the claim that the locality-preserving schedule limits error amplification is load-bearing for the latency and stability advantages, yet no formal argument or empirical test is supplied showing that the schedule preserves global physical consistency when the local predictor is applied repeatedly.

    Authors: The schedule is constructed via a compile-time graph traversal that guarantees each local update depends only on a bounded, previously updated neighborhood; this structural property inherently restricts immediate error spread. A general formal proof of global consistency would require strong assumptions on predictor accuracy that do not hold for learned models, so we do not attempt one. Instead, we will add empirical validation in the revision by reporting global consistency metrics (divergence drift, total variation) over long rollouts and direct comparisons of error-amplification rates against global baselines. These tests will be placed in §4 alongside the existing stability results. revision: partial

Circularity Check

0 steps flagged

No circularity: FlowForge is introduced as an independent architectural design validated empirically on benchmarks.

full rationale

The paper presents FlowForge as a new staged local rollout engine that compiles a locality-preserving update schedule executed by a shared lightweight local predictor. Performance claims (matching or improving baselines on PDEBench, CFDBench, BubbleML in accuracy, robustness, and latency) rest on empirical comparisons rather than any derived quantity that reduces to fitted inputs or self-citations by construction. No equations, self-definitional steps, or load-bearing self-citations appear in the provided description; the central premise is a proposed engineering schedule whose correctness is tested externally against global baselines and physical datasets. This is the common case of a self-contained architectural contribution.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review provides no explicit free parameters, axioms, or invented entities; the method is described at a high level without derivation details.

pith-pipeline@v0.9.0 · 5453 in / 941 out tokens · 57741 ms · 2026-05-10T03:11:39.036057+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

35 extracted references · 16 canonical work pages

  1. [1]

    Brunton, Bernd R

    Steven L. Brunton, Bernd R. Noack, and Petros Koumoutsakos. Machine learning for fluid mechanics.Annual Review of Fluid Mechanics, 52:477–508, 2020. doi:10.1146/annurev-fluid-010719-060214. URL https://www.annualreviews.org/doi/10.1146/annurev-fluid-010719-060214

  2. [2]

    Uncertainty quantification in particle image velocimetry.Measurement Science and Technology, 30(9):092001, 2019

    Andrea Sciacchitano. Uncertainty quantification in particle image velocimetry.Measurement Science and Technology, 30(9):092001, 2019. doi:10.1088/1361-6501/ab1db8. URL https://iopscience.iop.org/article/10.1088/1361-6501/ab1db8

  3. [3]

    Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators.Nature Machine Intelligence, 3(3), 218–229 (2021)

    Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators.Nature Machine Intelligence, 3:218–229, 2021. doi:10.1038/s42256-021-00302-5. URL https://www.nature.com/articles/s42256-021-00302-5

  4. [4]

    U -Net: Convolutional Networks for Biomedical Image Segmentation,

    Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. InMedical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing. URL https://link.springer.com/chapter/10.1007/978-3-319-24574-4_28. 11 FLOWFORGE: Staged Local Rollout...

  5. [5]

    Fourier neural operator for parametric partial differential equations

    Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=c8P9NQVtmnO

  6. [6]

    Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations

    M. Raissi, P. Perdikaris, and G.E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.Journal of Computational Physics, 378:686–707, 2019. ISSN 0021-9991. doi:https://doi.org/10.1016/j.jcp.2018.10.045. URL https://www.sciencedirect.com/science...

  7. [7]

    Dpot: auto-regressive denoising operator transformer for large-scale pde pre-training

    Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, and Jun Zhu. Dpot: auto-regressive denoising operator transformer for large-scale pde pre-training. In Proceedings of the 41st International Conference on Machine Learning, ICML’24, 2024. doi:10.5555/3692070.3692773. URLhttps://dl.acm.org/doi/10.5555...

  8. [8]

    PDEformer: Towards a foundation model for one-dimensional partial differential equations

    Zhanhong Ye, Xiang Huang, Leheng Chen, Hongsheng Liu, Zidong Wang, and Bin Dong. PDEformer: Towards a foundation model for one-dimensional partial differential equations. InICLR 2024 Workshop on AI4DifferentialEquations In Science, 2024. URLhttps://openreview.net/forum?id=GLDMCwdhTK

  9. [9]

    Unisolver: PDE-conditional transformers are universal PDE solvers, 2025

    Hang Zhou, Yuezhou Ma, Haixu Wu, Haowen Wang, and Mingsheng Long. Unisolver: PDE-conditional transformers are universal PDE solvers, 2025. URLhttps://openreview.net/forum?id=f3xXPDCh8Q

  10. [10]

    Knlgge, David R

    David M. Knlgge, David R. Wessels, Riccardo Valperga, Samuele Papa, Jan-Jakob Sonke, Efstratios Gavves, and Erik J. Bekkers. Space-time continuous pde forecasting using equivariant neural fields. InProceedings of the 38th International Conference on Neural Information Processing Systems, NIPS ’24, 2024. ISBN 9798331314385. doi:10.5555/3737916.3740354. URL...

  11. [11]

    Worrall, and Max Welling

    Johannes Brandstetter, Daniel E. Worrall, and Max Welling. Message passing neural PDE solvers. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=vSix3HPYKSU

  12. [12]

    Towards stability of autoregressive neural operators.Transactions on Machine Learning Research, 2023

    Michael McCabe, Peter Harrington, Shashank Subramanian, and Jed Brown. Towards stability of autoregressive neural operators.Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=RFfUUtKYOG

  13. [13]

    Multi-scale time-stepping of partial differential equations with transformers.Computer Methods in Applied Mechanics and Engineering, 426:116983, 2024

    AmirPouya Hemmasian and Amir Barati Farimani. Multi-scale time-stepping of partial differential equations with transformers.Computer Methods in Applied Mechanics and Engineering, 426:116983, 2024. ISSN 0045-7825. doi:https://doi.org/10.1016/j.cma.2024.116983. URL https://www.sciencedirect.com/science/article/pii/S0045782524002391

  14. [14]

    On the benefits of memory for modeling time-dependent PDEs

    Ricardo Buitrago, Tanya Marwah, Albert Gu, and Andrej Risteski. On the benefits of memory for modeling time-dependent PDEs. InThe Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=o9kqa5K3tB

  15. [15]

    Temporal neural operator for modeling time-dependent physical phenomena.Scientific Reports, 15(1):32791, 2025

    Waleed Diab and Mohammed Al Kobaisi. Temporal neural operator for modeling time-dependent physical phenomena.Scientific Reports, 15(1):32791, 2025. doi:10.1038/s41598-025-16922-5. URL https://doi.org/10.1038/s41598-025-16922-5

  16. [16]

    Multipole graph neural operator for parametric partial differential equations

    Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. InProceedings of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, 2020. URL https://dl.acm.org/doi/10.5555/3495724.3496291

  17. [17]

    Fourier neural operator with learned deformations for pdes on general geometries.J

    Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. Fourier neural operator with learned deformations for pdes on general geometries.J. Mach. Learn. Res., January 2023. URL https://dl.acm.org/doi/10.5555/3648699.3649087

  18. [18]

    Geometry-informed neural operator for large-scale 3d pdes

    Zongyi Li, Nikola Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, and Animashree Anandkumar. Geometry-informed neural operator for large-scale 3d pdes. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors,Advances in Neural Information...

  19. [19]

    Convolutional neural operators for robust and accurate learning of PDEs

    Bogdan Raonic, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, and Emmanuel de Bezenac. Convolutional neural operators for robust and accurate learning of PDEs. InThirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=MtekhXRP4h. 12 FLOWFORGE: Stag...

  20. [20]

    Factorized fourier neural operators

    Alasdair Tran, Alexander Mathews, Lexing Xie, and Cheng Soon Ong. Factorized fourier neural operators. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=tmIiMPl4IPa

  21. [21]

    U-NO: U-shaped neural operators

    Md Ashiqur Rahman, Zachary E Ross, and Kamyar Azizzadenesheli. U-NO: U-shaped neural operators. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=j3oQF9coJd

  22. [22]

    Gnot: a general neural operator transformer for operator learning

    Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, and Jun Zhu. Gnot: a general neural operator transformer for operator learning. InProceedings of the 40th International Conference on Machine Learning, ICML’23, 2023. URL https://dl.acm.org/doi/10.5555/3618408.3618917

  23. [23]

    Improved operator learning by orthogonal attention, 2024

    Zipeng Xiao, Zhongkai Hao, Bokai Lin, Zhijie Deng, and Hang Su. Improved operator learning by orthogonal attention, 2024. URLhttps://openreview.net/forum?id=mt5NPvTp5a

  24. [24]

    HT-net: Hierarchical transformer based operator learning model for multiscale PDEs, 2023

    Xinliang Liu, Bo Xu, and Lei Zhang. HT-net: Hierarchical transformer based operator learning model for multiscale PDEs, 2023. URLhttps://openreview.net/forum?id=UY5zS0OsK2e

  25. [25]

    Transolver: a fast transformer solver for pdes on general geometries

    Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, and Mingsheng Long. Transolver: a fast transformer solver for pdes on general geometries. InProceedings of the 41st International Conference on Machine Learning, ICML’24, 2024. URLhttps://dl.acm.org/doi/10.5555/3692070.3694270

  26. [26]

    Hamlet: graph transformer neural operator for partial differential equations

    Andrey Bryutkin, Jiahao Huang, Zhongying Deng, Guang Yang, Carola-Bibiane Schönlieb, and Angelica Aviles-Rivero. Hamlet: graph transformer neural operator for partial differential equations. InProceedings of the 41st International Conference on Machine Learning, ICML’24, 2024. URL https://dl.acm.org/doi/10.5555/3692070.3692256

  27. [27]

    Poseidon: Efficient foundation models for PDEs

    Maximilian Herde, Bogdan Raonic, Tobias Rohner, Roger Käppeli, Roberto Molinaro, Emmanuel de Bezenac, and Siddhartha Mishra. Poseidon: Efficient foundation models for PDEs. InThe Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URLhttps://openreview.net/forum?id=JC1VKK3UXk

  28. [28]

    Operator learning with neural fields: Tackling PDEs on general geometries

    Louis Serrano, Lise Le Boudec, Armand Kassaï Koupaï, Thomas X Wang, Yuan Yin, Jean-Noël Vittaut, and patrick gallinari. Operator learning with neural fields: Tackling PDEs on general geometries. InThirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=4jEjq5nhg1

  29. [29]

    PDEBench Datasets, 2022

    Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, and Mathias Niepert. PDEBench Datasets, 2022. URLhttps://doi.org/10.18419/darus-2986

  30. [30]

    Towards multi-spatiotemporal-scale generalized PDE modeling

    Jayesh K Gupta and Johannes Brandstetter. Towards multi-spatiotemporal-scale generalized PDE modeling. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=dPSTDbGtBY

  31. [31]

    Cfdbench: A large-scale benchmark for machine learning methods in fluid dynamics.arXiv preprint arXiv:2310.05963, 2023

    Luo Yining, Chen Yingfa, and Zhang Zhen. Cfdbench: A large-scale benchmark for machine learning methods in fluid dynamics. 2023. URLhttps://arxiv.org/abs/2310.05963

  32. [32]

    BubbleML: A multi-physics dataset and benchmarks for machine learning

    Sheikh Md Shakeel Hassan, Arthur Feeney, Akash Dhruv, Jihoon Kim, Youngjoon Suh, Jaiyoung Ryu, Yoonjin Won, and Aparna Chandramowlishwaran. BubbleML: A multi-physics dataset and benchmarks for machine learning. InAdvances in Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=0Wmglu8zak

  33. [33]

    APEBench: A benchmark for autoregressive neural emulators of PDEs

    Felix Koehler, Simon Niedermayr, rüdiger westermann, and Nils Thuerey. APEBench: A benchmark for autoregressive neural emulators of PDEs. InThe Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. URLhttps://openreview.net/forum?id=iWc0qE116u. 13 FLOWFORGE: Staged Local Rollout for Flow Field PredictionARXI...

  34. [34]

    Warm-up Phase (t <30 ):In the initial steps, locality-preserving orders (Outward Spiral, Raster Scan, Hilbert Curve) maintain a consistent dt = 1 (adjacent pixels), whereas Random ordering exhibits high variance and larger average distances (dt >1), as early points are scattered sparsely across the grid

  35. [35]

    Consequently, the dt for Random ordering rapidly decays and converges to the same baseline (dt ≈1 ), providing similar context vectors as the locality-preserving orders

    Stable Phase (t≥30 ):As the grid becomes populated, the probability of a randomly selected target location falling within the immediate neighborhood of an existing point increases distinctly. Consequently, the dt for Random ordering rapidly decays and converges to the same baseline (dt ≈1 ), providing similar context vectors as the locality-preserving ord...