pith. machine review for the scientific record

arxiv: 2604.05700 · v1 · submitted 2026-04-07 · 💻 cs.LG

Recognition: no theorem link

Optimal-Transport-Guided Functional Flow Matching for Turbulent Field Generation in Hilbert Space


Pith reviewed 2026-05-10 19:33 UTC · model grok-4.3

classification 💻 cs.LG
keywords turbulent flow generation · flow matching · optimal transport · Hilbert space · generative models · Navier-Stokes equations · chaotic dynamics · functional data

The pith

Defining flow matching in Hilbert space with optimal transport paths generates turbulent fields that match high-order statistics.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a generative framework called FOT-CFM that operates directly on physical fields viewed as elements of infinite-dimensional Hilbert space instead of fixed grids. Optimal transport is used to build deterministic straight-line paths connecting noise measures to data measures, allowing the model to learn probability dynamics in function space. This setup supports simulation-free training and faster sampling while targeting the multi-scale intermittency found in chaotic flows. The approach is tested on systems including the Navier-Stokes equations, Kolmogorov flow, and Hasegawa-Wakatani equations, where it reproduces energy spectra and higher-order statistics more closely than grid-based baselines.
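The mechanism described above can be illustrated on discretized toy fields: minibatch optimal-transport pairing of noise and data samples followed by straight-line conditional paths. This is a hedged sketch of the general OT-CFM recipe, not the paper's actual Hilbert-space implementation; all names and data here are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
batch, dim = 8, 64                        # 8 toy fields, 64 grid points each
x0 = rng.normal(size=(batch, dim))        # "noise" samples
x1 = rng.normal(size=(batch, dim)) + 2.0  # "data" samples (toy stand-in)

# Minibatch OT: re-pair noise and data to minimize squared transport cost,
# which straightens the induced probability paths.
cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)
x1 = x1[cols]

# Straight-line conditional path and its constant target velocity.
t = rng.uniform(size=(batch, 1))
x_t = (1 - t) * x0 + t * x1
v_target = x1 - x0            # regression target for a model v_theta(x_t, t)

# A real model would minimize ||v_theta(x_t, t) - v_target||^2; here we only
# check path consistency: following v_target from x0 for unit time hits x1.
assert np.allclose(x0 + v_target, x1)
```

Because the target velocity along each straight path is constant in time, training reduces to simple regression with no forward simulation, which is where the "simulation-free training and faster sampling" claim comes from.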

Core claim

FOT-CFM treats physical fields as elements of an infinite-dimensional Hilbert space and learns resolution-invariant generative dynamics directly at the level of probability measures by integrating optimal transport to construct deterministic straight-line probability paths between noise and data measures.

What carries the argument

Functional Optimal Transport Conditional Flow Matching (FOT-CFM), which defines conditional flow matching in Hilbert space and uses optimal transport to form straight probability paths for functional data generation.

If this is right

  • Enables training without simulating the forward dynamics at each step.
  • Speeds up generation of new field samples compared to iterative grid-based methods.
  • Produces fields whose statistics align more closely with reference turbulent data on tested chaotic systems.
  • Remains invariant to spatial resolution because operations occur in function space rather than on discrete pixels.
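The resolution-invariance point can be made tangible with a toy function-space sample: parameterize a 1D field by a few Fourier coefficients and evaluate it on grids of different sizes. Illustrative code only, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
coef = rng.normal(size=K) + 1j * rng.normal(size=K)  # the "function": 4 Fourier modes

def field(n):
    """Evaluate the same function-space sample on an n-point grid."""
    x = np.arange(n) / n
    return sum(2 * (c * np.exp(2j * np.pi * k * x)).real
               for k, c in enumerate(coef, start=1))

# One underlying function, many discretizations: the coarse grid is
# exactly the fine grid subsampled.
print(np.allclose(field(32), field(64)[::2]))   # True
```

A generative model whose samples are objects like `coef` (functions) rather than pixel arrays inherits this property: resolution becomes a rendering choice made at evaluation time.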

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The framework could be extended by adding explicit conservation laws or dissipation terms to improve stability over long time horizons.
  • Similar Hilbert-space constructions might apply to generating other continuous functional data such as electromagnetic fields or density distributions in biology.
  • Hybrid models could combine this generative approach with traditional numerical solvers to correct drift in data-driven predictions.

Load-bearing premise

Deterministic straight-line paths from optimal transport in Hilbert space can capture the chaotic multi-scale intermittency of turbulence without additional physics-based constraints.

What would settle it

If samples drawn from the trained model fail to reproduce the energy spectra or high-order moments observed in an independent set of turbulent flow realizations from the Navier-Stokes or similar equations, the central claim would not hold.
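That falsification test is straightforward to operationalize: for a 2D periodic field, both the isotropic energy spectrum and a high-order increment statistic (flatness) can be computed with FFTs. A minimal sketch, assuming periodic fields on a square grid; the Gaussian field here is a stand-in for model samples, and a Gaussian field should show flatness near 3 while an intermittent turbulent field shows more.

```python
import numpy as np

def energy_spectrum(u):
    """Shell-summed isotropic energy spectrum of a 2D periodic field."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / u.size                 # normalized Fourier coefficients
    e2d = 0.5 * np.abs(uh) ** 2                  # energy per mode
    k = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kmag = np.hypot(k[:, None], k[None, :])
    shells = np.rint(kmag).astype(int)           # bin modes into integer shells
    return np.bincount(shells.ravel(), weights=e2d.ravel())[: n // 2]

def flatness(u, r=1):
    """Flatness of increments at lag r: ~3 for Gaussian fields,
    larger when the field is intermittent."""
    du = np.roll(u, -r, axis=0) - u
    return (du ** 4).mean() / (du ** 2).mean() ** 2

rng = np.random.default_rng(1)
g = rng.normal(size=(64, 64))                    # Gaussian stand-in for a sample
spec = energy_spectrum(g)
print(spec[:4], round(flatness(g), 2))
```

Comparing such spectra and moments between model samples and held-out solver realizations is exactly the kind of independent check that would settle the central claim either way.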

Figures

Figures reproduced from arXiv: 2604.05700 by Li Kunpeng, Lim Kyungtak, Ong Yew Soon, Qu Zhisong, Virginie Grandgirard, Wan Chenguang, Xavier Garbet, Yu Hua.

Figure 1. FOT-CFM in infinite-dimensional function space, OT-aligned operator training. [figures/full_fig_p006_1.png]
Figure 2. Comparison of generative models on the 2D Kolmogorov Flow. Each row presents … [figures/full_fig_p023_2.png]
Figure 3. Comparison of generative models on the 2D Navier-Stokes equations. Each row … [figures/full_fig_p025_3.png]
Figure 4. Comparison of generative models on the density … [figures/full_fig_p029_4.png]
Figure 5. Comparison of generative models on the potential … [figures/full_fig_p030_5.png]
Original abstract

High-fidelity modeling of turbulent flows requires capturing complex spatiotemporal dynamics and multi-scale intermittency, posing a fundamental challenge for traditional knowledge-based systems. While deep generative models, such as diffusion models and Flow Matching, have shown promising performance, they are fundamentally constrained by their discrete, pixel-based nature. This limitation restricts their applicability in turbulence computing, where data inherently exists in a functional form. To address this gap, we propose Functional Optimal Transport Conditional Flow Matching (FOT-CFM), a generative framework defined directly in infinite-dimensional function space. Unlike conventional approaches defined on fixed grids, FOT-CFM treats physical fields as elements of an infinite-dimensional Hilbert space, and learns resolution-invariant generative dynamics directly at the level of probability measures. By integrating Optimal Transport (OT) theory, we construct deterministic, straight-line probability paths between noise and data measures in Hilbert space. This formulation enables simulation-free training and significantly accelerates the sampling process. We rigorously evaluate the proposed system on a diverse suite of chaotic dynamical systems, including the Navier-Stokes equations, Kolmogorov Flow, and Hasegawa-Wakatani equations, all of which exhibit rich multi-scale turbulent structures. Experimental results demonstrate that FOT-CFM achieves superior fidelity in reproducing high-order turbulent statistics and energy spectra compared to state-of-the-art baselines.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces Functional Optimal Transport Conditional Flow Matching (FOT-CFM), a generative framework operating directly in infinite-dimensional Hilbert space for synthesizing turbulent fields. It constructs deterministic straight-line probability paths via optimal transport between noise and data measures to enable simulation-free training and resolution-invariant sampling. The approach is tested on the Navier-Stokes equations, Kolmogorov flow, and Hasegawa-Wakatani equations, with the central claim that it achieves superior fidelity in reproducing high-order turbulent statistics and energy spectra relative to state-of-the-art baselines.

Significance. If the empirical results are robust, the work offers a meaningful advance in generative modeling for functional scientific data by moving beyond grid-based discretizations. The combination of flow matching with OT-induced straight paths in Hilbert space provides an efficient, measure-theoretic route to resolution-independent generation, which could benefit turbulence simulation and related chaotic systems. The multi-system evaluation is a strength.

major comments (2)
  1. [Methods (FOT-CFM objective)] Methods section describing the FOT-CFM objective: the conditional flow-matching loss is defined solely via the OT-induced straight paths without explicit terms enforcing the divergence-free constraint (for incompressible NS) or the nonlinear advection/dissipation operators of the underlying PDEs. This is load-bearing for the claim of faithful high-order statistics, as the learned vector field on the probability path may not implicitly respect these structures.
  2. [Results (high-order statistics)] Results section on high-order statistics and energy spectra: the reported superiority lacks ablations isolating the contribution of the OT guidance versus standard functional CFM, and no quantitative tables with error bars or statistical significance tests are referenced to support the fidelity gains on intermittent structures.
minor comments (1)
  1. [Abstract] Abstract: the phrase 'rigorously evaluate' is used without naming the specific baselines or metrics, which should be clarified for precision.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their positive assessment of the significance of the work and for the constructive major comments. We address each point below and have revised the manuscript to incorporate clarifications and additional analyses where appropriate.

Point-by-point responses
  1. Referee: [Methods (FOT-CFM objective)] Methods section describing the FOT-CFM objective: the conditional flow-matching loss is defined solely via the OT-induced straight paths without explicit terms enforcing the divergence-free constraint (for incompressible NS) or the nonlinear advection/dissipation operators of the underlying PDEs. This is load-bearing for the claim of faithful high-order statistics, as the learned vector field on the probability path may not implicitly respect these structures.

    Authors: We appreciate the referee highlighting this aspect of the formulation. FOT-CFM is a data-driven generative model that learns the pushforward map between noise and data measures in Hilbert space; the training data are drawn from solutions of the target PDEs and therefore already satisfy the relevant constraints (e.g., divergence-free fields for incompressible Navier-Stokes). Consequently, samples drawn from the learned measure reproduce the physical structures in a distributional sense, which is corroborated by the superior high-order statistics reported across all three systems. In the revised manuscript we have added a dedicated paragraph in the Methods section that explicitly discusses this implicit enforcement via measure matching and outlines possible future extensions that could incorporate physics-informed residuals into the objective. revision: yes

  2. Referee: [Results (high-order statistics)] Results section on high-order statistics and energy spectra: the reported superiority lacks ablations isolating the contribution of the OT guidance versus standard functional CFM, and no quantitative tables with error bars or statistical significance tests are referenced to support the fidelity gains on intermittent structures.

    Authors: We agree that an explicit ablation isolating the OT component and more rigorous quantitative reporting would strengthen the results. The revised manuscript now includes a new ablation subsection that compares FOT-CFM directly against a standard functional conditional flow-matching baseline (identical architecture and training protocol but without OT-guided paths). The ablation demonstrates that the OT component is responsible for the observed gains in high-order statistics. We have also added tables that report mean errors together with standard deviations computed over five independent random seeds, as well as p-values from paired t-tests confirming that the improvements are statistically significant. revision: yes
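The reporting protocol the rebuttal describes can be sketched as follows; the per-seed error values are hypothetical placeholders, and only the statistical machinery (a paired t-test over matched seeds via `scipy.stats.ttest_rel`) is real.

```python
import numpy as np
from scipy import stats

# Hypothetical per-seed spectral-error values for two models on the same
# five seeds (placeholders, not numbers from the paper).
err_fotcfm   = np.array([0.041, 0.038, 0.044, 0.040, 0.039])
err_baseline = np.array([0.063, 0.058, 0.061, 0.066, 0.060])

t, p = stats.ttest_rel(err_fotcfm, err_baseline)  # paired across seeds
print(f"FOT-CFM : {err_fotcfm.mean():.3f} ± {err_fotcfm.std(ddof=1):.3f}")
print(f"baseline: {err_baseline.mean():.3f} ± {err_baseline.std(ddof=1):.3f}")
print(f"paired t = {t:.2f}, p = {p:.4g}")
```

Pairing by seed matters here: with only five runs, the paired test removes between-seed variance that an unpaired comparison would have to absorb.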

Circularity Check

0 steps flagged

No circularity: FOT-CFM is a new construction from standard OT and flow-matching primitives

Full rationale

The paper defines FOT-CFM directly in Hilbert space by combining established Optimal Transport (for straight probability paths) with conditional flow matching; the abstract and described framework present this as an independent synthesis rather than a re-derivation of its own outputs. No equations, claims, or experimental results are shown to reduce by construction to fitted parameters, self-citations, or renamed inputs. Evaluations on Navier-Stokes, Kolmogorov, and Hasegawa-Wakatani systems are treated as external benchmarks. The derivation chain remains self-contained with independent content.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract provides no explicit free parameters, axioms, or invented entities; the central claim rests on the unstated assumption that Hilbert-space probability measures are sufficient to represent turbulent dynamics.

pith-pipeline@v0.9.0 · 5553 in / 1204 out tokens · 34994 ms · 2026-05-10T19:33:55.976148+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

54 extracted references · 23 canonical work pages · 6 internal anchors

  1. [1] S. B. Pope, Turbulent flows, Measurement Science and Technology 12 (11) (2001) 2020–2021

  2. [2] S. Hussain, P. H. Oosthuizen, A. Kalendar, Evaluation of various turbulence models for the prediction of the airflow and temperature distributions in atria, Energy and Buildings 48 (2012) 18–28

  3. [3] G. Conway, Turbulence measurements in fusion plasmas, Plasma Physics and Controlled Fusion 50 (12) (2008) 124026

  4. [4] F. Fouladi, P. Henshaw, D. S.-K. Ting, S. Ray, Wind turbulence impact on solar energy harvesting, Heat Transfer Engineering 41 (5) (2020) 407–417

  5. [5] F. Z. Wang, I. Animasaun, T. Muhammad, S. Okoya, Recent advancements in fluid dynamics: drag reduction, lift generation, computational fluid dynamics, turbulence modelling, and multiphase flow, Arabian Journal for Science and Engineering 49 (8) (2024) 10237–10249

  6. [6] C. Drygala, B. Winhart, F. di Mare, H. Gottschalk, Generative modeling of turbulence, Physics of Fluids 34 (3) (2022)

  7. [7] C. Drygala, E. Ross, F. di Mare, H. Gottschalk, Comparison of generative learning methods for turbulence modeling, arXiv preprint arXiv:2411.16417 (2024)

  8. [8] S. Kim, S. Moon, Y. Lim, S.-M. Choi, S.-K. Ko, Multi-modal recommender system using text-to-image generative models and adaptive learning, Expert Systems with Applications 296 (2026) 129086

  9. [9] P. Dhariwal, A. Nichol, Diffusion models beat GANs on image synthesis, Advances in Neural Information Processing Systems 34 (2021) 8780–8794

  10. [10] M. Kang, J.-Y. Zhu, R. Zhang, J. Park, E. Shechtman, S. Paris, T. Park, Scaling up GANs for text-to-image synthesis, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 10124–10134

  11. [11] J. Gao, T. Shen, Z. Wang, W. Chen, K. Yin, D. Li, O. Litany, Z. Gojcic, S. Fidler, Get3d: A generative model of high quality 3d textured shapes learned from images, Advances in Neural Information Processing Systems 35 (2022) 31841–31854

  12. [12] P. Achlioptas, O. Diamanti, I. Mitliagkas, L. Guibas, Learning representations and generative models for 3d point clouds, in: International Conference on Machine Learning, PMLR, 2018, pp. 40–49

  13. [13] M. Zhao, W. Wang, R. Zhang, H. Jia, Q. Chen, Tia2v: Video generation conditioned on triple modalities of text–image–audio, Expert Systems with Applications 268 (2025) 126278

  14. [14] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, K. Kavukcuoglu, WaveNet: A generative model for raw audio, arXiv preprint arXiv:1609.03499 (2016)

  15. [15] S. Vasquez, M. Lewis, MelNet: A generative model for audio in the frequency domain, arXiv preprint arXiv:1906.01083 (2019)

  16. [16] J. Ho, T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, D. J. Fleet, Video diffusion models, in: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh (Eds.), Advances in Neural Information Processing Systems, Vol. 35, Curran Associates, Inc., 2022, pp. 8633–8646

  17. [17] N. Aldausari, A. Sowmya, N. Marcus, G. Mohammadi, Video generative adversarial networks: A review, ACM Comput. Surv. 55 (2) (Jan. 2022). doi:10.1145/3487891

  18. [18] V. Kumar, D. Sinha, Synthetic attack data generation model applying generative adversarial network for intrusion detection, Computers & Security 125 (2023) 103054. doi:10.1016/j.cose.2022.103054

  19. [19] F. Alwahedi, A. Aldhaheri, M. A. Ferrag, A. Battah, N. Tihanyi, Machine learning techniques for IoT security: Current research and future vision with generative AI and large language models, Internet of Things and Cyber-Physical Systems 4 (2024) 167–185. doi:10.1016/j.iotcps.2023.12.003

  20. [20] S. Nam, Y. Kim, S. J. Kim, Text-adaptive generative adversarial networks: Manipulating images with natural language, in: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Eds.), Advances in Neural Information Processing Systems, Vol. 31, Curran Associates, Inc., 2018

  21. [21] C. Dong, Y. Li, H. Gong, M. Chen, J. Li, Y. Shen, M. Yang, A survey of natural language generation, ACM Comput. Surv. 55 (8) (Dec. 2022). doi:10.1145/3554727

  22. [22] N. Anand, P. Huang, Generative modeling for protein structures, in: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Eds.), Advances in Neural Information Processing Systems, Vol. 31, Curran Associates, Inc., 2018

  23. [23] J. Ingraham, V. Garg, R. Barzilay, T. Jaakkola, Generative models for graph-based protein design, in: H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, R. Garnett (Eds.), Advances in Neural Information Processing Systems, Vol. 32, Curran Associates, Inc., 2019

  24. [24] J. Chen, F. Zhu, Y. Han, C. Chen, Fast prediction of complicated temperature field using conditional multi-attention generative adversarial networks (CMAGAN), Expert Systems with Applications 186 (2021) 115727

  25. [25] Y. Liu, M. Yang, P. Jiang, CGAN-driven intelligent generative design of vehicle exterior shape, Expert Systems with Applications 274 (2025) 127066

  26. [26] Y. Chen, L. Lin, H. Ruan, Y. Chen, S. Zhong, L. Zu, Hydraulic response enhancement in brake valve anomaly monitoring: an integrated hardware-in-the-loop and cyclic generative adversarial network, Expert Systems with Applications (2026) 131905

  27. [27] Y. Yang, A. F. Gao, J. C. Castellanos, Z. E. Ross, K. Azizzadenesheli, R. W. Clayton, Seismic wave propagation and inversion with neural operators (2021). arXiv:2108.05421. URL https://arxiv.org/abs/2108.05421

  28. [28] G. Wen, Z. Li, Q. Long, K. Azizzadenesheli, A. Anandkumar, S. M. Benson, Real-time high-resolution CO2 geological storage prediction using nested Fourier neural operators, Energy Environ. Sci. 16 (2023) 1732–1741. doi:10.1039/D2EE04204E

  29. [29] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM 65 (1) (2021) 99–106

  30. [30] J. J. Park, P. Florence, J. Straub, R. Newcombe, S. Lovegrove, DeepSDF: Learning continuous signed distance functions for shape representation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 165–174

  31. [31] E. Dupont, H. Kim, S. Eslami, D. Rezende, D. Rosenbaum, From data to functa: Your data point is a function and you can treat it like one, arXiv preprint arXiv:2201.12204 (2022)

  32. [32] Z. Li, Y. Sun, G. Turk, B. Zhu, Functional mean flow in Hilbert space, arXiv preprint arXiv:2511.12898 (2025)

  33. [33] J. Zhang, C. Scott, Flow straight and fast in Hilbert space: Functional rectified flow, arXiv preprint arXiv:2509.10384 (2025)

  34. [34] J. H. Lim, N. B. Kovachki, R. Baptista, C. Beckham, K. Azizzadenesheli, J. Kossaifi, V. Voleti, J. Song, K. Kreis, J. Kautz, et al., Score-based diffusion models in function space, Journal of Machine Learning Research 26 (158) (2025) 1–62

  35. [35] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, B. Poole, Score-based generative modeling through stochastic differential equations, arXiv preprint arXiv:2011.13456 (2020)

  36. [36] Y. Lipman, R. T. Chen, H. Ben-Hamu, M. Nickel, M. Le, Flow matching for generative modeling, arXiv preprint arXiv:2210.02747 (2022)

  37. [37] G. Kerrigan, G. Migliorini, P. Smyth, Functional flow matching, arXiv preprint arXiv:2305.17209 (2023)

  38. [38] C. Villani, et al., Optimal transport: old and new, Vol. 338, Springer, 2008

  39. [39] J.-D. Benamou, Y. Brenier, A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem, Numerische Mathematik 84 (3) (2000) 375–393

  40. [40] R. J. McCann, A convexity principle for interacting gases, Advances in Mathematics 128 (1) (1997) 153–179

  41. [41] B. Zhang, P. Wonka, Functional diffusion, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 4723–4732

  42. [42] G. Kerrigan, J. Ley, P. Smyth, Diffusion generative models in infinite dimensions, arXiv preprint arXiv:2212.00886 (2022)

  43. [43] Z. Li, M. Liu-Schiaffini, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Learning chaotic dynamics in dissipative systems, Advances in Neural Information Processing Systems 35 (2022) 16768–16781

  44. [44] M. A. Rahman, M. A. Florez, A. Anandkumar, Z. E. Ross, K. Azizzadenesheli, Generative adversarial neural operators, arXiv preprint arXiv:2205.03017 (2022)

  45. [45] J. Castagna, F. Schiavello, L. Zanisi, J. Williams, StyleGAN as an AI deconvolution operator for large eddy simulations of turbulent plasma equations in BOUT++, Physics of Plasmas 31 (3) (2024)

  46. [46] R. Greif, F. Jenko, N. Thuerey, Physics-preserving AI-accelerated simulations of plasma turbulence, arXiv preprint arXiv:2309.16400 (2023)

  47. [47] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Fourier neural operator for parametric partial differential equations, arXiv preprint arXiv:2010.08895 (2020)

  48. [48] Gyselax, TOKAM2D: GitHub repository, https://github.com/gyselax/tokam2d, accessed: 30 June 2025 (2024)

  49. [49] P. Ghendrih, Y. Asahi, E. Caschera, G. Dif-Pradalier, P. Donnel, X. Garbet, C. Gillot, V. Grandgirard, G. Latu, Y. Sarazin, et al., Generation and dynamics of SOL corrugated profiles, Journal of Physics: Conference Series 1125 (1) (2018) 012011. doi:10.1088/1742-6596/1125/1/012011

  50. [50] P. Ghendrih, G. Dif-Pradalier, O. Panico, Y. Sarazin, H. Bufferand, G. Ciraolo, P. Donnel, N. Fedorczak, X. Garbet, V. Grandgirard, et al., Role of avalanche transport in competing drift wave and interchange turbulence, Journal of Physics: Conference Series 2397 (1) (2022) 012018. doi:10.1088/1742-6596/2397/1/012018

  51. [51] N. Kovachki, Z. Li, B. Liu, K. Azizzadenesheli, K. Bhattacharya, A. Stuart, A. Anandkumar, Neural operator: Learning maps between function spaces with applications to PDEs, Journal of Machine Learning Research 24 (89) (2023) 1–97

  52. [52] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, PyTorch: An imperative style, high-performance deep learning library, version 2.2.1 (2019). URL https://pytorch.org

  53. [53] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014)

  54. [54] D. Hendrycks, K. Gimpel, Gaussian error linear units (GELUs), arXiv preprint arXiv:1606.08415 (2016)