Pith · machine review for the scientific record

arxiv: 2604.25824 · v1 · submitted 2026-04-28 · ⚛️ physics.flu-dyn


Discovery of Sparse Invariant Subgrid-Scale Closures via Dissipation-Controlled Training for Large Eddy Simulation on Anisotropic Grids

Samantha Friess, Aviral Prakash, John A. Evans


Pith reviewed 2026-05-07 14:53 UTC · model grok-4.3

classification ⚛️ physics.flu-dyn
keywords: large eddy simulation · subgrid-scale modeling · sparse regression · turbulence closure · anisotropic grids · invariant models · dissipation control

The pith

Sparse regression identifies simple polynomial subgrid-scale models that match neural network accuracy for large eddy simulation while training and running at lower cost.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a framework that uses sparse regression to find explicit polynomial expressions for subgrid-scale stresses in turbulence simulations. These models start from a tensor basis scaled by polynomials in invariant scalars, which automatically respects rotational invariance, and they add a penalty that controls energy dissipation during training. The approach works on grids with unequal spacing in different directions. When tested on standard turbulence cases and a separated flow, the resulting models often perform as well as a more complex neural network but remain simple enough to evaluate quickly.

Core claim

A sparsity-promoting regression procedure applied to a minimal tensor basis scaled by truncated polynomial expansions of invariants yields explicit subgrid-scale stress models that, after dissipation-controlled training on idealized turbulence data, produce a priori and a posteriori predictions comparable to an invariance-preserving neural network on both isotropic and anisotropic grids, yet require far less computational effort to train and deploy.
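The tensor-basis construction the claim rests on can be sketched as follows. The specific basis tensors and invariants below are an illustrative Pope-style reduced set, not necessarily the paper's exact minimal basis:

```python
import numpy as np

def tensor_basis(grad_u):
    """Build a small symmetric tensor basis and invariant scalars from a
    resolved velocity-gradient tensor (3x3). Illustrative reduced set; the
    paper's minimal basis and invariant list may differ."""
    S = 0.5 * (grad_u + grad_u.T)   # resolved strain-rate tensor
    W = 0.5 * (grad_u - grad_u.T)   # resolved rotation-rate tensor
    I = np.eye(3)
    T1 = S
    T2 = S @ S - (np.trace(S @ S) / 3.0) * I   # deviatoric part of S^2
    T3 = S @ W - W @ S
    # Scalar invariants: unchanged under rotation of the coordinate frame
    invariants = np.array([
        np.trace(S @ S),
        np.trace(W @ W),
        np.trace(S @ S @ S),
    ])
    return [T1, T2, T3], invariants
```

Rotating the velocity gradient as `Q @ grad_u @ Q.T` leaves the invariants unchanged, which is the rotational-invariance property the framework enforces by construction rather than by training.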

What carries the argument

Sparsity-promoting regression on a tensor basis scaled by polynomial expansions of invariant scalars, with an explicit dissipation constraint added to the loss function.
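A minimal sketch of what sparsity-promoting regression with a dissipation penalty can look like, assuming a flattened feature matrix and an ISTA-style proximal-gradient solver. The variable names, solver, and penalty form here are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def fit_sparse_closure(X, y, eps_feat, eps_true, lam_sparse=1e-2,
                       lam_diss=1e-1, n_iter=500, lr=None):
    """Hypothetical sparsity-promoting fit with a dissipation penalty.
    X: (n_samples, n_features) flattened tensor-basis features,
    y: (n_samples,) target SGS stress components,
    eps_feat: (n_features,) contribution of each candidate term to the mean
    SGS dissipation, so that eps_model = eps_feat @ c.
    Solved by ISTA: gradient steps on the smooth loss, then soft
    thresholding to promote sparse coefficients."""
    n, p = X.shape
    c = np.zeros(p)
    # Step size from the Lipschitz constant of the smooth (quadratic) part
    H = 2.0 * (X.T @ X / n + lam_diss * np.outer(eps_feat, eps_feat))
    if lr is None:
        lr = 1.0 / np.linalg.norm(H, 2)
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ c - y) / n                      # data misfit
        grad += 2.0 * lam_diss * (eps_feat @ c - eps_true) * eps_feat  # dissipation penalty
        c = c - lr * grad
        c = np.sign(c) * np.maximum(np.abs(c) - lr * lam_sparse, 0.0)  # L1 prox
    return c
```

The soft-thresholding step zeroes out weakly supported candidate terms, which is what yields explicit low-order polynomial closures rather than a dense fit.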

If this is right

  • The discovered polynomial models can be implemented directly in existing large-eddy simulation codes without neural-network libraries.
  • Because the models are explicit and low-order, their computational cost per time step remains negligible compared with neural-network evaluations.
  • The same regression procedure can be repeated for different filter anisotropies to produce grid-specific closures without retraining from scratch.
  • Enforcing dissipation control during training reduces the risk of numerical blow-up in long simulations.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The method could be extended to include additional invariants or higher-order terms if the current models show systematic bias in particular flow regimes.
  • Because the models are sparse and polynomial, symbolic regression or model-reduction techniques could further simplify them for use in very large-scale simulations.
  • The dissipation constraint might serve as a general regularizer for other data-driven closures beyond the present tensor-basis approach.

Load-bearing premise

Models fitted to a small set of idealized turbulence cases with dissipation control will remain stable and accurate when applied to more complex flows such as separated boundary layers.

What would settle it

Run the trained sparse models on a high-Reynolds-number separated flow benchmark not used in training and check whether they produce stable solutions with mean-flow errors below those of the neural-network baseline.

Figures

Figures reproduced from arXiv: 2604.25824 by Samantha Friess, Aviral Prakash, John A. Evans.

Figure 1. Mapping between an anisotropic grid in the physical filtering space and an isotropic grid in the …
Figure 2. (a) Correlation coefficient (CC) and (b) relative error in mean energy flux (REF) versus filter …
Figure 3. Three-dimensional energy spectra of forced HIT at …
Figure 4. Temporal evolution of the resolved dissipation for Taylor-Green vortex flow at …
Figure 5. Streamwise velocity profiles for turbulent channel flow at …
Figure 6. Three-dimensional energy spectra of forced HIT at …
Figure 7. Temporal evolution of the resolved dissipation for Taylor-Green vortex flow at …
Figure 8. Streamwise velocity profiles for turbulent channel flow at …
Figure 9. Distribution of nondimensional grid spacing …
Figure 10. Averaged wall shear stress and pressure distributions along the lower wall of the periodic hill at …
Figure 11. Streamwise velocity profiles for the periodic hill at …
Figure 12. Vertical velocity profiles for the periodic hill at …
Original abstract

Neural networks offer highly expressive turbulence closures, yet their complexity obscures the physical mechanisms they aim to model, and their computational cost can limit their tractability. To address these limitations, we introduce a sparsity-promoting subgrid-scale (SGS) stress closure modeling framework that identifies explicit polynomial model forms using sparse regression. Candidate models are constructed through scaling a minimal tensor basis by a truncated polynomial expansion of invariant scalars, thereby enforcing fundamental invariance properties while regulating the highest order of admissible terms. Arbitrary filter anisotropy is incorporated to enable consistent representation of turbulent structures across computational grids with anisotropic scales and resolutions. We also explicitly constrain SGS energy dissipation during training to improve functional performance and promote numerical stability. The framework is trained on a small dataset of idealized turbulence and evaluated through a series of a priori and a posteriori tests. Sensitivity studies examine the effects of variations in model order and optimization penalties for regularization and dissipation across a range of canonical flow configurations. We also evaluate on a separated flow benchmark to assess generalizability to a more complex turbulent regime. In many cases, the sparse regression closures achieve predictive accuracy comparable to an invariance-preserving neural network while retaining markedly simpler parametric forms. Moreover, we demonstrate that the sparse closures can be trained and evaluated at a fraction of the cost of the neural network model.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated authors' rebuttal, circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces a sparsity-promoting framework for subgrid-scale (SGS) stress closures in large-eddy simulation that constructs explicit polynomial models from a minimal tensor basis scaled by truncated expansions of invariant scalars, with explicit incorporation of arbitrary filter anisotropy and an energy-dissipation penalty in the training objective. Models are obtained via sparse regression on a small idealized-turbulence dataset, then assessed through a priori tests, sensitivity studies on polynomial order and penalty weights, and a posteriori LES on both canonical configurations and a separated-flow benchmark; the central claim is that the resulting sparse closures attain predictive accuracy comparable to an invariance-preserving neural-network baseline while remaining markedly simpler and cheaper to train and evaluate.

Significance. If the reported accuracy and stability transfer robustly, the work supplies a practical route to interpretable, low-cost SGS models that respect tensor invariance and explicit dissipation control, thereby addressing two persistent obstacles to deploying data-driven closures in production LES. The combination of a physically constrained basis with sparsity and a dissipation regularizer is a clear methodological advance over purely black-box approaches.

major comments (3)
  1. [Abstract] Abstract and a-posteriori evaluation section: the claim that sparse-regression closures achieve 'predictive accuracy comparable to an invariance-preserving neural network' is unsupported by any quantitative error metrics (e.g., integrated SGS stress error norms, kinetic-energy spectra, or mean-flow discrepancies) for either the a priori or a posteriori tests; without these numbers the comparability assertion cannot be evaluated.
  2. [a posteriori tests on separated flow] Separated-flow benchmark results: the generalization argument rests on a single a-posteriori test case, yet no stability diagnostics (e.g., time-to-blow-up, maximum SGS dissipation rate, or behavior under grid refinement) are reported when local strain-rate invariants depart from the training distribution; the dissipation constraint is therefore not shown to prevent unphysical stresses outside the training manifold.
  3. [sensitivity studies] Sensitivity studies: the effects of polynomial truncation order and the two penalty weights are examined, but the manuscript does not quantify how these free parameters affect out-of-sample stability or accuracy on the separated-flow case, leaving the robustness of the selected sparse models unverified.
minor comments (2)
  1. The abstract states that the framework is 'trained on a small dataset of idealized turbulence' but supplies neither the number of samples nor the precise flow configurations, which hinders reproducibility.
  2. Notation for the invariant scalars and the dissipation penalty term should be defined explicitly at first use rather than only in the methods section.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed review. The comments highlight important opportunities to strengthen the quantitative support for our claims and to better demonstrate robustness. We address each major comment below and will incorporate revisions accordingly.

Point-by-point responses
  1. Referee: [Abstract] Abstract and a-posteriori evaluation section: the claim that sparse-regression closures achieve 'predictive accuracy comparable to an invariance-preserving neural network' is unsupported by any quantitative error metrics (e.g., integrated SGS stress error norms, kinetic-energy spectra, or mean-flow discrepancies) for either the a priori or a posteriori tests; without these numbers the comparability assertion cannot be evaluated.

    Authors: We agree that explicit quantitative error metrics would make the comparability claim more rigorous and directly evaluable. While the manuscript presents comparative plots of SGS stresses, spectra, and mean profiles, these do not include tabulated norms or integrated discrepancies. In the revised version we will add integrated SGS stress error norms (L2 and L-infinity), kinetic-energy spectra differences, and mean-flow discrepancy measures for both the a priori tests and the a posteriori LES cases, allowing direct numerical comparison with the neural-network baseline. revision: yes
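The promised quantitative comparison could take a form like the following sketch, assuming relative integrated norms and a pointwise correlation coefficient; these are common a priori metric choices, not necessarily the ones the revision will adopt:

```python
import numpy as np

def a_priori_metrics(tau_model, tau_true):
    """Hypothetical a priori error metrics for an SGS stress prediction:
    relative L2 and L-infinity error norms plus the correlation
    coefficient between modeled and filtered-DNS stresses."""
    err = tau_model - tau_true
    rel_l2 = np.sqrt(np.mean(err**2)) / np.sqrt(np.mean(tau_true**2))
    rel_linf = np.max(np.abs(err)) / np.max(np.abs(tau_true))
    cc = np.corrcoef(tau_model.ravel(), tau_true.ravel())[0, 1]
    return {"rel_L2": rel_l2, "rel_Linf": rel_linf, "CC": cc}
```

Reporting these three numbers for both the sparse closures and the neural-network baseline, per test case, would make the "comparable accuracy" claim directly checkable.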

  2. Referee: [a posteriori tests on separated flow] Separated-flow benchmark results: the generalization argument rests on a single a-posteriori test case, yet no stability diagnostics (e.g., time-to-blow-up, maximum SGS dissipation rate, or behavior under grid refinement) are reported when local strain-rate invariants depart from the training distribution; the dissipation constraint is therefore not shown to prevent unphysical stresses outside the training manifold.

    Authors: The single separated-flow benchmark is indeed the primary out-of-distribution test presented. We note that all reported a posteriori simulations completed the full integration time without numerical blow-up and that the dissipation penalty was active during training to bound unphysical energy production. However, we did not report explicit stability diagnostics such as time-to-blow-up, peak SGS dissipation rates, or grid-refinement checks on this case. In revision we will add these diagnostics (simulation duration achieved, maximum instantaneous SGS dissipation, and a brief grid-sensitivity note) to demonstrate that the dissipation constraint prevented unphysical stresses outside the training manifold. revision: yes

  3. Referee: [sensitivity studies] Sensitivity studies: the effects of polynomial truncation order and the two penalty weights are examined, but the manuscript does not quantify how these free parameters affect out-of-sample stability or accuracy on the separated-flow case, leaving the robustness of the selected sparse models unverified.

    Authors: The sensitivity studies in the manuscript focus on canonical configurations to isolate the influence of polynomial order and penalty weights. We acknowledge that extending these studies to the separated-flow benchmark would better verify robustness of the selected models. In the revised manuscript we will include a concise sensitivity table or figure for the separated-flow case, reporting how variations in polynomial order and the two penalty weights affect both accuracy (mean-flow and spectra errors) and stability (peak dissipation and simulation completion) on this out-of-sample configuration. revision: yes

Circularity Check

0 steps flagged

No circularity detected; derivation and claims are empirically grounded.

full rationale

The paper constructs candidate SGS models by scaling a minimal tensor basis with truncated invariant polynomials (enforcing invariance by construction) and fits sparse coefficients via regression on idealized turbulence data while adding an explicit dissipation penalty. The central claim of comparable accuracy to an invariance-preserving NN is not a prediction by construction but an empirical outcome from a posteriori tests on a distinct separated-flow benchmark. No load-bearing step equates the reported performance or discovered forms to the training inputs alone; generalization is tested rather than assumed, and no self-citation chain or renaming of known results is invoked to force the result.

Axiom & Free-Parameter Ledger

3 free parameters · 2 axioms · 0 invented entities

The framework adopts a minimal tensor basis and invariance properties from turbulence theory as axioms, and assumes that the polynomial truncation order and the two penalty weights can be chosen to produce stable, accurate models; those three choices are its free parameters.

free parameters (3)
  • polynomial truncation order
    Limits the highest-order terms in the invariant scalar expansion and is selected as part of the model construction.
  • regularization penalty weight
    Controls sparsity in the regression and is varied in sensitivity studies.
  • dissipation constraint weight
    Explicitly penalizes mismatch in SGS energy dissipation during training.
axioms (2)
  • domain assumption SGS stress must be expressible via a minimal invariant tensor basis
    Invoked to enforce fundamental invariance properties by construction.
  • domain assumption Truncated polynomial expansions of invariants suffice to represent SGS behavior
    Used to regulate model complexity while retaining expressivity.
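The truncation order directly controls the size of the candidate library, which is why sparsity is needed at all. A hypothetical counting sketch, using standard multiset counting rather than the paper's exact library construction:

```python
from itertools import combinations_with_replacement

def monomial_count(n_invariants, max_order):
    """Number of candidate polynomial coefficients per basis tensor when
    the invariant expansion is truncated at max_order (illustrative
    accounting, not the paper's exact counting)."""
    count = 0
    for order in range(max_order + 1):
        # monomials of a given total order in n_invariants variables
        count += len(list(combinations_with_replacement(range(n_invariants), order)))
    return count
```

With three invariants, the library per basis tensor grows from 10 terms at order 2 to 35 at order 4, so the regression must discard most candidates to stay interpretable.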

pith-pipeline@v0.9.0 · 5540 in / 1332 out tokens · 153936 ms · 2026-05-07T14:53:47.701255+00:00 · methodology

