Pith · machine review for the scientific record

arxiv: 2605.13106 · v1 · submitted 2026-05-13 · 🧮 math.NA · cs.NA

Recognition: no theorem link

Hypernetwork-Conditioned WENO5 Conservative-Form CNNs for One-Dimensional Conservation Laws


Pith reviewed 2026-05-14 18:46 UTC · model grok-4.3

classification 🧮 math.NA cs.NA
keywords hypernetwork · WENO scheme · conservation laws · convolutional neural network · finite volume method · hyperbolic PDE · data-driven discretization · numerical conservation

The pith

A hypernetwork conditions a CNN to predict WENO weights from coarse initial data and mesh information while keeping the conservative finite-volume update.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops Hyper-CFCNN, where a hypernetwork takes coarse initial condition descriptors, mesh spacing, and layout to generate parameters for a target network that computes the nonlinear weights in a fifth-order WENO finite-volume scheme. This construction retains the standard polynomial reconstruction and conservative flux-difference update, so the discretization can adapt across different spatial resolutions and initial conditions without retraining. Numerical tests on Burgers equations, shallow-water systems, and the Shu-Osher Euler problem show accuracy comparable to classical WENO5 together with near machine-precision conservation on fine meshes in the known-flux case. A flux-learning variant replaces the analytical flux with a compact learned network and still remains stable outside the training distribution.
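The conditioning pattern can be made concrete with a minimal sketch. The sizes, layer count, and the linear hypernetwork below are hypothetical stand-ins, not the paper's actual architecture: a hypernetwork maps problem metadata to the full parameter vector of a small target network, which in turn maps a 5-cell stencil to three convex weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper's actual layer widths are not given here.
META_DIM = 6      # coarse IC descriptors + mesh spacing + layout flag
STENCIL = 5       # WENO5 reconstructs from a 5-cell stencil
HIDDEN = 8

# Hypernetwork: here a single linear map from metadata to every
# parameter of the target network (a 2-layer MLP).
n_params = STENCIL * HIDDEN + HIDDEN + HIDDEN * 3 + 3
H = rng.normal(scale=0.1, size=(n_params, META_DIM))

def target_net_params(meta):
    """Generate the target network's parameters from problem metadata."""
    theta = H @ meta
    i = 0
    W1 = theta[i:i + STENCIL * HIDDEN].reshape(HIDDEN, STENCIL); i += STENCIL * HIDDEN
    b1 = theta[i:i + HIDDEN]; i += HIDDEN
    W2 = theta[i:i + HIDDEN * 3].reshape(3, HIDDEN); i += HIDDEN * 3
    b2 = theta[i:i + 3]
    return W1, b1, W2, b2

def predicted_weights(stencil_values, meta):
    """Target network: map a 5-cell stencil to three convex WENO weights."""
    W1, b1, W2, b2 = target_net_params(meta)
    h = np.tanh(W1 @ stencil_values + b1)
    logits = W2 @ h + b2
    w = np.exp(logits - logits.max())
    return w / w.sum()          # softmax keeps weights positive, summing to 1

meta = rng.normal(size=META_DIM)
w = predicted_weights(rng.normal(size=STENCIL), meta)
```

Because the weights stay convex by construction, the downstream reconstruction and flux-difference update are unchanged regardless of what the hypernetwork emits.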

Core claim

The central claim is that a hypernetwork supplied only with coarse initial-condition descriptors, mesh spacing, and layout can output the parameters of a target network that predicts stable, high-order WENO weights, yielding a conservative discretization whose accuracy and conservation properties match those of classical WENO5 on unseen problems and grids.

What carries the argument

The hypernetwork that generates the parameters of the target network used to predict nonlinear WENO weights on each stencil from problem metadata.
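For reference, the quantity the target network replaces is the classical Jiang–Shu nonlinear weight computation, which blends three candidate stencils according to smoothness indicators:

```python
import numpy as np

def weno5_weights(u, eps=1e-6):
    """Classical Jiang-Shu nonlinear weights on a 5-cell stencil
    (u_{i-2}, ..., u_{i+2}); this is what the target network replaces."""
    b0 = 13/12*(u[0]-2*u[1]+u[2])**2 + 0.25*(u[0]-4*u[1]+3*u[2])**2
    b1 = 13/12*(u[1]-2*u[2]+u[3])**2 + 0.25*(u[1]-u[3])**2
    b2 = 13/12*(u[2]-2*u[3]+u[4])**2 + 0.25*(3*u[2]-4*u[3]+u[4])**2
    d = np.array([0.1, 0.6, 0.3])              # linear (optimal) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    return alpha / alpha.sum()

# On smooth (here: linear) data the nonlinear weights recover the
# linear weights, preserving fifth-order accuracy ...
w_smooth = weno5_weights(np.array([1.0, 1.1, 1.2, 1.3, 1.4]))
# ... while a discontinuity pushes essentially all weight onto the
# one stencil that does not cross it.
w_shock = weno5_weights(np.array([1.0, 1.0, 1.0, 0.0, 0.0]))
```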

If this is right

  • The scheme attains accuracy comparable to classical WENO5 on single- and multi-shock Burgers problems, shallow-water equations, and the Shu-Osher Euler example.
  • Near machine-precision conservation holds in the known-flux setting on fine meshes.
  • Generalization to unseen spatial resolutions and initial conditions occurs without retraining.
  • The flux-learning variant remains stable on meshes outside the training set and exhibits only bounded conservation drift.
  • Multi-step recurrent loss during training reduces long-time error accumulation.
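The last bullet can be sketched abstractly. The one-step solver below is a trivial placeholder, not the paper's learned conservative update; only the loss structure is the point:

```python
import numpy as np

def step(u, dt):
    """Stand-in one-step solver (trivial decay); in the paper this would
    be the learned conservative WENO5 update."""
    return u * (1.0 - dt)

def multistep_loss(u0, reference, dt, n_steps):
    """Recurrent loss: roll the solver forward n_steps times and
    accumulate the error at every intermediate state, so that long-time
    error growth is penalized during training."""
    u, loss = u0, 0.0
    for k in range(n_steps):
        u = step(u, dt)
        loss += np.mean((u - reference[k]) ** 2)
    return loss / n_steps

u0 = np.linspace(1.0, 2.0, 8)
ref = [u0 * (1.0 - 0.1) ** (k + 1) for k in range(5)]
loss = multistep_loss(u0, ref, dt=0.1, n_steps=5)
```

Compared with a one-step loss, gradients here flow through repeated applications of the solver, which is what discourages slow error accumulation.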

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the hypernetwork generalizes reliably from coarse descriptors, the same conditioning idea could be applied to other structure-preserving schemes such as discontinuous Galerkin methods.
  • Embedding the conservative update form inside the learned discretization may allow smaller training sets than purely data-driven approaches.
  • Adding boundary-condition descriptors to the hypernetwork input would test whether the framework extends to non-periodic domains.
  • Verification on two-dimensional problems would determine whether the one-dimensional success scales when the hypernetwork is given appropriate multi-dimensional metadata.

Load-bearing premise

The hypernetwork, given only coarse descriptors of the initial condition plus mesh spacing and layout, can produce target-network parameters that yield stable high-order weights for problems outside the training distribution.

What would settle it

Apply the trained model to a new initial condition on a mesh twice as fine as any mesh seen during training and check whether oscillations appear or the conservation error exceeds machine precision.
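A hypothetical acceptance test in that spirit (function name and tolerances are illustrative, not the paper's): track total variation as a proxy for new oscillations and the drift of the domain integral as the conservation check.

```python
import numpy as np

def settles_it(u_history, dx, tv_tol=1e-8, cons_tol=1e-12):
    """Hypothetical acceptance test for a run on a 2x-finer unseen mesh:
    flag (a) total-variation growth, a proxy for spurious oscillations,
    and (b) drift of the discrete domain integral beyond round-off."""
    tv = [np.abs(np.diff(u)).sum() for u in u_history]
    mass = [dx * u.sum() for u in u_history]
    tv_ok = max(tv) <= tv[0] + tv_tol
    cons_ok = max(abs(m - mass[0]) for m in mass) <= cons_tol
    return tv_ok and cons_ok
```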

Figures

Figures reproduced from arXiv: 2605.13106 by Wei Guo, Xinghui Zhong, Yongsheng Chen.

Figure 1. Schematic of a hypernetwork. [PITH_FULL_IMAGE:figures/full_fig_p004_1.png]
Figure 2. Overall framework of Hyper–CFCNN and its unknown-flux variant Hyper–CFCNN–F. [PITH_FULL_IMAGE:figures/full_fig_p007_2.png]
Figure 3. H–CFCNN and H–CFCNN–F versus the WENO5 reference. [PITH_FULL_IMAGE:figures/full_fig_p014_3.png]
Figure 4. Conservation remainder C(u) versus time for H–CFCNN and H–CFCNN–F on N = 32, 64, 128, 256 for the single-shock Burgers example.
Figure 5. H–CFCNN and H–CFCNN–F versus the reference. [PITH_FULL_IMAGE:figures/full_fig_p014_5.png]
Figure 6. H–CFCNN versus the reference at T = 3 for two out-of-range initial conditions on N = 256 for the single-shock Burgers example.
Figure 7. H–CFCNN and H–CFCNN–F versus the reference. [PITH_FULL_IMAGE:figures/full_fig_p015_7.png]
Figure 8. H–CFCNN and H–CFCNN–F versus the reference for the multi-shock Burgers example. [PITH_FULL_IMAGE:figures/full_fig_p017_8.png]
Figure 9. Conservation remainder C(u) versus time for H–CFCNN and H–CFCNN–F on N = 32, 64, 128, 256 for the multi-shock Burgers example.
Figure 10. H–CFCNN and H–CFCNN–F versus the reference. [PITH_FULL_IMAGE:figures/full_fig_p018_10.png]
Figure 11. H–CFCNN and H–CFCNN–F versus the reference. [PITH_FULL_IMAGE:figures/full_fig_p019_11.png]
Figure 12. H–CFCNN and H–CFCNN–F versus the WENO5 reference for the shallow-water example. [PITH_FULL_IMAGE:figures/full_fig_p020_12.png]
Figure 13. Conservation remainders C(h) and C(hv) for the shallow-water example on N = 64, 128, 256.
Figure 14. H–CFCNN and H–CFCNN–F versus the reference for the shallow-water example. [PITH_FULL_IMAGE:figures/full_fig_p021_14.png]
Figure 15. H–CFCNN and H–CFCNN–F versus the WENO5 reference for the Euler example. [PITH_FULL_IMAGE:figures/full_fig_p023_15.png]
Figure 16. Conservation remainders C(ρ) and C(E) for H–CFCNN and H–CFCNN–F on N = 64, 128, 256 for the Euler example. [PITH_FULL_IMAGE:figures/full_fig_p024_16.png]
Figure 17. H–CFCNN versus the reference for the Euler example on the unseen mesh. [PITH_FULL_IMAGE:figures/full_fig_p024_17.png]
read the original abstract

We study a conservative data-driven discretization for one-dimensional hyperbolic conservation laws based on the classical fifth-order WENO finite-volume scheme and a hypernetwork architecture. In the proposed Hyper-WENO5 Conservative-Form Convolutional Neural Network (Hyper-CFCNN), a lightweight target network predicts the nonlinear WENO weights on each stencil, while a hypernetwork generates the target-network parameters from problem metadata, including the mesh spacing, mesh layout, and coarse descriptors of the initial condition. The construction preserves the standard polynomial reconstruction and conservative flux-difference update of WENO, which enables adaptation across problem instances and spatial resolutions without retraining. We also consider an unknown-flux variant, Hyper-CFCNN-F, in which a compact FluxNet is used in place of the analytical flux inside the numerical flux function while retaining a conservative update form. To improve long-time prediction quality, training uses a multi-step recurrent loss that penalizes error accumulation over successive time advances. Numerical experiments on one-dimensional test problems, including single- and multi-shock Burgers equations, the shallow-water system, and the Shu-Osher Euler example, show that Hyper-CFCNN attains accuracy comparable to classical WENO5, achieves near machine-precision conservation in the known-flux setting on fine meshes, and generalizes to unseen spatial resolutions and initial conditions without retraining. The flux-learning variant remains stable on meshes outside the training set and exhibits bounded conservation drift. These results show that hypernetwork-conditioned conservative WENO discretizations provide an effective framework for adaptive high-order learning of nonlinear conservation laws with either known or unknown fluxes.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces Hyper-CFCNN, a hypernetwork-conditioned CNN that learns to predict the nonlinear weights in a fifth-order WENO finite-volume scheme for 1D conservation laws while preserving the conservative flux-difference form. A hypernetwork generates the parameters of a target network from mesh spacing, layout, and coarse initial condition descriptors. An unknown-flux variant replaces the analytical flux with a learned FluxNet. Training employs a multi-step loss, and experiments on Burgers, shallow water, and Euler equations show accuracy comparable to classical WENO5, near machine-precision conservation, and generalization to new resolutions and initial conditions without retraining.

Significance. If the generalization claims hold, the approach offers a significant advance in data-driven discretizations by maintaining conservation properties by construction and enabling adaptation across problem instances via hypernetworks. This could be valuable for high-order methods in scenarios with varying meshes or partially unknown physics, building on the strengths of both classical WENO and neural network flexibility.

major comments (3)
  1. [Abstract and hypernetwork description] The generalization to unseen spatial resolutions and initial conditions without retraining is a central claim, but the precise definition and dimensionality of the 'coarse descriptors of the initial condition' fed to the hypernetwork are not specified. This is load-bearing: if these descriptors lack local regularity information, the generated weights may not ensure stability or accuracy on fine-scale features absent from training.
  2. [Numerical Experiments] The abstract states near machine-precision conservation on fine meshes for the known-flux case; to support this, the experiments section should include explicit verification of the discrete conservation error (e.g., total mass or momentum drift over time) for the specific test problems, beyond qualitative statements.
  3. [Flux-learning variant] For Hyper-CFCNN-F, the manuscript claims the conservative update form is retained; a detailed explanation or equation showing how the learned FluxNet output enters the flux-difference update (ensuring the telescoping property) is needed to confirm that conservation is not violated.
minor comments (2)
  1. [Abstract] The acronym 'Hyper-CFCNN' is used, but its full expansion, 'Hypernetwork-Conditioned WENO5 Conservative-Form CNN', should be stated clearly on first use.
  2. [Throughout] Some notation for the target network parameters and hypernetwork outputs could be clarified with a diagram or explicit equations to aid reproducibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments and the recommendation for major revision. We address each major point below with clarifications and revisions to strengthen the manuscript.

read point-by-point responses
  1. Referee: [Abstract and hypernetwork description] The generalization to unseen spatial resolutions and initial conditions without retraining is a central claim, but the precise definition and dimensionality of the 'coarse descriptors of the initial condition' fed to the hypernetwork are not specified. This is load-bearing because if these descriptors lack local regularity information, the generated weights may not ensure stability or accuracy on fine-scale features absent from training.

    Authors: We agree that precise specification is essential. The coarse descriptors are the cell-averaged initial-condition values on a uniformly coarsened mesh (coarsening factor 4 relative to the target mesh), concatenated with the target mesh spacing Δx and a layout flag (periodic or non-periodic). For M coarse cells the hypernetwork input dimension is therefore M+2. This supplies local regularity information at a scale sufficient for stable weight generation while enabling resolution generalization. In the revised manuscript we will expand Section 3.2 with the exact definition, input dimensionality, and a short justification of the coarsening choice. revision: partial
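Under the specification given in this (simulated) response, the hypernetwork input could be assembled as follows; this is a sketch of the described M+2 layout, not code from the paper:

```python
import numpy as np

def hypernet_input(u0_fine, dx, periodic, factor=4):
    """Assemble the metadata vector described above: cell averages of the
    initial condition on a mesh coarsened by `factor`, concatenated with
    the target mesh spacing and a layout flag (total dimension M + 2)."""
    coarse = u0_fine.reshape(-1, factor).mean(axis=1)   # M coarse cells
    layout = 1.0 if periodic else 0.0
    return np.concatenate([coarse, [dx, layout]])

# 64 fine cells with coarsening factor 4 -> M = 16, input dimension 18.
meta = hypernet_input(np.linspace(0.0, 1.0, 64), dx=2 * np.pi / 64, periodic=True)
```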

  2. Referee: [Numerical Experiments] The abstract states near machine-precision conservation on fine meshes for the known-flux case; however, to support this, the experiments section should include explicit verification of the discrete conservation error (e.g., total mass or momentum drift over time) for the specific test problems, beyond qualitative statements.

    Authors: We concur that explicit quantitative verification strengthens the claim. We will add to the Numerical Experiments section (and the associated figures) plots of the time evolution of the global conservation error, defined as the absolute deviation of the domain-integrated conserved quantities from their initial values, for every test problem. These plots will confirm that the error remains at machine-precision levels (O(10^{-15})) throughout the simulations for the known-flux Hyper-CFCNN. revision: yes

  3. Referee: [Flux-learning variant] For Hyper-CFCNN-F, the manuscript claims the conservative update form is retained; a detailed explanation or equation showing how the learned FluxNet output is incorporated into the flux-difference update (ensuring telescoping property) is needed to confirm no violation of conservation.

    Authors: We thank the referee for noting the need for explicit detail. FluxNet replaces the analytic flux function f(u) inside the numerical-flux evaluation; the WENO reconstruction and interface flux computation proceed exactly as in classical WENO5, after which the update is performed in the standard conservative difference form u_i^{n+1} = u_i^n - λ (F_{i+1/2} - F_{i-1/2}), where F denotes the WENO numerical flux that now uses FluxNet outputs. Because the update remains strictly a telescoping flux difference, global conservation is preserved by construction irrespective of the internal flux approximation. We will insert a clarifying equation and paragraph in Section 3.3. revision: yes
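The telescoping argument is easy to verify numerically: with a periodic flux-difference update, the discrete total is conserved to round-off no matter what values the interface fluxes take, which is exactly why an internal FluxNet cannot break global conservation. A minimal sketch:

```python
import numpy as np

def conservation_drift(u0, fluxes, lam):
    """Advance with the conservative update u_i -= lam * (F_{i+1/2} - F_{i-1/2})
    on a periodic mesh and return the drift of the discrete total.
    The interior fluxes telescope, so the drift stays at round-off
    regardless of how the fluxes were produced (analytic or learned)."""
    u = u0.copy()
    total0 = u.sum()
    for F in fluxes:                   # F[i] plays the role of F_{i+1/2}
        u -= lam * (F - np.roll(F, 1)) # periodic telescoping difference
    return abs(u.sum() - total0)
```

Even with arbitrary (here: random) fluxes standing in for FluxNet output, the total is preserved by construction.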

Circularity Check

0 steps flagged

No significant circularity; the architecture and claims rest on explicit construction plus external numerical validation.

full rationale

The paper defines Hyper-CFCNN by retaining the classical WENO5 polynomial reconstruction and conservative flux-difference update exactly, while the hypernetwork supplies only the nonlinear weights from metadata. Conservation and the update form are therefore preserved by the retained finite-volume structure rather than derived from the learned parameters. Generalization and accuracy claims are established solely through numerical experiments on Burgers, shallow-water, and Euler problems (including out-of-distribution resolutions and initial conditions), not by redefining fitted quantities as predictions or by self-citation chains. No step in the provided text reduces the target result to its own inputs by construction.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 2 invented entities

The method rests on the standard properties of finite-volume conservation and WENO polynomial reconstruction; the hypernetwork itself introduces learned parameters that are not free parameters in the classical sense but are fitted during training.

free parameters (1)
  • hypernetwork weights
    All parameters of the hypernetwork and target network are fitted to training trajectories; they are not derived from first principles.
axioms (2)
  • standard math The finite-volume update in flux-difference form exactly conserves the discrete total when boundary fluxes are accounted for.
    Invoked throughout the construction to guarantee conservation even after the weights are replaced by the neural network.
  • domain assumption WENO5 polynomial reconstruction on each stencil remains valid when the nonlinear weights are supplied by a neural network rather than the classical smoothness indicators.
    The paper keeps the standard reconstruction polynomials and only replaces the weight computation.
invented entities (2)
  • Hyper-CFCNN target network no independent evidence
    purpose: Predicts nonlinear WENO weights from local stencil data
    New compact network whose parameters are generated by the hypernetwork; no independent evidence outside the learned weights is provided.
  • Hypernetwork conditioner no independent evidence
    purpose: Maps mesh metadata and coarse initial-condition descriptors to target-network parameters
    Core architectural invention; its generalization ability is demonstrated only empirically.

pith-pipeline@v0.9.0 · 5594 in / 1627 out tokens · 38133 ms · 2026-05-14T18:46:29.400376+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

82 extracted references · 3 canonical work pages
