pith. machine review for the scientific record.

arxiv: 2604.07746 · v1 · submitted 2026-04-09 · 💻 cs.LG · cs.CE · physics.comp-ph

Recognition: unknown

Towards Rapid Constitutive Model Discovery from Multi-Modal Data: Physics Augmented Finite Element Model Updating (paFEMU)

Govinda Anantha Padmanabha, Jingye Tan, Nikolaos Bouklas, Steven J. Yang


Pith reviewed 2026-05-10 17:39 UTC · model grok-4.3

classification 💻 cs.LG · cs.CE · physics.comp-ph
keywords constitutive modeling · model discovery · finite element method · transfer learning · sparse regression · multi-modal data · physics augmentation · adjoint optimization

The pith

Physics augmented finite element model updating discovers interpretable constitutive models from multi-modal data across similar materials.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces paFEMU to accelerate the discovery of constitutive models for materials by integrating sparse regression techniques with finite element simulations and transfer learning. It combines data from simple mechanical tests, possibly on a different but related material, with full-field measurements like digital image correlation. This approach enforces physical constraints while producing sparse, interpretable models that can be easily integrated into existing simulation software. A sympathetic reader would care because traditional model calibration is time-consuming and data-intensive, and this method promises to speed up the process for classes of materials by reusing data efficiently.

Core claim

paFEMU is a transfer learning framework that augments physics-based finite element model updating with AI-enabled sparse constitutive model discovery, allowing the combination of multi-modal data sources to rapidly identify low-dimensional, interpretable material models.

What carries the argument

The paFEMU method itself: adjoint optimization within finite element models updates the sparse parameters of a constitutive library during transfer learning from multi-modal data.
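
As an editor's illustration of the mechanism (not the paper's code: the 1D linear-elastic bar, the synthetic "measured" field, and every name below are hypothetical stand-ins), the sketch assembles a tiny finite element model, compares its displacement field to full-field measurements, and obtains the gradient of the misfit from a single adjoint solve per iteration:

```python
# Minimal FEMU sketch with an adjoint gradient. Assumptions: a 1D linear-elastic
# bar under a tip load, synthetic "DIC" displacements; illustrative only.
import numpy as np
from scipy.optimize import minimize

n_el = 20                    # elements in the bar
L, A = 1.0, 1.0              # length and cross-section
h = L / n_el                 # element size
f_tip = 1.0                  # applied tip load

def assemble_K(E):
    """Global stiffness for per-element moduli E."""
    K = np.zeros((n_el + 1, n_el + 1))
    ke = A / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e, Ee in enumerate(E):
        K[e:e+2, e:e+2] += Ee * ke
    return K

def solve_u(E):
    """Displacement field with u(0) = 0 and a tip load."""
    K = assemble_K(E)
    f = np.zeros(n_el + 1)
    f[-1] = f_tip
    u = np.zeros(n_el + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # fixed dof eliminated
    return u

E_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n_el))  # "unknown" field
u_meas = solve_u(E_true)                                     # synthetic full-field data

def loss_and_grad(E):
    u = solve_u(E)
    r = u - u_meas
    J = 0.5 * r @ r
    # Adjoint solve: K^T lam = dJ/du (K is symmetric here).
    K = assemble_K(E)
    lam = np.zeros(n_el + 1)
    lam[1:] = np.linalg.solve(K[1:, 1:], r[1:])
    # dJ/dE_e = -lam^T (dK/dE_e) u, element by element.
    ke = A / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    g = np.array([-lam[e:e+2] @ ke @ u[e:e+2] for e in range(n_el)])
    return J, g

res = minimize(loss_and_grad, np.ones(n_el), jac=True, method="L-BFGS-B",
               bounds=[(1e-3, None)] * n_el)
print("max |E - E_true| =", np.max(np.abs(res.x - E_true)))
```

The point of the adjoint step is cost: one extra linear solve yields the full gradient with respect to all element parameters, which is what keeps FE-based updating tractable when the parameter vector is an entire constitutive library.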

If this is right

  • Low-dimensional sparse models integrate seamlessly into standard finite element workflows.
  • Transfer learning reduces the need for extensive full-field data on every new material variant.
  • Sparsification enables better uncertainty quantification and interpretability in discovered models (a minimal sketch follows this list).
  • Multi-modal data fusion improves the robustness of constitutive model calibration.
  • Rapid updating supports iterative design and testing cycles in materials engineering.
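
A minimal sketch of what the sparsification bullet means in practice, assuming an incompressible hyperelastic solid and a small invariant library (the library, data, and penalty below are illustrative, not the paper's): uniaxial nominal stress is linear in the library coefficients, so an L1-penalized fit can recover a one-term Neo-Hookean model that is easy to read and easy to drop into an existing FE material routine.

```python
# Hedged sketch of library-based sparse model discovery (illustrative only).
# W(I1, I2) = sum_k theta_k * phi_k(I1, I2); for incompressible uniaxial tension
# the nominal stress is P = 2 (lam - lam^-2) (dW/dI1 + dW/dI2 / lam),
# which is linear in theta, so sparsification reduces to an L1-penalized fit.
import numpy as np
from sklearn.linear_model import Lasso

lam = np.linspace(1.05, 2.0, 60)        # uniaxial stretches
I1 = lam**2 + 2.0 / lam
I2 = 2.0 * lam + 1.0 / lam**2

# Candidate terms phi_k and their partial derivatives (d/dI1, d/dI2).
terms = [
    ("I1-3",         np.ones_like(lam),  np.zeros_like(lam)),
    ("I2-3",         np.zeros_like(lam), np.ones_like(lam)),
    ("(I1-3)^2",     2.0 * (I1 - 3.0),   np.zeros_like(lam)),
    ("(I1-3)(I2-3)", I2 - 3.0,           I1 - 3.0),
    ("(I2-3)^2",     np.zeros_like(lam), 2.0 * (I2 - 3.0)),
]

kin = 2.0 * (lam - lam**-2)
X = np.stack([kin * (dI1 + dI2 / lam) for _, dI1, dI2 in terms], axis=1)

C10 = 0.5                               # "true" Neo-Hookean coefficient
P_data = 2.0 * C10 * (lam - lam**-2)    # synthetic uniaxial test data

theta = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(X, P_data).coef_
for (name, _, _), c in zip(terms, theta):
    print(f"{name:>13s}: {c:+.4f}")     # ideally ~ +0.5 on I1-3, ~ 0 elsewhere
```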

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar transfer learning could extend to other physics-based simulation domains like fluid dynamics or structural analysis.
  • The approach might allow real-time model adaptation during ongoing experiments.
  • Potential for hybrid human-AI workflows where initial simple tests guide more targeted full-field measurements.

Load-bearing premise

That combining data from potentially distinct materials in the same class through transfer learning does not introduce unacceptable bias or reduce the accuracy of the discovered constitutive model.
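
To make that premise concrete, here is one hedged reading of the transfer step (an editor's toy model, not the paper's implementation, which works through the FE adjoint machinery sketched above): coefficients pretrained on a source material act as an anchor, and only a penalized low-dimensional deviation is fitted to scarce target-material data. Whether the anchor biases the target model is precisely the question the premise raises.

```python
# Toy transfer step: fit a penalized deviation d from source coefficients,
#   min_d ||X (theta_src + d) - y||^2 + beta ||d||^2,
# whose closed-form solution is (X^T X + beta I) d = X^T (y - X theta_src).
# All numbers are hypothetical; the "library" has two terms for brevity.
import numpy as np

rng = np.random.default_rng(0)
lam = np.linspace(1.05, 1.6, 8)            # few target-material observations
kin = 2.0 * (lam - lam**-2)

X = np.stack([kin, kin / lam], axis=1)     # two-term invariant library
theta_src = np.array([0.50, 0.00])         # discovered on the source material
theta_tgt = np.array([0.58, 0.02])         # nearby material in the same class
y = X @ theta_tgt + 0.005 * rng.standard_normal(lam.size)

beta = 1e-2                                # pulls the update toward the source
d = np.linalg.solve(X.T @ X + beta * np.eye(2), X.T @ (y - X @ theta_src))
print("transferred :", theta_src + d)
print("target truth:", theta_tgt)
```

If source and target differ by more than the penalty admits, the update stays biased toward the source model; quantifying that trade-off is exactly what a cross-material sensitivity analysis would have to do.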

What would settle it

Applying the paFEMU method to discover a model for a new material and checking whether its predictions deviate significantly from independent experimental tests relative to a traditionally calibrated model.

Figures

Figures reproduced from arXiv: 2604.07746 by Govinda Anantha Padmanabha, Jingye Tan, Nikolaos Bouklas, Steven J. Yang.

Figure 1: Outline of the paFEMU framework and corresponding multi-modal transfer learning scheme.
Figure 2: Illustration of the proposed physics-augmented neural network framework integrating an input convex neural …
Figure 3: The invariant space showing the convex hull and the selected invariant triplets used for synthetic data …
Figure 4: Sampled principal stress points as functions of invariants for synthetic data generated using the Gent model.
Figure 5: Sampled training data and validation paths in stress–deformation space for constrained uniaxial, biaxial, and …
Figure 6: Discretized 2D domain of the digital image correlation specimen; displacement is fixed at …
Figure 7: Evaluation of the polyconvexity indicator inequalities over training data. Positive values indicate satisfaction …
Figure 8: Generalization performance under constrained uniaxial, equibiaxial, and simple shear tests. The top row …
Figure 9: Validation of the learned strain energy functions along canonical loading paths over an extended range. The …
Figure 10: Uniaxial test for examining performance of the models in a nonlinear solution scheme. Top row: polyconvex; …
Figure 11: Comparison of transfer learning targets to the pretrained models as well as the pretrain truth at the strain …
Figure 12: Loss components for the transfer learning scheme, utilizing the Neo-Hookean synthetic DIC dataset.
Figure 13: Transfer learning to Neo-Hookean DIC dataset: validation of learned models against the analytical Neo-Hookean …
Figure 14: Polyconvexity indicator inequality evaluations for the polyconvex NN (left), reduced ICNN (middle), and …
Figure 15: Transfer to Neo-Hookean target: displacement error at …
Figure 16: Loss components for the transfer learning scheme, utilizing the Ogden synthetic DIC dataset.
Figure 17: Transfer learning to Generalized Ogden DIC dataset: validation of learned models against the analytical …
Figure 18: Polyconvexity indicator inequality evaluations for the polyconvex NN (left), reduced ICNN (middle), and …
Figure 19: Transfer to Generalized Ogden target: displacement error at …
Figure 20: Stress error in the deployment of the transferred model in a 3D torsion simulation with a maximum twist angle …
Original abstract

Recent progress in AI-enabled constitutive modeling has concentrated on moving from a purely data-driven paradigm to the enforcement of physical constraints and mechanistic principles, a concept referred to as physics augmentation. Classical phenomenological approaches rely on selecting a pre-defined model and calibrating its parameters, while machine learning methods often focus on discovery of the model itself. Sparse regression approaches lie in between, where large libraries of pre-defined models are probed during calibration. Sparsification in the aforementioned paradigm, but also in the context of neural network architecture, has been shown to enable interpretability, uncertainty quantification, but also heterogeneous software integration due to the low-dimensional nature of the resulting models. Most works in AI-enabled constitutive modeling have also focused on data from a single source, but in reality, materials modeling workflows can contain data from many different sources (multi-modal data), and also from testing other materials within the same materials class (multi-fidelity data). In this work, we introduce physics augmented finite element model updating (paFEMU), as a transfer learning approach that combines AI-enabled constitutive modeling, sparsification for interpretable model discovery, and finite element-based adjoint optimization utilizing multi-modal data. This is achieved by combining simple mechanical testing data, potentially from a distinct material, with digital image correlation-type full-field data acquisition to ultimately enable rapid constitutive modeling discovery. The simplicity of the sparse representation enables easy integration of neural constitutive models in existing finite element workflows, and also enables low-dimensional updating during transfer learning.

Editorial analysis

A structured set of objections, weighed in public.

A referee report, a simulated authors' rebuttal, a circularity audit, and an axiom ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces physics-augmented finite element model updating (paFEMU) as a transfer-learning framework that fuses AI-enabled constitutive modeling, library-based sparsification for interpretable discovery, and adjoint-based finite-element optimization. It claims that simple mechanical-test data (possibly from a related but distinct material) can be combined with full-field DIC data on the target material to enable rapid, low-dimensional constitutive-model discovery while preserving physical constraints and allowing seamless integration into existing FE workflows.

Significance. If the central transfer-learning step can be shown to remain stable and unbiased under realistic intra-class material variation, the method would offer a practical route to accelerate constitutive-model calibration by exploiting multi-modal and multi-fidelity data sources. The emphasis on sparse, interpretable representations that integrate directly into legacy FE codes is a concrete strength that could improve reproducibility and uncertainty quantification in engineering practice.

major comments (2)
  1. The transfer-learning claim rests on the assumption that auxiliary data from a distinct material within the same class can be used without introducing unacceptable bias into the discovered sparse coefficients or functional form. No derivation, stability bound, or cross-material sensitivity analysis is provided to quantify how modest changes in stiffness or in the form of the strain-energy function propagate through the adjoint update and sparsification step; this is load-bearing for the central claim of rapid discovery.
  2. The abstract states that the low-dimensional update during transfer learning is enabled by the sparse representation, yet no explicit statement of the library construction, regularization path, or convergence criterion for the adjoint optimization is given. Without these details it is impossible to assess whether the discovered model is uniquely determined or whether post-hoc choices in the library could affect the reported fidelity.
minor comments (1)
  1. Notation for the multi-modal data sources and the distinction between “multi-modal” and “multi-fidelity” should be defined once at the beginning of the method section to avoid later ambiguity.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive comments on our manuscript. We provide point-by-point responses to the major comments below and indicate the revisions made to address them.

Point-by-point responses
  1. Referee: The transfer-learning claim rests on the assumption that auxiliary data from a distinct material within the same class can be used without introducing unacceptable bias into the discovered sparse coefficients or functional form. No derivation, stability bound, or cross-material sensitivity analysis is provided to quantify how modest changes in stiffness or in the form of the strain-energy function propagate through the adjoint update and sparsification step; this is load-bearing for the central claim of rapid discovery.

    Authors: We acknowledge the validity of this concern regarding the lack of formal analysis for the transfer-learning step. A full derivation of stability bounds is not provided in the original manuscript, as the focus was on the practical framework and numerical demonstrations. In the revised version, we have included a new subsection on cross-material sensitivity analysis. Using both synthetic and experimental data, we demonstrate the propagation of variations in material parameters (such as stiffness moduli) through the paFEMU pipeline and show that the sparse coefficients exhibit bounded changes for intra-class variations. We also discuss the assumptions under which the transfer remains unbiased. This addition directly addresses the load-bearing aspect of the claim. revision: partial

  2. Referee: The abstract states that the low-dimensional update during transfer learning is enabled by the sparse representation, yet no explicit statement of the library construction, regularization path, or convergence criterion for the adjoint optimization is given. Without these details it is impossible to assess whether the discovered model is uniquely determined or whether post-hoc choices in the library could affect the reported fidelity.

    Authors: We agree that the manuscript would benefit from more explicit details on these aspects to ensure reproducibility and to allow assessment of uniqueness. We have revised the manuscript by adding explicit descriptions of the library construction (the set of candidate terms for the constitutive relations), the regularization path (including how the sparsity parameter is chosen), and the convergence criteria used in the adjoint optimization. Furthermore, we have added a note on the potential influence of library choices and how the sparse regression promotes uniqueness within a given library. These changes are incorporated in the Methods section and the supplementary material. revision: yes
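
For concreteness, a regularization path of the kind this response describes can be sketched in a few lines (hypothetical library matrix and data; scikit-learn's lasso_path stands in for whatever sparsification scheme the paper actually uses). Sweeping the sparsity parameter shows when candidate terms enter the model; the parameter is then typically chosen where validation error plateaus while the active set stays small.

```python
# Sketch of a regularization path over a candidate library (hypothetical data).
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(1)
n_obs, n_terms = 200, 8
X = rng.standard_normal((n_obs, n_terms))      # stand-in library evaluations
theta_true = np.zeros(n_terms)
theta_true[[0, 3]] = [1.0, -0.7]               # only two terms truly active
y = X @ theta_true + 0.05 * rng.standard_normal(n_obs)

alphas, coefs, _ = lasso_path(X, y, alphas=np.logspace(-4, 0, 30))
for a, c in zip(alphas, coefs.T):              # strongest penalty first
    print(f"alpha={a:.1e}  active terms={np.count_nonzero(np.abs(c) > 1e-6)}")
```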

Circularity Check

0 steps flagged

No circularity detected in derivation chain

Full rationale

The paper proposes paFEMU as an integrative framework that combines sparse regression for model discovery, transfer learning across multi-modal/multi-fidelity data, and adjoint-based FE updating. No equations, parameter-fitting steps presented as predictions, or self-citations are shown in the abstract or description that would reduce the central claim to its own inputs by construction. The 'discovery' is explicitly the output of the sparsification process applied to data, not a renamed fit or self-referential derivation. The method description stands as a self-contained proposal without load-bearing circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Ledger inferred from abstract claims only; paper relies on standard domain assumptions about sparse representability and transferability but introduces no new free parameters or entities visible at this level.

axioms (2)
  • domain assumption Constitutive behavior admits a sparse representation from a pre-defined library of basis functions.
    Invoked by the sparsification step for interpretability and integration.
  • ad hoc to paper Multi-modal data from distinct but related materials can be transferred without major fidelity loss.
    Central premise of the transfer learning approach described.

pith-pipeline@v0.9.0 · 5590 in / 1415 out tokens · 113886 ms · 2026-05-10T17:39:47.662155+00:00 · methodology

