Pith · machine review for the scientific record

arxiv: 2605.06246 · v1 · submitted 2026-05-07 · 💻 cs.LG · cs.RO


Structure-Preserving Gaussian Processes Via Discrete Euler-Lagrange Equations

Jan-Hendrik Ewering, Kathrin Flaßkamp, Niklas Wahlström, Thomas B. Schön, Thomas Seel

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 13:23 UTC · model grok-4.3

classification 💻 cs.LG cs.RO
keywords Lagrangian Gaussian processes · discrete Euler-Lagrange · structure preservation · dynamics learning · Gaussian processes · variational methods · robotics

The pith

Gaussian processes conditioned on discrete Euler-Lagrange equations preserve dynamical structure for stable predictions

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops Lagrangian Gaussian Processes for learning dynamical systems from data. The Gaussian process is conditioned through linear operators derived from discrete forced Euler-Lagrange equations and variational discretization schemes. As a result, in the absence of external forces, the models respect the geometric structure of the Lagrange-d'Alembert principle by construction. This built-in preservation prevents unphysical energy drift and supports accurate long-term forecasts. The approach works from position measurements alone, which is useful when velocities are unavailable.

Core claim

Lagrangian Gaussian Processes are constructed such that their conditioning operators derive directly from the discrete forced Euler-Lagrange equations. In the absence of external forces, this ensures that the discrete trajectories satisfy the geometric structure of the underlying continuous dynamics by construction, yielding physically consistent models that support stable long-term predictions from sparse positional observations.
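For orientation: in standard variational-integrator notation (textbook discrete mechanics, not copied verbatim from the paper), the discrete forced Euler-Lagrange equations that such conditioning operators would encode read

```latex
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1})
  + F_d^{+}(q_{k-1}, q_k) + F_d^{-}(q_k, q_{k+1}) = 0,
\qquad k = 1, \dots, N-1,
```

where \(L_d\) is a discrete Lagrangian approximating the action integral over one time step, \(D_i\) denotes the derivative with respect to the \(i\)-th argument, and \(F_d^{\pm}\) are discrete forces obtained from the Lagrange-d'Alembert principle. With \(F_d^{\pm} = 0\) these reduce to the conservative discrete Euler-Lagrange equations, whose flow is symplectic; that is the structure the construction inherits.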

What carries the argument

Linear operators for Gaussian process conditioning, constructed from discrete forced Euler-Lagrange equations via variational discretization schemes; these operators embed the structure preservation into the learning process.
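The mechanism can be illustrated in miniature: a linear operator applied to a Gaussian process is jointly Gaussian with it, so the GP can be conditioned on the operator's output taking prescribed values. The sketch below is a generic illustration (an RBF kernel and a finite-difference matrix standing in for the paper's discrete Euler-Lagrange operators, not the authors' implementation):

```python
import numpy as np

def rbf(x1, x2, ell=0.5):
    # Squared-exponential kernel matrix.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

t = np.linspace(0.0, 1.0, 50)   # time grid
K = rbf(t, t)                   # prior covariance of f on the grid
h = t[1] - t[0]
n = len(t)

# A linear operator as a matrix: a second-order central difference,
# standing in for a discrete Euler-Lagrange operator.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
D /= h ** 2

# Prescribed values for the transformed process D f (a smooth target).
y = np.sin(np.pi * t[1:-1])
noise = 1e-8

# (f, D f) is jointly Gaussian with cross-covariance K D^T and Gram
# matrix D K D^T, so conditioning is the standard GP update.
G = D @ K @ D.T + noise * np.eye(n - 2)
mean = K @ D.T @ np.linalg.solve(G, y)
cov = K - K @ D.T @ np.linalg.solve(G, D @ K)

# The posterior mean satisfies the operator constraint up to the noise level.
rel = np.linalg.norm(D @ mean - y) / np.linalg.norm(y)
print(rel)
```

Note that the training signal enters only through the operator outputs, which is what lets structure constraints be enforced during learning rather than checked afterwards.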

If this is right

  • Dynamics can be learned solely from discrete position snapshots without velocity data
  • Learned models exhibit no artificial energy drift during long-term integration
  • The method provides probabilistic predictions with uncertainty estimates
  • It demonstrates effectiveness on real-world systems such as soft robots with hysteresis

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same discretization-based conditioning could be applied to other physical principles beyond Lagrangian mechanics
  • In robotic control, these models might enable more reliable planning over extended horizons
  • Further analysis could quantify the approximation error introduced by the discretization for different sampling rates

Load-bearing premise

The chosen variational discretization schemes, applied to sequences of position measurements, faithfully capture the continuous-time dynamics, without discretization errors large enough to undermine the structure preservation.
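To make the premise concrete, the following sketch evaluates discrete Euler-Lagrange equations on positions only, using a textbook midpoint variational discretization of an ideal pendulum (an illustration of the principle, not the paper's scheme):

```python
import numpy as np

# Ideal pendulum, L(q, qdot) = 0.5*m*l**2*qdot**2 + m*g*l*cos(q).
m, l, g = 1.0, 1.0, 9.81

def del_residual(q_prev, q, q_next, h):
    # Residual D2 Ld(q_prev, q) + D1 Ld(q, q_next) of the midpoint
    # discrete Lagrangian Ld(a, b) = h * L((a+b)/2, (b-a)/h).
    # Only positions appear; no velocities are needed.
    d2 = m * l**2 * (q - q_prev) / h - 0.5 * h * m * g * l * np.sin(0.5 * (q_prev + q))
    d1 = -m * l**2 * (q_next - q) / h - 0.5 * h * m * g * l * np.sin(0.5 * (q + q_next))
    return d1 + d2

# Roll out the implicit variational integrator by fixed-point iteration;
# two initial positions encode the initial state.
h, steps = 0.01, 500
q = [0.5, 0.5]
for _ in range(steps):
    q_next = 2.0 * q[-1] - q[-2]      # predictor
    for _ in range(30):               # fixed-point corrector, solves del_residual = 0
        q_next = 2.0 * q[-1] - q[-2] - 0.5 * h**2 * (g / l) * (
            np.sin(0.5 * (q[-2] + q[-1])) + np.sin(0.5 * (q[-1] + q_next)))
    q.append(q_next)

q = np.array(q)
res = max(abs(del_residual(q[k - 1], q[k], q[k + 1], h)) for k in range(1, len(q) - 1))
print(res)  # the trajectory satisfies the discrete EL equations to solver tolerance
```

The premise is that residuals of this form, evaluated on measured position sequences, track the true dynamics closely enough for the structure preservation to carry over.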

What would settle it

A long-term simulation of a learned force-free system showing substantial deviation from conserved energy would indicate that the structure preservation does not hold in practice.
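This falsifier is easy to operationalize: integrate a force-free system over a long horizon and track the energy. The sketch below contrasts a symplectic Störmer-Verlet step (a variational integrator) with explicit Euler on a generic pendulum, not the paper's learned model, to show what bounded versus drifting energy looks like:

```python
import numpy as np

g, l = 9.81, 1.0

def energy(q, v):
    # Total energy of an ideal pendulum (unit mass).
    return 0.5 * l**2 * v**2 - g * l * np.cos(q)

def step_euler(q, v, h):
    # Explicit Euler: not structure-preserving.
    return q + h * v, v - h * (g / l) * np.sin(q)

def step_verlet(q, v, h):
    # Stoermer-Verlet: a variational (symplectic) integrator.
    v_half = v - 0.5 * h * (g / l) * np.sin(q)
    q_new = q + h * v_half
    v_new = v_half - 0.5 * h * (g / l) * np.sin(q_new)
    return q_new, v_new

h, steps = 0.01, 20000
qe, ve = 1.0, 0.0
qv, vv = 1.0, 0.0
E0 = energy(1.0, 0.0)
drift_euler, drift_verlet = 0.0, 0.0
for _ in range(steps):
    qe, ve = step_euler(qe, ve, h)
    qv, vv = step_verlet(qv, vv, h)
    drift_euler = max(drift_euler, abs(energy(qe, ve) - E0))
    drift_verlet = max(drift_verlet, abs(energy(qv, vv) - E0))

print(drift_euler, drift_verlet)  # Euler drifts; the variational scheme stays bounded
```

Running the same diagnostic on rollouts of a learned force-free LGP would directly test the structure-preservation claim.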

Figures

Figures reproduced from arXiv:2605.06246 by Jan-Hendrik Ewering, Kathrin Flaßkamp, Niklas Wahlström, Thomas B. Schön, and Thomas Seel.

Figure 1
Figure 1. We propose Lagrangian Gaussian Processes (LGPs) for learning probabilistic, non-conservative dynamics models using only position measurements, without access to velocity or momentum data. Incorporating the Lagrange-d'Alembert principle into Gaussian Processes (GPs), the method enables physically consistent long-term predictions.
Figure 2
Figure 2. The proposed LGPs enable learning probabilistic models of a simulated pendulum's Lagrangian L and external forces F only from position data. Incorporating further physics knowledge into the kernel improves the accuracy of L and F as well as the predictive performance. See Appendices A.5 and C for further details.
Figure 3
Figure 3. The proposed LGPs yield structure-preserving and accurate long-term forward simulations of a controlled pendulum. Continuous LGPs generalize to prediction time step sizes not seen during training (∆tpred ≠ ∆ttrain).
Figure 4
Figure 4. Prediction tasks in a controlled real-world double pendulum. The LGPs yield accurate forward simulations despite learning only from noisy real-world position data (∆ttrain = 61 ms, nq = 2, N = 300 data points). In the absence of inputs, the learned system energy (Hamiltonian H) along the trajectory decays due to dissipation, consistent with the underlying physics. Task 2.1 is visualized in Figure A10.
Figure 5
Figure 5. Prediction task in a controlled pneumatic real-world soft robot. Although the soft robot exhibits complex nonlinear kinematics and dynamics with hysteresis effects, the LGPs enable accurate forward simulations of the shape-describing parameters q⊤ = [∆x, ∆y, δℓ] (∆ttrain = 20 ms, nq = 3, N = 200 data points). Photo and data adopted from [30].
Original abstract

In this paper, we propose Lagrangian Gaussian Processes (LGPs) for probabilistic and data-efficient learning of dynamics via discrete forced Euler-Lagrange equations. Importantly, the geometric structure of the Lagrange-d'Alembert principle, which governs the motion of dynamical systems, is preserved by construction in the absence of external forces. This allows learning physically consistent models that overcome erroneous drift in the system's energy, thereby providing stable long-term predictions. At the core of our approach lie linear operators for Gaussian process conditioning, constructed from discrete forced Euler-Lagrange equations and variational discretization schemes. Thereby and unlike prior work, the method enables learning dynamics from discrete position snapshots, i.e., without access to a system's velocities or momenta. This is particularly relevant for a large class of practical scenarios where only position measurements are available, for instance, in motion capture or visual servoing applications. We demonstrate the data-efficiency and generalization capabilities of the LGPs in various synthetic and real-world case studies, including a real-world soft robot with hysteresis. The experimental results underscore that the LGPs learn physically consistent dynamics with uncertainty quantification solely from sparse positional data and enable stable long-term predictions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes Lagrangian Gaussian Processes (LGPs) that combine Gaussian process regression with linear operators derived from discrete forced Euler-Lagrange equations and variational discretization schemes. This construction is intended to preserve the geometric structure of the Lagrange-d'Alembert principle by construction in the absence of external forces, enabling learning of physically consistent dynamics solely from discrete position snapshots (without velocities or momenta) and yielding energy-stable long-term predictions with uncertainty quantification. The approach is demonstrated on synthetic benchmarks and a real-world soft robot exhibiting hysteresis.

Significance. If the central claim of exact structure preservation holds through the GP conditioning and rollout, the work would offer a useful advance in physics-informed probabilistic modeling of dynamics. It addresses a practical gap by operating from position-only data while enforcing variational principles, potentially improving long-term stability and data efficiency in robotics and control applications. The emphasis on uncertainty-aware, structure-preserving models from sparse observations is a positive contribution, though its impact hinges on rigorous confirmation that the GP posterior and continuous predictions retain the claimed conservation properties.

major comments (2)
  1. Abstract: The claim that the Lagrange-d'Alembert structure 'is preserved by construction' is load-bearing for the central contribution, yet the description leaves open whether the GP posterior mean (or samples) and the numerical integration used for continuous-time rollout exactly satisfy the discrete forced Euler-Lagrange equations at every step. Any deviation introduced by the kernel, conditioning, or integrator could allow energy drift, undermining the stability guarantee.
  2. Core construction of linear operators: It is unclear from the provided description how the variational discretization scheme interacts with the GP covariance function to ensure that the conditioned model remains exactly consistent with the discrete EL equations outside the training discretization points, particularly when external forces are absent.
minor comments (1)
  1. The abstract refers to 'various synthetic and real-world case studies' without quantifying data sparsity, baseline comparisons, or specific metrics for energy drift; adding these details would improve clarity.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive comments on our manuscript. The points raised highlight opportunities to strengthen the presentation of our core claims regarding structure preservation. We will revise the manuscript to provide additional mathematical detail and clarifications on how the GP posterior and rollout satisfy the discrete Euler-Lagrange equations.

Point-by-point responses
  1. Referee: Abstract: The claim that the Lagrange-d'Alembert structure 'is preserved by construction' is load-bearing for the central contribution, yet the description leaves open whether the GP posterior mean (or samples) and the numerical integration used for continuous-time rollout exactly satisfy the discrete forced Euler-Lagrange equations at every step. Any deviation introduced by the kernel, conditioning, or integrator could allow energy drift, undermining the stability guarantee.

    Authors: We appreciate this observation and agree that precision on this point is essential. In the revised manuscript we will clarify that the linear operators derived from the discrete forced Euler-Lagrange equations are used to condition the GP directly on the position snapshots. Consequently, both the posterior mean and samples satisfy the discrete EL equations exactly at every discrete time step corresponding to the observed positions (and at the same steps during rollout). For continuous-time predictions we employ the identical variational discretization scheme as a structure-preserving integrator; this guarantees that the discrete conservation properties (including energy stability in the absence of external forces) are maintained at every integration step. We will update the abstract for accuracy and add an appendix containing a short proof that the conditioned posterior and the discrete rollout incur no spurious energy drift. revision: yes

  2. Referee: Core construction of linear operators: It is unclear from the provided description how the variational discretization scheme interacts with the GP covariance function to ensure that the conditioned model remains exactly consistent with the discrete EL equations outside the training discretization points, particularly when external forces are absent.

    Authors: Thank you for identifying this need for elaboration. The variational discretization yields a set of linear operators (finite-difference approximations to velocities and accelerations consistent with the Lagrange-d'Alembert principle) that are applied to the underlying GP. Conditioning the GP on these operators equaling the (learned) force terms enforces the discrete EL equations at the discrete time instants where the operators are evaluated. Because the operators are linear, the posterior covariance is modified globally; any draw from the posterior therefore satisfies the discrete EL equations exactly at those instants, independent of whether they coincide with training locations. When external forces are absent the same operators reduce to the homogeneous discrete EL equations, yielding exact discrete conservation. For points between the discrete steps the GP provides a smooth interpolation that is consistent with the learned variational dynamics. In the revision we will expand the methods section with the explicit operator-kernel interaction and a short derivation showing that consistency holds at all discrete steps used by the model. revision: yes
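The rebuttal's claim that every posterior draw satisfies the constraints at the conditioned instants can be checked in miniature. The sketch below uses a generic RBF kernel and a first-difference operator as a stand-in for the discrete EL operators (not the authors' construction), conditions the GP, and verifies the property on samples:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
h = t[1] - t[0]
n = len(t)

d = t[:, None] - t[None, :]
K = np.exp(-0.5 * (d / 0.3) ** 2)   # RBF prior covariance

# First-difference operator as a stand-in for a discrete EL operator.
D = (np.eye(n)[1:] - np.eye(n)[:-1]) / h
y = np.cos(2 * np.pi * t[:-1])      # prescribed values for D f
sigma = 1e-10

G = D @ K @ D.T + sigma * np.eye(n - 1)
mean = K @ D.T @ np.linalg.solve(G, y)
cov = K - K @ D.T @ np.linalg.solve(G, D @ K)

# Draw posterior samples via an eigendecomposition (cov is singular
# after conditioning, so clip tiny negative eigenvalues to zero).
w, V = np.linalg.eigh(cov)
L_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
samples = mean[:, None] + L_half @ rng.standard_normal((n, 5))

# Every draw obeys the operator constraint at the conditioned points.
err = np.max(np.abs(D @ samples - y[:, None]))
print(err)
```

Because the constraint acts on the covariance, not just the mean, the guarantee extends to all draws at the conditioned instants, which is the point the response makes.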

Circularity Check

0 steps flagged

Structure preservation is enforced by construction in the discrete scheme; no reduction of central claim to fitted inputs or self-citation chain

full rationale

The derivation constructs linear operators for GP conditioning directly from discrete forced Euler-Lagrange equations and variational discretization schemes (external to the paper). This enforces the Lagrange-d'Alembert structure by design for the discrete case in the absence of forces, rather than deriving it from data or self-referential definitions. The GP learning step remains a standard conditioning on position snapshots, with structure preservation as an added constraint rather than a tautology. No load-bearing self-citation, fitted parameter renamed as prediction, or ansatz smuggled via prior work is present; the approach is self-contained against the cited variational principles.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

Review based solely on abstract; full details on parameters and assumptions unavailable. The approach relies on standard variational mechanics but introduces the LGP construction.

axioms (1)
  • domain assumption The Lagrange-d'Alembert principle governs the motion of dynamical systems and its geometric structure can be preserved via discrete forced Euler-Lagrange equations.
    Directly invoked in the abstract as the foundation for structure preservation by construction.

pith-pipeline@v0.9.0 · 5523 in / 1193 out tokens · 46811 ms · 2026-05-08T13:23:07.298972+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

51 extracted references · 4 canonical work pages

  [1] Joe Watson, Chen Song, Oliver Weeger, Theo Gruner, Le Thai An, Kay Hansel, Ahmed Hendawy, Oleg Arenz, Will Trojak, Miles Cranmer, Carlo D'Eramo, Fabian Buelow, Tanmay Goyal, Jan Peters, and Martin W. Hoffmann. Machine Learning with Physics Knowledge for Prediction: A Survey. Transactions on Machine Learning Research, 2025.
  [2] Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026–1030, 2020.
  [3] Jingyue Liu, Pablo Borja, and Cosimo Della Santina. Physics-Informed Neural Networks to Model and Control Robots: A Theoretical and Experimental Investigation. Advanced Intelligent Systems, 6(5), 2024.
  [4] Tim-Lukas Habich, Aran Mohammad, Simon F. G. Ehlers, Martin Bensch, Thomas Seel, and Moritz Schappler. Generalizable and Fast Surrogates: Model Predictive Control of Articulated Soft Robots Using Physics-Informed Neural Networks. IEEE Transactions on Robotics, 42:619–636, 2026.
  [5] Michael Lutter, Christian Ritter, and Jan Peters. Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning. In Int. Conf. on Learning Representations, 2019.
  [6] Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian Neural Networks. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, 2019.
  [7] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
  [8] Manuel Weiss, Alexander Pawluchin, Jan-Hendrik Ewering, Thomas Seel, and Ivo Boblan. Lagrangian Neural Network-Based Control: Improving Robotic Trajectory Tracking via Linearized Feedback. IEEE Robotics and Automation Letters, 11(3):2546–2553, 2026.
  [9] Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian Neural Networks. In ICLR Workshop on Integration of Deep Neural Models and Differential Equations, 2019.
  [10] Shanshan Xiao, Jiawei Zhang, and Yifa Tang. Generalized Lagrangian Neural Networks. Preprint, arXiv:2401.03728, 2024.
  [11] Sina Ober-Blöbaum, Oliver Junge, and Jerrold E. Marsden. Discrete mechanics and optimal control: An analysis. ESAIM: Control, Optimisation and Calculus of Variations, 17(2):322–352, 2011.
  [12] Lukas Brunke, Melissa Greeff, Adam W. Hall, Zhaocong Yuan, Siqi Zhou, Jacopo Panerati, and Angela P. Schoellig. Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning. Annual Review of Control, Robotics, and Autonomous Systems, 5(1):411–444, 2022.
  [13] Martine Dyring Hansen, Elena Celledoni, and Benjamin Kwanen Tapley. Learning mechanical systems from real-world data using discrete forced Lagrangian dynamics. Preprint, arXiv:2505.20370, 2025.
  [14] Zhiming Li, Fengyu Sun, Shuangshuang Wu, Fuchun Sun, Peilin Xiong, Cong Liu, and Wenbai Chen. Learning Accurate Robot Dynamics From Position-Only Data With Discrete Lagrangian Neural Networks. IEEE Robotics and Automation Letters, 11(3):2927–2934, 2026.
  [15] Thomas Beckers, Jacob Seidman, Paris Perdikaris, and George J. Pappas. Gaussian Process Port-Hamiltonian Systems: Bayesian Learning with Physics Prior. In Conf. on Decision and Control, pages 1447–1453. IEEE, 2022.
  [16] Rui Dai, Giulio Evangelisti, and Sandra Hirche. Physically consistent modeling & identification of nonlinear friction with dissipative Gaussian processes. In Learning for Dynamics and Control Conference, volume 242, pages 1415–1426. PMLR, 2024.
  [17] Giulio Giacomuzzo, Ruggero Carli, Diego Romeres, and Alberto Dalla Libera. A Black-Box Physics-Informed Estimator Based on Gaussian Process Regression for Robot Inverse Dynamics Identification. IEEE Transactions on Robotics, 40:4820–4836, 2024.
  [18] Yusuke Tanaka, Tomoharu Iwata, and Naonori Ueda. Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data. In Advances in Neural Information Processing Systems, volume 35, pages 20795–20808. Curran Associates, 2022.
  [19] Jan-Hendrik Ewering, Robin E. Herrmann, Niklas Wahlström, Thomas B. Schön, and Thomas Seel. Learning Dynamics from Input-Output Data with Hamiltonian Gaussian Processes. In Learning for Dynamics and Control Conference, 2026.
  [20] Sina Ober-Blöbaum and Christian Offen. Variational learning of Euler–Lagrange dynamics from data. Journal of Computational and Applied Mathematics, 421:114780, 2023.
  [21] Christian Offen. Machine learning of continuous and discrete variational ODEs with convergence guarantee and uncertainty quantification. Mathematics of Computation, 2025.
  [22] Giulio Evangelisti and Sandra Hirche. Physically Consistent Learning of Conservative Lagrangian Systems with Gaussian Processes. In Conf. on Decision and Control, pages 4078–4085. IEEE, 2022.
  [23] Edwin Bonilla, Kian Chai, and Christopher Williams. Multi-task Gaussian Process Prediction. In Advances in Neural Information Processing Systems, volume 20. Curran Associates, 2007.
  [24] Marvin Pförtner, Ingo Steinwart, Philipp Hennig, and Jonathan Wenger. Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers. Preprint, arXiv:2212.12474, 2022.
  [25] Giulio Evangelisti and Sandra Hirche. Data-Driven Momentum Observers With Physically Consistent Gaussian Processes. IEEE Transactions on Robotics, 40:1938–1951, 2024.
  [26] Giulio Giacomuzzo, Riccardo Cescon, Diego Romeres, Ruggero Carli, and Alberto Dalla Libera. Lagrangian inspired polynomial estimator for black-box learning and control of underactuated systems. In Learning for Dynamics and Control Conference, volume 242, pages 1292–1304. PMLR, 2024.
  [27] Shivesh Kumar, Felix Wiebe, Mahdi Javadi, Jonathan Babel, Lasse Maywald, Lasse J. Shala, Heiner Peters, Shubham Vyas, and Melya Boukheddimi. Dual Purpose Acrobot & Pendubot Platform, 2025. URL https://github.com/dfki-ric-underactuated-lab/double_pendulum
  [28] Felix Wiebe, Shivesh Kumar, Lasse J. Shala, Shubham Vyas, Mahdi Javadi, and Frank Kirchner. Open Source Dual-Purpose Acrobot and Pendubot Platform: Benchmarking Control Algorithms for Underactuated Robotics. IEEE Robotics & Automation Magazine, 31(2):113–124, 2024.
  [29] Cosimo Della Santina, Antonio Bicchi, and Daniela Rus. On an Improved State Parametrization for Soft Robots With Piecewise Constant Curvature and Its Use in Model Based Control. IEEE Robotics and Automation Letters, 5(2):1001–1008, 2020.
  [30] Maximilian Mehl, Max Bartholdt, Simon F. G. Ehlers, Thomas Seel, and Moritz Schappler. Adaptive State Estimation with Constant-Curvature Dynamics Using Force-Torque Sensors with Application to a Soft Pneumatic Actuator. In Int. Conference on Robotics and Autom., pages 14939–14945. IEEE, 2024.
  [31] Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, 2018.
  [32] Andrew Sosanya and Sam Greydanus. Dissipative Hamiltonian Neural Networks: Learning Dissipative and Conservative Dynamics Separately. Preprint, arXiv:2201.10085, 2022.
  [33] Fabian J. Roth, Dominik K. Klein, Maximilian Kannapinn, Jan Peters, and Oliver Weeger. Stable Port-Hamiltonian Neural Networks. In Conference on Neural Information Processing Systems, 2025.
  [34] Shaan A. Desai, Marios Mattheakis, David Sondak, Pavlos Protopapas, and Stephen J. Roberts. Port-Hamiltonian neural networks for learning explicit time-dependent dynamical systems. Physical Review E, 104(3-1):034312, 2021.
  [35] Tom Bertalan, Felix Dietrich, Igor Mezić, and Ioannis G. Kevrekidis. On learning Hamiltonian systems from data. Chaos, 29(12):121107, 2019.
  [36] Katharina Rath, Christopher G. Albert, Bernd Bischl, and Udo von Toussaint. Symplectic Gaussian process regression of maps in Hamiltonian systems. Chaos, 31(5):053121, 2021.
  [37] C. Offen and S. Ober-Blöbaum. Symplectic integration of learned Hamiltonian systems. Chaos, 32(1):013122, 2022.
  [38] Katharina Ensinger, Friedrich Solowjow, Sebastian Ziesche, Michael Tiemann, and Sebastian Trimpe. Structure-Preserving Gaussian Process Dynamics. In Machine Learning and Knowledge Discovery in Databases, volume 13717, pages 140–156. Springer Nature, Cham, 2023.
  [39] Magnus Ross and Markus Heinonen. Learning Energy Conserving Dynamics Efficiently with Hamiltonian Gaussian Processes. Transactions on Machine Learning Research, 2023.
  [40] Magnus Ross. Advances in Physics-informed Gaussian Process Regression. PhD thesis, The University of Manchester, 2024.
  [41] Jianyu Hu, Juan-Pablo Ortega, and Daiying Yin. A structure-preserving kernel method for learning Hamiltonian systems. Mathematics of Computation, 2025.
  [42] Minh Trinh, A. René Geist, Josefine Monnet, Stefan Vilceanu, Sebastian Trimpe, and Christian Brecher. Newtonian and Lagrangian Neural Networks: A Comparison Towards Efficient Inverse Dynamics Identification. IFAC-PapersOnLine, 59(18):31–36, 2025.
  [43] Rick Chartrand. Numerical Differentiation of Noisy, Nonsmooth Data. ISRN Applied Mathematics, 2011:1–11, 2011.
  [44] Yana Lishkova, Paul Scherer, Steffen Ridderbusch, Mateja Jamnik, Pietro Liò, Sina Ober-Blöbaum, and Christian Offen. Discrete Lagrangian Neural Networks with Automatic Symmetry Discovery. IFAC-PapersOnLine, 56(2):3203–3210, 2023.
  [45] Valentin Duruisseaux, Thai P. Duong, Melvin Leok, and Nikolay Atanasov. Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems. In Learning for Dynamics and Control Conference, volume 211, pages 731–744. PMLR, 2023.
  [46] Jan Brüdigam, Martin Schuck, Alexandre Capone, Stefan Sosnowski, and Sandra Hirche. Structure-Preserving Learning Using Gaussian Processes and Variational Integrators. In Learning for Dynamics and Control Conference, volume 168, pages 1150–1162. PMLR, 2022.
  [47] Herbert Goldstein, Charles P. Poole, and John L. Safko. Classical Mechanics. Addison Wesley, 3rd edition, 2002.
  [48] Carl Edward Rasmussen and Christopher Williams. Gaussian Processes for Machine Learning. MIT Press, 2005.
  [49] Radford M. Neal. Bayesian Learning for Neural Networks, volume 118. Springer, New York, NY, 1996.
  [50] Maximilian Mehl, Max Bartholdt, and Moritz Schappler. Dynamic Modeling of Soft-Material Actuators Combining Constant Curvature Kinematics and Floating-Base Approach. In Int. Conf. on Soft Robotics, pages 1–8. IEEE, 2022.
  [51] Jan-Hendrik Ewering, Max Bartholdt, Simon F. G. Ehlers, Niklas Wahlström, Thomas B. Schön, and Thomas Seel. Simultaneous State Estimation and Online Model Learning in a Soft Robotic System. In Int. Conf. on Information Fusion (FUSION). IEEE, 2026.