Steerable Neural ODEs on Homogeneous Spaces
Recognition: 2 theorem links
Pith reviewed 2026-05-13 06:32 UTC · model grok-4.3
The pith
Steerable neural ODEs on homogeneous spaces are G-equivariant whenever both the vector field generating the flow and the connection governing parallel transport are G-invariant.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Steerable NODEs on M = G/H interpret features as sections of associated vector bundles over M and describe their evolution as parallel transport. This yields a coupled system consisting of a flow equation on M and a steering equation on the features. The resulting models are G-equivariant whenever the vector field generating the flow and the connection governing parallel transport are both G-invariant. The framework also shows how existing NODE models and continuous normalizing flows on Lie groups arise as special cases within this geometric setting.
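In symbols (a schematic transcription in our notation, not necessarily the paper's): write $\gamma(t)$ for the trajectory on $M$, $f(t)$ for the feature vector in a local trivialization, $X$ for the generating vector field, $\omega$ for a local connection form, and $\rho_*$ for the induced Lie-algebra representation on the fiber. The coupled system then has the shape

```latex
% Schematic steerable NODE on M = G/H (notation assumed, not verbatim from the paper):
\begin{aligned}
  \dot{\gamma}(t) &= X\bigl(\gamma(t)\bigr)
    && \text{(flow equation on } M\text{)} \\
  \dot{f}(t)      &= -\,\rho_{*}\bigl(\omega(\dot{\gamma}(t))\bigr)\, f(t)
    && \text{(steering equation: parallel transport along } \gamma\text{)}
\end{aligned}
```

The second line is the ordinary parallel-transport ODE in a local gauge; G-invariance of $X$ and of the connection is exactly what makes both lines commute with the G-action.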
What carries the argument
The coupled flow and steering ODEs where features transform as sections of associated vector bundles via parallel transport under a G-invariant connection.
If this is right
- Steerable NODEs provide a geometric foundation for learning continuous-time equivariant dynamics of vector-valued features on homogeneous spaces.
- Existing manifold neural ODEs and continuous normalizing flows on Lie groups are incorporated as special cases of the framework.
- G-equivariance is guaranteed by the invariance of the vector field and the connection.
- The approach extends manifold NODEs by transporting associated feature vectors that transform under the local symmetry group H.
Where Pith is reading between the lines
- Designing networks where the connection is learned but constrained to be G-invariant could enforce equivariance automatically in continuous flows.
- This suggests similar steerable constructions might apply to discrete neural networks on homogeneous spaces by discretizing the parallel transport (see the sketch after this list).
- Applications to physical systems with continuous symmetries, such as rigid body dynamics on rotation groups, could benefit from the built-in equivariance.
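As a toy illustration of the discretization direction above, here is a minimal numerical sketch of parallel transport on the round sphere $S^2 \subset \mathbb{R}^3$ with its Levi-Civita connection, assuming unit-norm points; the function name and setup are ours, not the paper's. Chaining this step along a polyline approximates the continuous transport as the step size shrinks.

```python
import numpy as np

def transport(p, q, v):
    """Parallel-transport tangent vector v from p to q on the unit sphere
    along the connecting great circle: apply the Rodrigues rotation that
    carries p to q inside the plane span(p, q)."""
    w = np.cross(p, q)                       # rotation axis, |w| = sin(angle)
    c = p @ q                                # cos(angle); formula needs c != -1
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])       # skew-symmetric matrix of w
    R = np.eye(3) + K + (K @ K) / (1.0 + c)  # Rodrigues rotation taking p to q
    return R @ v

p = np.array([0.0, 0.0, 1.0])                # north pole
q = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])                # tangent at p (v . p = 0)
v_q = transport(p, q, v)
print(v_q, v_q @ q)                          # -> [0. 0. -1.], 0.0 (still tangent at q)
```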
Load-bearing premise
That features can be consistently interpreted as sections of associated vector bundles over the homogeneous space M and that their evolution is governed by parallel transport in a neural network setting.
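For orientation, the textbook identification underwriting this premise: sections of the associated bundle $G \times_\rho V$ over $M = G/H$ correspond to $H$-equivariant $V$-valued functions on $G$ (a Kobayashi–Nomizu-style standard fact, not specific to the paper).

```latex
\Gamma\bigl(G \times_{\rho} V\bigr) \;\cong\;
  \bigl\{\, f : G \to V \;\bigm|\; f(gh) = \rho(h)^{-1} f(g)
     \quad \forall\, g \in G,\ h \in H \,\bigr\}
```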
What would settle it
Integrate the coupled ODE system with an explicitly G-invariant vector field and connection on a test homogeneous space such as the sphere, then verify that the output features transform correctly when the group action is applied before versus after the integration.
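A minimal numeric version of that experiment, sketched on the rotation group G = SO(3), viewed as a homogeneous space with trivial isotropy, where nonzero left-invariant vector fields exist (on $S^2$ itself the only SO(3)-invariant vector field is zero, so there the content of the test sits at the feature level). All names below are illustrative scaffolding, not the paper's code: the flow of the left-invariant field $X(R) = RA$ must commute with left translation by any $g \in G$.

```python
import numpy as np
from scipy.linalg import expm

# A fixed element of the Lie algebra so(3); X(R) = R @ A is left-invariant.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

def flow(R0, t, steps=100):
    """Integrate dR/dt = R @ A with a Lie-Euler scheme (exact here since A
    is constant; the same loop handles a state-dependent A(R))."""
    R = R0.copy()
    dt = t / steps
    for _ in range(steps):
        R = R @ expm(dt * A)     # each step stays on SO(3) by construction
    return R

rng = np.random.default_rng(0)
g, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
g = g * np.sign(np.linalg.det(g))                 # force det = +1, i.e. g in SO(3)

R0 = np.eye(3)
lhs = flow(g @ R0, t=1.0)         # act by g, then flow
rhs = g @ flow(R0, t=1.0)         # flow, then act by g
print(np.max(np.abs(lhs - rhs)))  # ~1e-15: the flow is G-equivariant
```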
Original abstract
We introduce steerable neural ordinary differential equations on homogeneous spaces $M=G/H$. These models constitute a novel geometric extension of manifold neural ordinary differential equations (NODEs) that transport associated feature vectors transforming under the local symmetry group $H$. We interpret features as sections of associated vector bundles over $M$, and describe their evolution as parallel transport. This results in a coupled system of ODEs consisting of a flow equation on $M$ and a steering equation acting on features. We show that steerable NODEs are $G$-equivariant whenever the vector field generating the flow and the connection governing parallel transport are both $G$-invariant. Furthermore, we demonstrate how steerable NODEs incorporate existing NODE models and continuous normalizing flows on Lie groups. Our framework provides the geometric foundation for learning continuous-time equivariant dynamics of general vector-valued features on homogeneous spaces.
Editorial analysis
A structured set of objections, weighed in public.
Circularity Check
No significant circularity; the central equivariance claim follows from standard differential geometry.
Full rationale
The paper's derivation chain rests on the standard fact that a G-invariant vector field on M generates a G-equivariant flow and that parallel transport with a G-invariant connection commutes with the G-action on associated bundles. This is invoked directly in the abstract and full text without reducing to fitted parameters, self-definitions, or load-bearing self-citations. The coupled ODE system (flow on M plus steering on fibers) inherits equivariance under the stated hypotheses by construction from classical geometry, not from any internal renaming or ansatz smuggling. The bundle interpretation of features is the conventional one in geometric deep learning and introduces no self-referential loop. No equations equate a 'prediction' to its own inputs by definition.
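The classical fact invoked here can be checked in one line. If $X$ is $G$-invariant, i.e. $\mathrm{d}L_g \circ X = X \circ L_g$ for the action $L_g$, and $\varphi_t$ is the flow of $X$, then

```latex
% Invariant vector field => equivariant flow (standard uniqueness argument):
\frac{d}{dt}\, L_g\bigl(\varphi_t(x)\bigr)
  = \mathrm{d}L_g\Bigl(X\bigl(\varphi_t(x)\bigr)\Bigr)
  = X\Bigl(L_g\bigl(\varphi_t(x)\bigr)\Bigr),
```

so $t \mapsto L_g(\varphi_t(x))$ is an integral curve of $X$ through $L_g(x)$, and uniqueness of integral curves forces $L_g \circ \varphi_t = \varphi_t \circ L_g$.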
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: M is a homogeneous space of the form G/H
- Domain assumption: features transform as sections of associated vector bundles
Invented entities (1)
- steerable neural ODE (no independent evidence)
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean, theorem reality_from_one_distinction: tagged unclear. The relation between the paper passage and the cited Recognition theorem is unclear. Linked passage: "We interpret features as sections of associated vector bundles over M, and describe their evolution as parallel transport. This results in a coupled system of ODEs consisting of a flow equation on M and a steering equation acting on features."
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel: tagged unclear. The relation between the paper passage and the cited Recognition theorem is unclear. Linked passage: "We show that steerable NODEs are G-equivariant whenever the vector field generating the flow and the connection governing parallel transport are both G-invariant."
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, volume 31, 2018.
- [2] Jason Yim, Andrew Campbell, Andrew Y. K. Foong, Michael Gastegger, José Jiménez-Luna, Sarah Lewis, Victor Garcia Satorras, Bastiaan S. Veeling, Regina Barzilay, Tommi Jaakkola, and Frank Noé. Fast protein backbone generation with SE(3) flow matching. In Proceedings of the Machine Learning in Structural Biology Workshop (NeurIPS), 2023.
- [3] Avishek (Joey) Bose, Tara Akhound-Sadegh, Guillaume Huguet, Kilian Fatras, Jarrid Rector-Brooks, Cheng-Hao Liu, Andrei Cristian Nica, Maksym Korablyov, Michael Bronstein, and Alexander Tong. SE(3) stochastic flow matching for protein backbone generation. In Proceedings of the 12th International Conference on Learning Representations (ICLR), 2024.
- [4] Alex J. Li and Tanja Kortemme. ProteinZen: combining latent and SE(3) flow matching for all-atom protein generation. In Proceedings of the Machine Learning for Structural Biology Workshop (NeurIPS), 2024.
- [5] Tomas Geffner, Kieran Didi, Zhonglin Cao, Danny Reidenbach, Zuobai Zhang, Christian Dallago, Emine Kucukbenli, Karsten Kreis, and Arash Vahdat. La-Proteina: Atomistic protein generation via partially latent flow matching. arXiv e-print, arXiv:2507.09466 [cs.LG], 2025.
- [6] Ian Dunn and David R. Koes. FlowMol3: flow matching for 3D de novo small-molecule generation. Digital Discovery, 2026.
- [7] Mathis Gerdes, Pim de Haan, Roberto Bondesan, and Miranda C. N. Cheng. Nonperturbative trivializing flows for lattice gauge theories. Physical Review D, 112:094516, 2025.
- [8] Longde Huang, Oleksandr Balabanov, Hampus Linander, Mats Granath, Daniel Persson, and Jan E. Gerken. Learning Chern numbers of topological insulators with gauge equivariant neural networks. In Advances in Neural Information Processing Systems, volume 38, 2025.
- [9] Luca Falorsi and Patrick Forré. Neural ordinary differential equations on manifolds. In Proceedings of the INNF+ Workshop of the International Conference on Machine Learning (ICML), 2020.
- [10] Aaron Lou, Derek Lim, Isay Katsman, Leo Huang, Qingxuan Jiang, Ser Nam Lim, and Christopher M. De Sa. Neural manifold ordinary differential equations. In Advances in Neural Information Processing Systems, volume 33, 2020.
- [11] Emile Mathieu and Maximilian Nickel. Riemannian continuous normalizing flows. In Advances in Neural Information Processing Systems, volume 33, 2020.
- [12] Jonas Köhler, Leon Klein, and Frank Noé. Equivariant flows: Exact likelihood generative learning for symmetric densities. In Proceedings of the 37th International Conference on Machine Learning, pages 5361–5370. PMLR, 2020.
- [13] Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser-Nam Lim, and Christopher De Sa. Equivariant manifold flows. In Advances in Neural Information Processing Systems, volume 34, 2021.
- [14] Emma Andersdotter, Daniel Persson, and Fredrik Ohlsson. Equivariant manifold neural ODEs and differential invariants. Journal of Machine Learning Research, 26:1–30, 2025.
- [15] Taco Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. In Advances in Neural Information Processing Systems, volume 32, 2019.
- [16] Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, and Max Welling. Covariance in physics and convolutional neural networks. In Proceedings of the Theoretical Physics for Deep Learning Workshop of the International Conference on Machine Learning (ICML), 2019.
- [17] Jan E. Gerken, Jimmy Aronsson, Oscar Carlsson, Hampus Linander, Fredrik Ohlsson, Christoffer Petersson, and Daniel Persson. Geometric deep learning and equivariant neural networks. Artificial Intelligence Review, 56:14605–14662, 2023.
- [18] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 2747–2755. PMLR, 2018.
- [19] Jimmy Aronsson. Homogeneous vector bundles and G-equivariant convolutional neural networks. Sampling Theory, Signal Processing, and Data Analysis, 20, 2022.
- [20] Elias Nyholm, Oscar Carlsson, Maurice Weiler, and Daniel Persson. Equivariant non-linear maps for neural networks on homogeneous spaces. Mathematical Foundations of Machine Learning, 2, 2026.
- [21] Hsien-Chung Wang. On invariant connections over a principal fibre bundle. Nagoya Mathematical Journal, 13:1–19, 1958.
- [22] Shoshichi Kobayashi and Katsumi Nomizu. Foundations of Differential Geometry I. In Interscience Tracts in Pure and Applied Mathematics, volume 15. Wiley, 1963.
- [23]
- [24] John M. Lee. Introduction to Smooth Manifolds. Springer, 2012.
- [25] David Bleecker. Gauge Theory and Variational Principles. Addison-Wesley Publishing Company, Inc., 1981.
- [26] Ivan Kolar, Peter W. Michor, and Jan Slovak. Natural Operations in Differential Geometry. Springer Science & Business Media, 2013.
- [27] Taco S. Cohen and Max Welling. Steerable CNNs. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
- [28] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In Proceedings of the 11th International Conference on Learning Representations (ICLR), 2023.
- [29] Victor Garcia Satorras, Emiel Hoogeboom, Fabian B. Fuchs, Ingmar Posner, and Max Welling. E(n) equivariant normalizing flows. In Advances in Neural Information Processing Systems, volume 34, 2021.
- [30] Jonas Cassel, Fabio Schlindwein, Peter Albers, and Christoph Schnörr. Bundle scale spaces and local gauge symmetries for graph networks. In Scale Space and Variational Methods in Computer Vision, pages 245–257, 2025.
- [31] Jakob Hansen and Thomas Gebhart. Sheaf neural networks. In Proceedings of the Topological Data Analysis and Beyond Workshop (NeurIPS), 2020.
- [32] Federico Barbero, Cristian Bodnar, Haitz Sáez de Ocáriz Borde, Michael Bronstein, Petar Veličković, and Pietro Liò. Sheaf neural networks with connection laplacians. In Proceedings of the Topological, Algebraic, and Geometric Learning Workshops (ICML), 2022.
- [33] Hampus Linander, Christoffer Petersson, Daniel Persson, and Jan E. Gerken. PEAR: equal area weather forecasting on the sphere. In Proceedings of the AI for Science workshop (NeurIPS), 2025.
- [34] Matteo Favoni, Andreas Ipp, David I. Müller, and Daniel Schuh. Lattice gauge equivariant convolutional neural networks. Physical Review Letters, 128:032003, 2022.