Pith · machine review for the scientific record

arXiv: 2605.05024 · v1 · submitted 2026-05-06 · 📊 stat.ML · cs.LG · stat.CO · stat.ME


Hypergraph Generation via Structured Stochastic Diffusion


Pith reviewed 2026-05-08 16:28 UTC · model grok-4.3

classification: 📊 stat.ML · cs.LG · stat.CO · stat.ME
keywords: hypergraph generation · stochastic diffusion · incidence matrices · generative models · reverse SDE · permutation-equivariant networks · higher-order interactions

The pith

A diffusion model on relaxed hypergraph incidence matrices learns exact reverse-drift targets in closed form and generates samples via a reverse-time SDE.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to show that hypergraph generation can be performed directly in incidence space by a structured diffusion whose forward process is linear-Gaussian conditional on the data. This linearity supplies exact conditional means, covariances, scores, and reverse-drift targets, so a permutation-equivariant network can be trained by simple regression rather than score matching. A reader would care because hypergraphs encode higher-order interactions that pairwise reductions lose, and faithful generation would let models simulate complex systems such as collaboration networks or metabolic pathways without discarding multi-way structure. The work also supplies theoretical statements of exactness and finite-horizon stability under ideal conditions.
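For concreteness, the incidence-matrix representation the method operates on can be built in a few lines. The actor–movie example below mirrors the paper's Figure 1, but the specific names and hyperedges are hypothetical:

```python
import numpy as np

# Illustrative only: a small actor-movie hypergraph as an incidence matrix.
# Rows are nodes (actors), columns are hyperedges (movies); H[i, j] = 1
# means actor i appears in movie j. All names here are made up.
actors = ["A", "B", "C", "D"]
movies = ["m1", "m2", "m3"]
hyperedges = {"m1": {"A", "B"}, "m2": {"B", "C", "D"}, "m3": {"A", "D"}}

H = np.zeros((len(actors), len(movies)))
for j, m in enumerate(movies):
    for i, a in enumerate(actors):
        if a in hyperedges[m]:
            H[i, j] = 1.0

# Hyperedge sizes (column sums) and node degrees (row sums) fall out directly;
# pairwise reductions of H would discard exactly this multi-way structure.
print(H.sum(axis=0))  # hyperedge cardinalities
print(H.sum(axis=1))  # node degrees
```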

Core claim

The central claim is that a forward diffusion formed by a hypergraph-specific two-sided heat operator plus an Ornstein-Uhlenbeck component remains linear-Gaussian when conditioned on an observed hypergraph. Because the process is linear-Gaussian, closed-form expressions exist for every quantity required to train a state-only reverse-drift field; this field is realized as a permutation-equivariant map in incidence space and is regressed directly onto the exact targets. Generation proceeds by integrating the learned reverse-time SDE from a known Gaussian terminal distribution. The authors prove exact recovery in the ideal setting and finite-horizon stability, and report improved empirical generation quality relative to strong baselines.

What carries the argument

The hypergraph-specific two-sided heat operator combined with an Ornstein-Uhlenbeck component, which keeps the forward process linear-Gaussian and thereby supplies closed-form reverse-drift targets.
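A minimal sketch of what such a forward process could look like. The paper's actual heat operator is not specified in this review, so the diagonal operators below are hypothetical placeholders; the point is only that a drift of the form −(L_r X + X L_c) − θX is linear in X, which is what keeps the transitions Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's operators: fixed (state-independent)
# row- and column-side operators L_r, L_c plus an OU pull toward zero.
# Because the drift is linear in X, the transition law stays Gaussian.
n, m = 4, 3
L_r = np.diag(rng.uniform(0.5, 1.5, n))   # placeholder row operator
L_c = np.diag(rng.uniform(0.5, 1.5, m))   # placeholder column operator
theta, sigma = 0.5, 0.2

def drift(X):
    # Two-sided heat term plus OU mean reversion -- all linear in X.
    return -(L_r @ X + X @ L_c) - theta * X

def euler_maruyama_step(X, dt):
    # One Euler-Maruyama step of the forward SDE.
    noise = rng.standard_normal(X.shape)
    return X + drift(X) * dt + sigma * np.sqrt(dt) * noise
```

Linearity of `drift` is what makes conditional means, covariances, and scores available in closed form.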

If this is right

  • Exact sampling holds in the ideal state-only setting.
  • Finite-horizon stability guarantees apply to trajectories of the reverse SDE.
  • Generated hypergraphs show improved fidelity relative to strong baselines on standard quality metrics.
  • The state-only formulation removes any need for history-dependent conditioning during training.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same linear-Gaussian construction could be tested on other incidence-based objects such as simplicial complexes once analogous heat operators are defined.
  • Permutation equivariance automatically respects the symmetries of the hypergraph without extra regularization.
  • Continuous relaxation of the incidence matrix permits gradient-based optimization before final thresholding to discrete hyperedges.
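The relax-then-threshold idea in the last bullet can be sketched in a few lines. The 0.5 cutoff is an arbitrary illustration; the paper's actual discretisation rule is not given in this review:

```python
import numpy as np

# Minimal sketch: the diffusion operates on a continuous relaxation X of the
# incidence matrix, and a final cutoff (0.5 here, chosen arbitrarily)
# recovers a binary incidence matrix with discrete hyperedges.
def threshold_incidence(X, tau=0.5):
    return (X >= tau).astype(float)

X = np.array([[0.9, 0.1],
              [0.4, 0.7]])
print(threshold_incidence(X))
```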

Load-bearing premise

The forward process combining the hypergraph-specific two-sided heat operator with an Ornstein-Uhlenbeck component remains linear-Gaussian conditional on an observed hypergraph.

What would settle it

A direct numerical check of whether the empirical conditional mean and covariance of the noised incidence matrix match the closed-form linear-Gaussian predictions derived from the operator; a systematic deviation would falsify the load-bearing premise, while agreement across operators and horizons would confirm it.
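A toy version of this check, assuming the generic two-sided linear form dX = (−(L_r X + X L_c) − θX) dt + σ dW with fixed diagonal operators (stand-ins, not the paper's construction). If the process really is linear-Gaussian, the Monte Carlo mean of Euler-Maruyama paths should match the closed-form mean e^{−θt} e^{−L_r t} X₀ e^{−L_c t}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward process with diagonal L_r, L_c, so the matrix
# exponentials reduce to elementwise exponentials of the diagonals.
n, m = 3, 2
L_r = np.diag([0.5, 1.0, 1.5])
L_c = np.diag([0.8, 1.2])
theta, sigma, T, steps, n_paths = 0.3, 0.1, 1.0, 200, 2000
dt = T / steps
X0 = rng.standard_normal((n, m))

def closed_form_mean(t):
    # Mean of the linear-Gaussian transition: exp(-theta t) e^{-L_r t} X0 e^{-L_c t}.
    Er = np.diag(np.exp(-np.diag(L_r) * t))
    Ec = np.diag(np.exp(-np.diag(L_c) * t))
    return np.exp(-theta * t) * (Er @ X0 @ Ec)

# Euler-Maruyama over a batch of paths, all started at X0.
X = np.repeat(X0[None], n_paths, axis=0)
for _ in range(steps):
    drift = -(np.einsum("ij,pjk->pik", L_r, X)
              + np.einsum("pij,jk->pik", X, L_c)) - theta * X
    X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(X.shape)

err = np.abs(X.mean(axis=0) - closed_form_mean(T)).max()
print(f"max |empirical - closed-form| mean error: {err:.4f}")
```

A small residual (Monte Carlo noise plus O(Δt) discretisation bias) is expected; a large, structured deviation would be evidence against the linear-Gaussian premise.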

Figures

Figures reproduced from arXiv: 2605.05024 by Christopher Nemeth.

Figure 1. Left: an actor–movie hypergraph. Right: H is the incidence matrix representation of the hypergraph, where rows correspond to actors, columns to movies, and Hij = 1 indicates that actor i appears in movie j. The forward SDE removes the hypergraph structure and converges to a Gaussian terminal distribution; the reverse SDE reconstructs the hypergraph from Gaussian initialisation.
Figure 2. Real-data comparisons. Left: higher-order metrics across real hypergraph datasets. Lower is better for …
Figure 3. Actor dataset: bipartite visualisation of held-out subhypergraph incidence matrices and matched generated …
Figure 4. Citeseer dataset: bipartite visualisation of held-out subhypergraph incidence matrices and matched generated …
Figure 5. Cora dataset: bipartite visualisation of held-out subhypergraph incidence matrices and matched generated …
Figure 6. House committees dataset: bipartite visualisation of held-out subhypergraph incidence matrices and matched …
Original abstract

Hypergraphs model higher-order interactions, but realistic hypergraph generation remains difficult because incidence, hyperedge-size heterogeneity, and overlap structure are not faithfully captured by pairwise reductions. We propose HEDGE, a generative model defined directly on relaxed incidence matrices via a structured stochastic diffusion. The forward process combines a hypergraph-specific two-sided heat operator with an Ornstein–Uhlenbeck component, preserving structure-aware noising near the data while yielding an explicit Gaussian terminal law. Conditional on an observed hypergraph, this forward process is linear-Gaussian, so conditional means, covariances, scores, and reverse-drift targets are available in closed form. We therefore learn a permutation-equivariant state-only reverse-drift field in incidence space by regressing onto exact conditional targets, and generate samples by simulating a learned reverse-time SDE from the Gaussian base law. We establish exactness in the ideal state-only setting together with finite-horizon stability guarantees, and empirically show improved hypergraph generation quality relative to strong baselines.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces HEDGE, a generative model for hypergraphs defined directly on relaxed incidence matrices via structured stochastic diffusion. The forward process combines a hypergraph-specific two-sided heat operator with an Ornstein-Uhlenbeck component, which the authors assert remains linear-Gaussian conditional on an observed hypergraph. This enables closed-form conditional means, covariances, scores, and reverse-drift targets. A permutation-equivariant state-only reverse-drift field is learned by regressing onto these exact targets, with samples generated by simulating the learned reverse-time SDE from a Gaussian base law. The work claims exactness in the ideal state-only setting, finite-horizon stability guarantees, and improved empirical hypergraph generation quality over strong baselines.

Significance. If the linearity of the forward process holds and the closed-form derivations are valid, this provides a principled diffusion framework for higher-order structures that avoids pairwise reductions and enables exact score matching without post-hoc approximations. The finite-horizon stability guarantees and structure-aware noising are notable strengths that could advance generative modeling in network science and combinatorial data.

major comments (2)
  1. §3 (Forward Process): The central claim that the forward SDE remains linear-Gaussian conditional on a fixed observed hypergraph (enabling closed-form means, covariances, scores, and reverse-drift targets) requires explicit verification that the two-sided heat operator is a fixed linear map on the relaxed incidence matrix that commutes appropriately with the OU drift and diffusion. If the operator incorporates state-dependent normalization (e.g., by current hyperedge cardinalities) or nonlinear thresholding, the Gaussian family is not preserved; the abstract asserts this property, but the provided text does not include the derivation steps or error analysis needed to confirm it.
  2. §4 (Theoretical Guarantees): The exactness result in the ideal state-only setting and the finite-horizon stability guarantees are load-bearing for the method's validity. These should be stated with the precise assumptions on the operator and the reverse-drift regression; without the full derivation, it is unclear whether the permutation-equivariant network can regress onto the exact targets without introducing bias that violates the claimed exactness.
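The worry in the first major comment can be made concrete with a toy example: a drift that normalises by a current state statistic (a hypothetical example of state-dependent normalisation, not a construction from the paper) fails superposition, so it is not a fixed linear map and cannot preserve the Gaussian family:

```python
import numpy as np

rng = np.random.default_rng(3)

def linear_drift(X, L):
    # Fixed linear map: superposition holds, Gaussianity is preserved.
    return -(L @ X)

def normalised_drift(X):
    # State-dependent scaling (dividing by current column sums):
    # superposition fails, so the map is not linear in X.
    return -X / X.sum(axis=0, keepdims=True)

L = np.diag([0.5, 1.0, 1.5])
X = rng.uniform(1, 2, (3, 2))
Y = rng.uniform(1, 2, (3, 2))

print(np.allclose(linear_drift(X + Y, L),
                  linear_drift(X, L) + linear_drift(Y, L)))   # True
print(np.allclose(normalised_drift(X + Y),
                  normalised_drift(X) + normalised_drift(Y))) # False
```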
minor comments (2)
  1. [§2] Notation for the relaxed incidence matrix and the two-sided heat operator should be defined with explicit matrix dimensions and time-dependence (or lack thereof) in the first section where they appear.
  2. [§5] The empirical section would benefit from an ablation isolating the contribution of the hypergraph-specific heat operator versus a standard OU process.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and insightful comments on our manuscript. We address each major comment below with clarifications on the structure of the forward process and the theoretical results. Where the presentation can be strengthened, we commit to revisions.

read point-by-point responses
  1. Referee: §3 (Forward Process): The central claim that the forward SDE remains linear-Gaussian conditional on a fixed observed hypergraph (enabling closed-form means, covariances, scores, and reverse-drift targets) requires explicit verification that the two-sided heat operator is a fixed linear map on the relaxed incidence matrix that commutes appropriately with the OU drift and diffusion. If the operator incorporates state-dependent normalization (e.g., by current hyperedge cardinalities) or nonlinear thresholding, the Gaussian family is not preserved; the abstract asserts this property, but the provided text does not include the derivation steps or error analysis needed to confirm it.

    Authors: The two-sided heat operator is defined as a fixed linear map on the relaxed incidence matrix; it does not incorporate state-dependent normalization by hyperedge cardinalities or any nonlinear thresholding. The operator is constructed explicitly to commute with the linear Ornstein-Uhlenbeck drift and diffusion, which directly yields the linear-Gaussian property conditional on an observed hypergraph and the associated closed-form expressions. We acknowledge that the main text presents these facts at a high level without the full commutation proof and error bounds. In the revision we will add an appendix containing the explicit derivation, verification that the map remains linear and fixed, and the resulting closed-form score and drift targets. revision: yes

  2. Referee: §4 (Theoretical Guarantees): The exactness result in the ideal state-only setting and the finite-horizon stability guarantees are load-bearing for the method's validity. These should be stated with the precise assumptions on the operator and the reverse-drift regression; without the full derivation, it is unclear whether the permutation-equivariant network can regress onto the exact targets without introducing bias that violates the claimed exactness.

    Authors: The exactness claim in the ideal state-only setting is conditioned on the reverse-drift field being regressed exactly onto the closed-form targets; because those targets inherit permutation equivariance from the forward process, any sufficiently expressive permutation-equivariant network can in principle recover them without systematic bias. The finite-horizon stability guarantees rest on the bounded operator norm of the fixed linear heat map over finite time and standard Lipschitz conditions on the learned drift. We agree that these assumptions should be stated explicitly. In the revision we will expand §4 with a precise list of assumptions, a sketch of the stability proof, and a short discussion of approximation error in the regression step. revision: yes
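The equivariance inheritance the authors appeal to can be illustrated numerically: if the row and column operators are rebuilt from the relabelled hypergraph (i.e. conjugated by the node and hyperedge permutations), a two-sided linear drift commutes with relabelling. The symmetric operators below are random placeholders, not the paper's heat operator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Check f_perm(P X Q^T) = P f(X) Q^T for a two-sided linear drift, where
# f_perm uses the conjugated operators P L_r P^T and Q L_c Q^T.
n, m, theta = 4, 3, 0.5
A = rng.standard_normal((n, n)); L_r = A + A.T   # placeholder row operator
B = rng.standard_normal((m, m)); L_c = B + B.T   # placeholder column operator
X = rng.standard_normal((n, m))

P = np.eye(n)[rng.permutation(n)]   # node relabelling
Q = np.eye(m)[rng.permutation(m)]   # hyperedge relabelling

def f(X, Lr, Lc):
    return -(Lr @ X + X @ Lc) - theta * X

lhs = f(P @ X @ Q.T, P @ L_r @ P.T, Q @ L_c @ Q.T)
rhs = P @ f(X, L_r, L_c) @ Q.T
print(np.allclose(lhs, rhs))  # True
```

This only shows that the target map is equivariant when the operators transform with the hypergraph; whether a finite network regresses onto it without bias is the separate approximation question the rebuttal concedes.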

Circularity Check

0 steps flagged

Standard derivation of reverse targets from linear-Gaussian forward process; no reduction to fitted inputs or self-citations.

full rationale

The paper defines a forward SDE combining a fixed two-sided heat operator with OU noise, states that this yields linear-Gaussian transitions conditional on a fixed hypergraph, and derives closed-form conditional statistics directly from that definition. The reverse-drift regression target is then the exact expression obtained from the forward process; this is the standard score-matching construction in diffusion models and does not equate any learned quantity to its own input by construction. No self-citation is invoked to justify uniqueness or the form of the operator, and the stability claims are presented as consequences of the same linear-Gaussian assumption rather than as independent predictions. The derivation chain is therefore self-contained once the linearity assumption is granted.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Review performed on abstract only; the central construction rests on the unverified claim that the two-sided heat operator plus OU process yields an exactly linear-Gaussian forward process on relaxed incidence matrices.

axioms (1)
  • Domain assumption: the forward diffusion process on relaxed incidence matrices is linear-Gaussian conditional on the observed hypergraph.
    The abstract states that this property enables closed-form conditional means, covariances, scores, and reverse-drift targets.

pith-pipeline@v0.9.0 · 5467 in / 1269 out tokens · 36772 ms · 2026-05-08T16:28:25.548346+00:00 · methodology

discussion (0)


    = 0. Hence Erev = 0, the stability term in (32) vanishes, and only theO(∆t 1/2)Euler–Maruyama term remains. A.7 Equivariance of theL 2-optimal target LetP∈R n×n andQ∈R m×m be permutation matrices. The action ofS n ×S m on incidence-space states is X7− →P XQ ⊤, X∈R n×m. We write TP,Q(X) :=P XQ ⊤. This map is orthogonal with respect to the Frobenius inner p...