GRIFDIR: Graph Resolution-Invariant FEM Diffusion Models in Function Spaces over Irregular Domains
Pith reviewed 2026-05-07 16:59 UTC · model grok-4.3
The pith
Representing graph convolutional kernels as finite element functions allows diffusion models to generate functions on irregular domains while preserving resolution invariance.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors establish that a novel architecture representing generalised graph convolutional kernels as finite element functions serves as an effective backbone for score-based diffusion models in function spaces. This enables the models to handle unstructured meshes and complex domain topologies naturally, as shown through unconditional and conditional sampling experiments that maintain resolution invariance and achieve high fidelity on diverse geometries including non-convex and multiply-connected domains.
What carries the argument
Generalised graph convolutional kernels represented as finite element functions, which replace grid-biased operators to process irregular discretizations in the diffusion process.
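The mechanism can be sketched in one dimension: a kernel stored as coefficients on mesh nodes nonetheless defines a continuous finite element function that can be queried at any points, which is what decouples it from a single discretisation. A minimal illustration using piecewise-linear (hat-basis) interpolation via `np.interp`; the mesh, coefficient values, and query grids below are invented for illustration, not taken from the paper:

```python
import numpy as np

def fem_interp(nodes, coeffs, x):
    """Evaluate a piecewise-linear finite element function.

    The kernel is stored as coefficients on mesh nodes but defines a
    continuous function, so it can be queried at ANY points.
    """
    return np.interp(x, nodes, coeffs)

# Kernel coefficients on a coarse, irregular 1D "mesh" (illustrative).
nodes = np.array([0.0, 0.13, 0.4, 0.55, 0.81, 1.0])
coeffs = np.sin(2 * np.pi * nodes)          # stand-in for learned values

# The same continuous kernel evaluated on two different discretisations.
x_coarse = np.linspace(0, 1, 11)
x_fine = np.linspace(0, 1, 101)
k_coarse = fem_interp(nodes, coeffs, x_coarse)
k_fine = fem_interp(nodes, coeffs, x_fine)

# Points shared by both query grids receive (numerically) identical values.
assert np.allclose(k_fine[::10], k_coarse)
```

A grid-biased operator, by contrast, would only define the kernel at the nodes of one fixed discretisation.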
If this is right
- The proposed method maintains resolution invariance across varying discretizations of the same domain.
- It achieves high fidelity when capturing functional distributions on non-trivial geometries.
- Both unconditional and conditional sampling become feasible on complex domains.
- The approach generalizes to non-convex and multiply-connected domains where grid-based methods fail.
Where Pith is reading between the lines
- This could facilitate the use of diffusion models in conjunction with traditional finite element simulations for generating physically consistent functions.
- Similar representations might improve other types of operator learning beyond diffusion, such as in solving PDEs on irregular domains.
- Testing on even more intricate topologies or higher-dimensional function spaces would further validate the resolution invariance property.
Load-bearing premise
That casting generalized graph convolutional kernels as finite element functions removes the grid biases of existing methods and enables effective modeling on unstructured meshes and complex geometries.
What would settle it
Training the model on a set of functions defined on one unstructured mesh of a complex domain and then checking whether samples generated on a much finer or differently connected mesh of the identical domain exhibit the same statistical properties and maintain high fidelity.
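A minimal sketch of such a cross-mesh check, with a fixed Gaussian-process sampler standing in for the trained diffusion model so the script runs; the meshes, length scale, and summary statistic are illustrative assumptions, not the paper's experimental protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_field(nodes, n_samples, rng):
    """Stand-in sampler: draws random functions evaluated on `nodes`.

    In the real test this would be the trained model queried on a given
    mesh; here a stationary Gaussian process plays that role (assumption).
    """
    d = nodes[:, None] - nodes[None, :]
    cov = np.exp(-0.5 * (d / 0.2) ** 2) + 1e-8 * np.eye(len(nodes))
    return rng.multivariate_normal(np.zeros(len(nodes)), cov, size=n_samples)

coarse = np.sort(rng.uniform(0, 1, 40))    # irregular training mesh
fine = np.sort(rng.uniform(0, 1, 400))     # much finer evaluation mesh

s_coarse = sample_field(coarse, 2000, rng)
s_fine = sample_field(fine, 2000, rng)

# Compare a mesh-independent summary statistic: the mean squared
# amplitude, which for a stationary field should not depend on the mesh.
e_coarse = np.mean(s_coarse ** 2)
e_fine = np.mean(s_fine ** 2)
print(abs(e_coarse - e_fine))  # small for a resolution-invariant sampler
```

A full version of this test would compare richer distributional metrics (spectra, covariance estimates) rather than a single moment.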
Original abstract
Score-based diffusion models in infinite-dimensional function spaces provide a mathematically principled framework for modelling function-valued data, offering key advantages such as resolution invariance and the ability to handle irregular discretisations. However, practical implementations have struggled to fully realise these benefits. Existing backbones like Fourier neural operators are often biased towards regular grids and fail to generalise to complex domain topologies. We propose a novel architecture for function-space diffusion models that represents generalised graph convolutional kernels as finite element functions, enabling the model to naturally handle unstructured meshes and complex geometries. We demonstrate the efficacy of our network architecture through a series of unconditional and conditional sampling experiments across diverse geometries, including non-convex and multiply-connected domains. Our results show that the proposed method maintains resolution invariance and achieves high fidelity in capturing functional distributions on non-trivial geometries.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces GRIFDIR, a score-based diffusion model architecture for function spaces over irregular domains. It represents generalized graph convolutional kernels as finite element functions to achieve resolution invariance and handle unstructured meshes and complex (non-convex, multiply-connected) geometries, where prior methods like Fourier neural operators are biased toward regular grids. The approach is evaluated via unconditional and conditional sampling experiments demonstrating high-fidelity capture of functional distributions.
Significance. If the resolution-invariance claim holds without mesh-dependent artifacts, the work would meaningfully extend function-space diffusion models to scientific applications on realistic geometries, addressing a key practical limitation of existing backbones. The FEM-graph kernel construction is a targeted idea that could generalize beyond the reported experiments.
major comments (2)
- [§4] §4 (Experiments) and the associated method derivation: the headline claim of resolution invariance rests on the assumption that the FEM projection of the graph convolutional kernel commutes with mesh refinement and leaves the learned score unchanged. No explicit verification (e.g., operator-norm bounds or convergence rates under h-refinement) is provided; the reported experiments on diverse geometries show visual fidelity but do not isolate whether discretization artifacts vary with mesh density, which is precisely the stress test the invariance claim requires.
- [§3.1] §3.1 (Architecture): the generalized graph convolutional kernel is cast as a finite-element function, yet the paper does not specify how the graph Laplacian discretization is chosen or whether it is independent of the underlying mesh density. If the effective operator changes with refinement, the diffusion dynamics become resolution-dependent, undermining the central invariance result.
minor comments (2)
- The abstract and introduction would benefit from a concise statement of the precise function space (e.g., Sobolev or L2) in which the diffusion is defined.
- Figure captions should explicitly state mesh resolutions and domain topologies for each panel to allow readers to assess the invariance claim visually.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed comments, which help clarify the presentation of our resolution-invariance results. We respond to each major comment below, providing additional explanation of the method's design while indicating revisions to strengthen the manuscript.
Point-by-point responses
-
Referee: [§4] §4 (Experiments) and the associated method derivation: the headline claim of resolution invariance rests on the assumption that the FEM projection of the graph convolutional kernel commutes with mesh refinement and leaves the learned score unchanged. No explicit verification (e.g., operator-norm bounds or convergence rates under h-refinement) is provided; the reported experiments on diverse geometries show visual fidelity but do not isolate whether discretization artifacts vary with mesh density, which is precisely the stress test the invariance claim requires.
Authors: We agree that explicit operator-norm bounds or convergence rates under h-refinement are not derived in the original submission. The architectural choice to represent generalized graph convolutional kernels as finite-element functions is intended to ensure that the kernel lives in the underlying function space; by standard FEM approximation theory, the projection error vanishes with mesh refinement, so the learned score remains consistent in the continuous limit. We have revised the manuscript to add a short discussion of this consistency property (new paragraph in Section 3) together with a reference to classical FEM error estimates. In addition, we have inserted a brief analysis in Section 4 that reports sampling consistency (via distributional metrics) across successively refined meshes on the same underlying functions, thereby isolating discretization effects more explicitly than the original visual comparisons. A full operator-norm analysis lies outside the scope of the present work but could be pursued in follow-up research. revision: partial
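The h-refinement evidence the referee asks for can be sketched numerically: project a smooth stand-in kernel onto successively refined piecewise-linear spaces and confirm the classical O(h²) decay of the L2 interpolation error that the rebuttal's appeal to FEM approximation theory relies on. The test function and uniform meshes below are illustrative, not the paper's:

```python
import numpy as np

def l2_interp_error(f, n):
    """L2 error of piecewise-linear interpolation of f on a uniform
    mesh of [0, 1] with n cells, estimated on a fine quadrature grid."""
    nodes = np.linspace(0, 1, n + 1)
    x = np.linspace(0, 1, 20001)
    err = f(x) - np.interp(x, nodes, f(nodes))
    return np.sqrt(np.mean(err ** 2))       # RMS ~ L2 norm on [0, 1]

def f(x):
    return np.sin(3 * np.pi * x)            # smooth stand-in kernel slice

errs = [l2_interp_error(f, n) for n in (8, 16, 32, 64)]
rates = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
print(rates)   # approaches 2 per halving of h: the expected O(h^2) rate
```

Observing the predicted rate on the actual learned kernels, rather than a stand-in, is what the revised Section 4 analysis would need to show.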
-
Referee: [§3.1] §3.1 (Architecture): the generalized graph convolutional kernel is cast as a finite-element function, yet the paper does not specify how the graph Laplacian discretization is chosen or whether it is independent of the underlying mesh density. If the effective operator changes with refinement, the diffusion dynamics become resolution-dependent, undermining the central invariance result.
Authors: The graph is constructed from mesh connectivity, yet the convolutional kernel itself is parameterized directly as a function in the finite-element space rather than as a purely discrete matrix. The Laplacian discretization follows the standard weak-form FEM formulation (consistent mass and stiffness matrices), which converges to the continuous operator under refinement. Consequently, the effective diffusion dynamics are resolution-invariant in the function-space sense once the mesh adequately resolves the domain. We have updated Section 3.1 to state this discretization choice explicitly and to note its asymptotic independence from any particular mesh density, supported by the FEM convergence properties already implicit in the kernel representation. revision: yes
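The consistency claim for the weak-form discretization can be illustrated with the standard 1D P1 assembly: the generalized eigenvalues of the stiffness/mass pair converge to those of the continuous Laplacian under refinement. This is a textbook sketch of that convergence, not the paper's implementation:

```python
import numpy as np

def fem_matrices(n):
    """Assemble 1D P1 stiffness (K) and consistent mass (M) matrices on
    [0, 1] with n cells and homogeneous Dirichlet BCs (interior nodes)."""
    h = 1.0 / n
    m = n - 1
    K = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h
    M = (np.diag(4 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) * h / 6
    return K, M

# The discrete operator M^{-1} K approximates -d^2/dx^2; its smallest
# eigenvalue should approach pi^2 ~ 9.8696 under mesh refinement.
lams = []
for n in (8, 32, 128):
    K, M = fem_matrices(n)
    lam = np.min(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    lams.append(lam)
    print(n, lam)
```

The consistent-mass discretization approaches the continuous eigenvalue from above at rate O(h²), which is the asymptotic mesh-independence the rebuttal invokes.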
Circularity Check
No significant circularity in derivation or claims.
Full rationale
The paper proposes a new architecture representing generalized graph convolutional kernels as finite element functions for function-space diffusion models on irregular domains. It supports the resolution-invariance and fidelity claims through unconditional and conditional sampling experiments on diverse geometries including non-convex and multiply-connected domains. No equations or steps in the provided text reduce by construction to self-definitions, fitted parameters renamed as predictions, or load-bearing self-citations; the central results rest on empirical demonstrations rather than tautological inputs.
Axiom & Free-Parameter Ledger
Forward citations
Cited by 1 Pith paper
-
Christoffel-DPS: Optimal sensor placement in diffusion posterior sampling for arbitrary distributions
Christoffel-DPS is a distribution-free optimal sensor placement framework for diffusion posterior sampling that provides non-asymptotic recovery bounds and outperforms Gaussian baselines on non-Gaussian benchmarks.
Reference graph
Works this paper leans on
-
[1]
Anandkumar, A., Azizzadenesheli, K., Bhattacharya, K., Kovachki, N., Li, Z., Liu, B., and Stuart, A. Neural operator: Graph kernel network for partial differential equations. In ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations, 2020.
-
[2]
Baker, E. L., Denker, A., and Frellsen, J. Supervised guidance training for infinite-dimensional diffusion models. arXiv preprint arXiv:2601.20756.
-
[3]
In: Handbook of Uncertainty Quantification. ISBN 978-3-319-12385-1. doi: 10.1007/978-3-319-12385-1.
-
[4]
Fey, M., Lenssen, J. E., Weichert, F., and Müller, H. SplineCNN: Fast geometric deep learning with continuous B-spline kernels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 869–877, 2018.
-
[5]
Gao, H. and Ji, S. Graph U-Nets. In International Conference on Machine Learning, pp. 2083–2092. PMLR, 2019.
-
[6]
Ju, X., Yao, J., Anandkumar, A., Benson, S. M., and Wen, G. Function-space decoupled diffusion for forward and inverse modeling in carbon capture and storage. arXiv preprint arXiv:2602.12274.
-
[7]
Lin, T. Y., Yao, J., Chiang, L., Berner, J., and Anandkumar, A. Decoupled diffusion sampling for inverse problems on function spaces. arXiv preprint arXiv:2601.23280.
-
[8]
Lingsch, L., Michelis, M. Y., De Bézenac, E., Perera, S. M., Katzschmann, R. K., and Mishra, S. Beyond regular grids: Fourier-based neural operators on arbitrary domains. arXiv preprint arXiv:2305.19663.
-
[9]
Monti, F., Boscaini, D., Masci, J., Rodolà, E., Svoboda, J., and Bronstein, M. M. Geometric deep learning on graphs and manifolds using mixture model CNNs. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5425–5434. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.576.
-
[10]
Peebles, W. and Xie, S. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195–4205, 2023.
-
[11]
Simonovsky, M. and Komodakis, N. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 29–38. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.11.
-
[12]
Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
-
[13]
Würth, T., Freymuth, N., Neumann, G., and Kärger, L. Diffusion-based hierarchical graph neural networks for simulating nonlinear solid mechanics. arXiv preprint arXiv:2506.06045.