pith. machine review for the scientific record.

arxiv: 2605.09284 · v1 · submitted 2026-05-10 · 💻 cs.LG · cs.AI · cs.CE · physics.app-ph · physics.comp-ph

Recognition: 2 theorem links


Semi-Supervised Neural Super-Resolution for Mesh-Based Simulations

Authors on Pith · no claims yet

Pith reviewed 2026-05-12 04:06 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · cs.CE · physics.app-ph · physics.comp-ph
keywords semi-supervised learning · super-resolution · mesh-based simulations · message passing neural networks · PDE solving · inductive biases · complementary learning · graph neural networks
0 comments

The pith

SuperMeshNet uses semi-supervised complementary MPNNs to super-resolve mesh simulations with 90% less high-resolution data while beating fully supervised accuracy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Mesh-based simulations of PDEs deliver high accuracy only when run on fine meshes that are computationally expensive. Super-resolution seeks to reconstruct those fine solutions from cheap coarse-mesh runs, yet standard neural approaches require large volumes of costly high-resolution labels. SuperMeshNet trains two complementary message-passing networks jointly on a small set of paired low-to-high examples plus many unpaired low-resolution runs. One network supplies pseudo-supervision to the other from the unpaired data while inductive biases are injected into both models. Experiments show the resulting super-resolved fields have lower error than a fully supervised baseline trained on ten times more high-resolution data.

Core claim

SuperMeshNet is a semi-supervised super-resolution framework for mesh-based PDE simulations built on complementary learning: two MPNN-based models are trained jointly, one leveraging abundant unpaired low-resolution data to provide a supervisory signal to the other, which is itself trained on limited paired low-to-high data. Augmented with inductive biases that improve reconstruction quality, the framework reaches lower RMSE than a fully supervised benchmark while using only 10 percent of the high-resolution training examples.

What carries the argument

Complementary learning between two jointly trained MPNN models that exchange supervisory signals derived from paired LR-HR examples and unpaired LR simulations
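Reduced to a toy sketch, the joint objective might compose as below. The abstract does not spell out the pseudo-labeling rule, so the linear stand-ins F and G and the pseudo-label construction u_h^α + G(u_l^α, u_l^γ) are assumptions for illustration, not the paper's method (the actual models are MPNNs operating on mesh graphs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the two complementary models on an 8-node LR mesh
# and a 32-node HR mesh (the paper's models are MPNNs, not linear maps).
W_f = rng.normal(scale=0.1, size=(32, 8))
W_g = rng.normal(scale=0.1, size=(32, 16))

def F(u_l):
    """Primary model: predict an HR field from its LR counterpart."""
    return W_f @ u_l

def G(u_l_a, u_l_b):
    """Auxiliary model: predict the difference of two HR fields from two LR fields."""
    return W_g @ np.concatenate([u_l_a, u_l_b])

# One paired sample (u_l_a, u_h_a) and one unpaired LR sample u_l_c.
u_l_a, u_h_a = rng.normal(size=8), rng.normal(size=32)
u_l_c = rng.normal(size=8)

# Supervised loss on the scarce paired data.
loss_sup = np.mean((F(u_l_a) - u_h_a) ** 2)

# Pseudo-supervision on the abundant unpaired data: G's predicted HR
# difference converts the known HR field into a pseudo-label for F.
pseudo_h_c = u_h_a + G(u_l_a, u_l_c)
loss_unsup = np.mean((F(u_l_c) - pseudo_h_c) ** 2)

loss = loss_sup + loss_unsup  # joint objective; relative weighting omitted
```

If some construction of this shape holds, every unpaired LR run contributes a training signal for F, which is where the claimed 90% reduction in HR data would come from.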

If this is right

  • High-fidelity mesh simulations become feasible with far fewer expensive high-resolution training runs.
  • Super-resolution post-processing can be applied to existing libraries of cheap low-resolution simulation outputs.
  • Inductive biases can be combined with semi-supervised training to further reduce error without extra labeled data.
  • The same complementary-learning pattern may apply to other graph-structured physical simulation tasks.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach may generalize to other graph neural network architectures used for PDE solving where high-resolution labels are scarce.
  • It could enable real-time or interactive super-resolution inside engineering design loops that currently cannot afford fine-mesh training sets.
  • Consistency between the two complementary models on unpaired data might serve as an implicit regularizer that improves robustness to mesh topology changes.
  • Testing the method on time-dependent or multi-physics simulations would reveal whether the data-efficiency gain persists beyond steady-state problems.

Load-bearing premise

The two jointly trained complementary MPNN models can extract useful supervisory signal from abundant unpaired low-resolution data without introducing systematic biases that degrade super-resolved output on unseen meshes.

What would settle it

Train the model on a new collection of meshes where the distribution of the unpaired low-resolution runs differs markedly from the paired runs; if the RMSE on held-out high-resolution test cases is then higher than that of the fully supervised baseline trained on all available high-resolution data, the data-efficiency claim is falsified.
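In code form, the proposed test reduces to a single RMSE comparison on held-out HR cases; the arrays below are hypothetical placeholders standing in for the two trained models' outputs, not results from the paper.

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error over all nodes and output fields."""
    pred, target = np.asarray(pred), np.asarray(target)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Hypothetical held-out HR ground truth and predictions (placeholders only):
# SuperMeshNet trained with a distribution-shifted unpaired LR pool, versus
# the fully supervised baseline trained on all available HR data.
u_h_test = np.zeros(100)
pred_shifted_semi = np.full(100, 0.1)
pred_full_sup = np.full(100, 0.3)

# The data-efficiency claim is falsified iff the semi-supervised model's
# held-out RMSE exceeds the fully supervised baseline's.
claim_falsified = rmse(pred_shifted_semi, u_h_test) > rmse(pred_full_sup, u_h_test)
```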

Figures

Figures reproduced from arXiv: 2605.09284 by Jiyeon Kim, Won-Yong Shin, Youngjoon Hong.

Figure 2
Figure 2: Dataset setting. Complementary learning utilizes a paired LR–HR training dataset, including Nh paired data samples (green hexagons), and an unpaired LR training dataset, containing N − Nh unpaired LR data samples (white hexagons). In total, complementary learning reduces the required HR data by N − Nh samples compared with fully supervised learning. view at source ↗
Figure 1
Figure 1: Problem setting. We aim to predict ûh on HR mesh Mh, containing nodes at positions Ph and edges Eh, from LR data sample ul defined on LR mesh Ml, comprising nodes at positions Pl and edges El. view at source ↗
Figure 3
Figure 3: A schematic overview of complementary learning in SuperMeshNet. It first samples paired LR–HR data (u_l^α, u_h^α), (u_l^β, u_h^β) and unpaired LR data u_l^γ. Complementary learning leverages both supervised and unsupervised learning to jointly train two neural network models, Fθ and Gϕ. Fθ predicts an HR solution from its LR counterpart, while Gϕ predicts the difference between two HR solutions from two… view at source ↗
Figure 5
Figure 5: Model architecture of Gϕ. view at source ↗
Figure 6
Figure 6: Training time increase (left) and data generation time decrease (right), resulting from the use of SuperMeshNet (Nh = 20, N = 200), relative to fully supervised learning (Nh = N = 200) on Dataset 1 and its mesh-size variants. All experiments use MGN as the underlying MPNN architecture. view at source ↗
Figure 7
Figure 7: Comparison of the squared error of pressure between SuperMeshNet and fully supervised baselines on a real-world geometry dataset. Here, Nh and N denote the numbers of HR and LR data samples, respectively. For all cases, MGN is used as the underlying MPNN. view at source ↗
Figure 9
Figure 9: Comparison of the LR input, ground truth HR data, and two vorticity predictions produced by SuperMeshNet and full supervision on the time-dependent PDE dataset 2. Here, Nh and N represent the number of HR and LR data samples, respectively. For all cases, MGN is utilized as the underlying MPNN. view at source ↗
Figure 10
Figure 10: Schematic illustration of kNN interpolation with k = 3. Yellow nodes belong to the target mesh, blue nodes to the source mesh, and the darker blue nodes indicate the k nearest neighbors of the darker yellow node. Given the positions of the k nearest source nodes p_i (1 ≤ i ≤ k), their corresponding values y_i, and the target node position p_0, the value at the target node y_0 can be estimated via weighted averaging. view at source ↗
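The interpolation rule in the caption above can be sketched directly; this is an illustrative reimplementation of inverse-squared-distance kNN averaging, not the authors' code, and the zero-distance clamp is an added safeguard.

```python
import numpy as np

def knn_interpolate(p0, src_pos, src_val, k=3):
    """Estimate the value at target position p0 by inverse-squared-distance
    weighting over the k nearest source nodes:
    y0 = sum(w_i * y_i) / sum(w_i), with w_i = 1 / d(p0, p_i)^2."""
    d = np.linalg.norm(src_pos - p0, axis=1)    # distances to all source nodes
    idx = np.argsort(d)[:k]                     # indices of the k nearest
    w = 1.0 / np.maximum(d[idx], 1e-12) ** 2    # clamp avoids division by zero
    return float(np.sum(w * src_val[idx]) / np.sum(w))
```

With k = 2 and a target midway between two source nodes, the weights are equal and the result is their plain average, as the weighting formula implies.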
Figure 11
Figure 11: The schematic overview of the primary model Fθ. The model Fθ aims to predict û_h^q, targeting HR data sample u_h^q, from LR data sample u_l^q. The LR data sample u_l^q is input to Fθ as part of the node features ⁰X_l^q of an input graph g_l^q. The role of Fθ is to transform LR data into HR data, which is conducted by the lowermost upsampler in… view at source ↗
Figure 12
Figure 12: The schematic overview of the auxiliary model Gϕ. Gϕ aims to predict û_h^rs, targeting the difference between two input LR data samples u_l^r and u_l^s. The two input LR data samples u_l^r and u_l^s are fed into Gϕ as parts of the node features of two input graphs g_l^r and g_l^s, respectively. In order to reduce computational cost, Fθ and Gϕ share a feature extractor in… view at source ↗
Figure 13
Figure 13: Examples of LR and HR data samples with various angles of applied force relative to the x-axis from Dataset 1. view at source ↗
Figure 14
Figure 14: Examples of LR and HR data samples with various ratios between the lengths of the major and minor axes from Dataset 2. view at source ↗
Figure 15
Figure 15: Examples of LR and HR data samples with various ratios between the lengths of the major and minor axes from Dataset 3. view at source ↗
Figure 16
Figure 16: An example of LR and HR data samples corresponding to an angle of attack of 0° from the real-world geometry dataset. view at source ↗
Figure 17
Figure 17: Example velocity magnitudes of LR and HR data samples corresponding to multiple timestamps from the time-dependent PDE dataset 1. view at source ↗
Figure 18
Figure 18: Examples of LR and HR data samples corresponding to multiple timestamps from the time-dependent PDE dataset 2. view at source ↗
Figure 19
Figure 19: Mesh convergence tests of stress and electric field in high-concentration regions across three FEM datasets. view at source ↗
Figure 20
Figure 20: Mesh convergence tests of the drag and lift coefficients for the real-world geometry dataset. view at source ↗
Figure 21
Figure 21: Mesh convergence tests of the mean and amplitude of drag coefficients, and the amplitude of lift coefficients, for the time-dependent PDE dataset 1. view at source ↗
Figure 22
Figure 22: Relationship between the mean of the input and the ground truth output for the super-resolution task. view at source ↗
Figure 23
Figure 23: Effect of inductive biases on the loss landscape for the super-resolution task when SuperMeshNet with and without inductive biases (corresponding to w/ IB and w/o IB, respectively) is used. Here, MGN is employed as the MPNN for each method and is trained with Nh = 20 and N = 200 on Dataset 1. view at source ↗
Figure 24
Figure 24: Relationship between the mean of the input and the ground truth output for the norm prediction task. view at source ↗
Figure 25
Figure 25: Effect of inductive biases on the loss landscape for the norm prediction task when SuperMeshNet with and without inductive biases (corresponding to w/ IB and w/o IB, respectively) is used. Here, MGN is employed as the MPNN for each method and is trained with Nh = 20 and N = 200 on Dataset 1. view at source ↗
Figure 26
Figure 26: Training time increase (left) and data generation time decrease (right), resulting from the use of SuperMeshNet (Nh = 20, N = 200), relative to fully supervised learning (Nh = N = 200) on Dataset 1 and its mesh-size variants. All experiments use MGN as the underlying MPNN architecture. view at source ↗
Figure 27
Figure 27: Training time increase (left) and data generation time decrease (right), resulting from the use of SuperMeshNet (Nh = 20, N = 200), relative to fully supervised learning (Nh = N = 200) on Dataset 2 and its mesh-size variants. All experiments use MGN as the underlying MPNN architecture. view at source ↗
Figure 28
Figure 28: Training time increase (left) and data generation time decrease (right), resulting from the use of SuperMeshNet (Nh = 20, N = 200), relative to fully supervised learning (Nh = N = 200) on Dataset 3 and its mesh-size variants. All experiments use MGN as the underlying MPNN architecture. view at source ↗
Figure 29
Figure 29: SuperMeshNet substantially reduces the computational cost of HR simulation: the combined cost of LR simulation plus subsequent SuperMeshNet inference is significantly lower than that of HR simulation, and the time saving becomes even more pronounced for small mesh sizes, where the cost of HR simulation grows rapidly. view at source ↗
Figure 30
Figure 30: LR input, ground truth HR data, and two vorticity predictions produced by SuperMeshNet and full supervision for the time-dependent PDE dataset 2 over multiple timestamps. Here, Nh and N represent the number of HR and LR data samples, respectively. For all cases, MGN is utilized as the underlying MPNN. view at source ↗
Figure 31
Figure 31: Loss curves for five different random seeds, each shown in a different color. For all cases, the MGN-based SuperMeshNet is trained using Nh = 20 and N = 200 from Dataset 1. view at source ↗
Figure 32
Figure 32: Stability analysis via controlled pseudo-label perturbations. For all cases, the MGN-based SuperMeshNet is trained using Nh = 20 and N = 200 from Dataset 1. view at source ↗
Figure 33
Figure 33: Examples of LR and HR data samples from the dataset including a singular point. view at source ↗
Figure 34
Figure 34: Comparison of predictions by SuperMeshNet and pure kNN interpolation. view at source ↗
read the original abstract

Mesh-based simulations provide high-fidelity solutions to partial differential equations (PDEs), but achieving such accuracy typically requires fine meshes, leading to substantial computational overhead. Super-resolution techniques aim to mitigate this cost by reconstructing high-resolution (HR), high-fidelity solutions from low-cost, low-resolution (LR) counterparts. However, training neural networks for super-resolution often demands large amounts of expensive HR supervision data. To address this challenge, we propose SuperMeshNet, an HR data-efficient super-resolution framework for mesh-based simulations aided by message passing neural networks (MPNNs). At its core, SuperMeshNet introduces complementary learning, a semi-supervised approach that effectively leverages both 1) a small amount of paired LR-HR data and 2) abundant unpaired LR data via two jointly trained, complementary MPNN-based models. Additionally, our model is enriched by inductive biases, which are empirically shown to further improve super-resolution performance. Extensive experiments demonstrate that SuperMeshNet requires 90% less HR data to achieve even lower root mean square error (RMSE) than that of the fully supervised benchmark without the inductive biases. The source code and datasets are available at https://github.com/jykim-git/SuperMeshNet.git.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript introduces SuperMeshNet, a semi-supervised super-resolution framework for mesh-based PDE simulations. It uses two jointly trained complementary MPNN models that leverage a small amount of paired LR-HR data together with abundant unpaired LR data, augmented by inductive biases. The central empirical claim is that the method achieves lower RMSE than a fully supervised benchmark while requiring only 10% of the HR training data.

Significance. If the performance claims are robust, the work could meaningfully reduce the cost of generating high-fidelity training data for neural surrogates in computational science. The public release of code and datasets is a clear strength that supports reproducibility and follow-on work.

major comments (1)
  1. [Experimental evaluation] The fully supervised benchmark explicitly omits the inductive biases present in SuperMeshNet. No ablation is reported that trains the bias-equipped architecture in a purely supervised regime on the identical 10% paired HR data (without the second complementary model or unpaired LR data). This comparison is required to determine whether the semi-supervised complementary learning, rather than the inductive biases alone, is responsible for the reported data-efficiency and RMSE improvement.
minor comments (2)
  1. The abstract and methods would benefit from an explicit statement of how the inductive biases are realized inside the MPNN layers (e.g., specific message-passing rules or architectural constraints).
  2. Table or figure captions should clearly indicate the exact fraction of HR data used in each compared method so that the 90% reduction claim can be verified at a glance.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The suggestion to clarify the source of performance gains through an additional ablation is well-taken and will strengthen the experimental section. We address the major comment below and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Experimental evaluation] The fully supervised benchmark explicitly omits the inductive biases present in SuperMeshNet. No ablation is reported that trains the bias-equipped architecture in a purely supervised regime on the identical 10% paired HR data (without the second complementary model or unpaired LR data). This comparison is required to determine whether the semi-supervised complementary learning, rather than the inductive biases alone, is responsible for the reported data-efficiency and RMSE improvement.

    Authors: We agree that this ablation is necessary to isolate the contribution of the semi-supervised complementary learning from the inductive biases. In the submitted manuscript, the fully supervised baseline was implemented without inductive biases to provide a standard comparison point from the literature, while SuperMeshNet combines both the biases and the semi-supervised training. To directly address the concern, we will include in the revised manuscript results for the bias-equipped MPNN architecture trained in a purely supervised regime on the same 10% paired HR data (without the complementary model or unpaired LR data). This will enable a clearer attribution of the observed RMSE improvements and data efficiency. revision: yes

Circularity Check

0 steps flagged

No significant circularity; empirical method with independent experimental validation

full rationale

The paper describes an empirical semi-supervised training procedure for mesh super-resolution using complementary MPNN models and added inductive biases. No equations, derivations, or predictions are presented that reduce the reported RMSE or data-efficiency claims to fitted constants or self-referential definitions by construction. Performance is evaluated via standard train/test splits on simulation datasets against external benchmarks, with no load-bearing self-citations or ansatzes that collapse the central result to its inputs. The method is self-contained as a practical ML framework rather than a closed-form theoretical derivation.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The central claim rests on the empirical effectiveness of the complementary training procedure and the chosen inductive biases; no explicit free parameters, axioms, or invented physical entities are described in the abstract.

pith-pipeline@v0.9.0 · 5527 in / 1101 out tokens · 28063 ms · 2026-05-12T04:06:40.978645+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

8 extracted references · 8 canonical work pages

  1. [1] Brefeld, U., Gärtner, T., Scheffer, T., and Wrobel, S. Efficient co-regularised least squares regression. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.

  2. [2] Guo, Y., Song, J., Cao, X., Zhao, C., and Leng, H. Physics field super-resolution reconstruction via enhanced diffusion model and Fourier neural operator. Theoretical and Applied Mechanics Letters, 15(5):100604, 2025. doi: 10.1016/j.taml.2025.100604.

  3. [3] Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.

  4. [4] Obiols-Sales, O., Vishnu, A., Malaya, N. P., and Chandramowlishwaran, A. SURFNet: Super-resolution of turbulent flows with transfer learning using small datasets. In Proceedings of the 30th International Conference on Parallel Architectures and Compilation Techniques (PACT '21), 2021. doi: 10.1115/1.4053671.

  5. [5] Ribeiro, B. A., Ribeiro, J. A., Ahmed, F., Penedones, H., Belinha, J., Sarmento, L., Bessa, M. A., and Tavares, S. SimuStruct: Simulated structural plate with holes dataset with machine learning applications.

  6. [6] Santurkar, S., Tsipras, D., Ilyas, A., and Mądry, A. How does batch normalization help optimization? In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS 2018), pp. 2488–2498. Curran Associates Inc.

  7. [7] Tarvainen, A. and Valpola, H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017), pp. 1195–1204. Curran Associates Inc.

  8. [8] Wetzel, S. J., Melko, R. G., and Tamblyn, I. Twin neural network regression is a semi-supervised regression algorithm. Machine Learning: Science and Technology, 3(4):045007, 2022. doi: 10.1088/2632-2153/ac9885.