pith. machine review for the scientific record.

arxiv: 2605.08843 · v1 · submitted 2026-05-09 · 💻 cs.AI · cs.LG

Recognition: 2 theorem links


M³: Reframing Training Measures for Discretized Physical Simulations

Authors on Pith · no claims yet

Pith reviewed 2026-05-12 02:15 UTC · model grok-4.3

classification 💻 cs.AI cs.LG
keywords neural surrogate models · physical simulations · discretized data · Morton ordering · multi-scale measures · operator learning · data bias · volumetric predictions

The pith

M³ mitigates bias from uneven discretization in physical simulation training by using multi-scale Morton measures to balance supervision, leading to substantially more accurate predictions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Discretized samples of continuous physical domains create an empirical measure that unevenly supervises neural surrogate models, biasing optimization toward over-represented regions and producing inconsistent physical predictions. The paper proposes M³ to counter this by partitioning space according to physical variation and distributing supervision across scales with Morton ordering. This rebalancing yields up to 4.7 times lower error on large volumetric cases and allows models trained on heavily subsampled data to beat those trained on denser grids. A reader should care because it reframes data distribution as a controllable factor that can improve physical fidelity without extra computation.
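The bias mechanism is easy to see in a toy setting. Below is a minimal sketch (our construction, not the paper's setup): points sampled from a refined mesh crowd one region, so an unweighted fit drifts toward it, while weighting each point by the cell size it represents recovers the continuous-domain average.

```python
import numpy as np

# Toy 1D illustration of measure-induced bias (our construction, not the
# paper's setup). A solver refines near x = 0, so training points crowd
# that region; fitting under the raw empirical (point-count) measure is
# then pulled toward the over-resolved region.
edges = np.linspace(0.0, 1.0, 201) ** 2      # quadratic spacing: dense near 0
x = 0.5 * (edges[:-1] + edges[1:])           # cell centers = training points
w = np.diff(edges)                           # cell sizes = physical measure

f = np.where(x < 0.1, 1.0, 0.0)              # target field, Lebesgue mean 0.1

c_empirical = f.mean()                       # uniform weight per point
c_weighted = np.average(f, weights=w)        # weight by represented cell size

print(round(c_empirical, 3), round(c_weighted, 3))  # ~0.315 vs ~0.099
```

The point-count measure overestimates the Lebesgue average by roughly 3×, purely because the discretization concentrates samples where the solver refined.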

Core claim

The central claim is that the measure-induced bias in discretized training data for physical simulations can be mitigated by M³, a multi-scale Morton measure that partitions space by physical variation and allocates supervision across multiple scales. This approach consistently improves continuous-domain predictions across diverse industrial datasets, with gains that hold even when training data is reduced by orders of magnitude.

What carries the argument

Multi-scale Morton Measure (M³), which uses Morton ordering to partition and balance the training measure across scales according to physical variation.
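Morton (Z-order) keys are the concrete machinery here: interleaving the bits of integer grid coordinates yields a 1D key whose prefixes correspond to nested octree cells, so sorting and truncating keys gives cheap multi-scale spatial partitions. A minimal sketch of the standard encoding (not the paper's code):

```python
def part1by2(n: int) -> int:
    """Spread the low 10 bits of n so they occupy every third bit position."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0xFF0000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton3d(ix: int, iy: int, iz: int) -> int:
    """Interleave three 10-bit grid indices into one Z-order (Morton) key.
    Sorting cells by this key places spatially nearby cells close together,
    which makes prefix-based partitioning into octree nodes cheap."""
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

# Truncating a Morton key drops low-order bits, i.e. coarsens the cell:
# two points whose keys share a prefix lie in the same coarse octree node.
a = morton3d(5, 5, 5)
b = morton3d(4, 4, 4)
print((a >> 3) == (b >> 3))  # siblings at the next-coarser scale
```

Shifting off one bit triple per level is what makes the "multi-scale" part of a Morton-based measure nearly free computationally.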

If this is right

  • Predictions in the continuous physical domain become more accurate and spatially consistent.
  • M³-trained models achieve lower error than standard training even with aggressive data subsampling.
  • Models can outperform those trained on higher-resolution data using only a fraction of the points.
  • Data distribution emerges as a critical factor in operator learning for physics.
  • M³ provides a scalable, data-efficient method for physically consistent modeling.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If discretization bias is the dominant issue, M³ could generalize to other surrogate modeling tasks involving geometric or volumetric data.
  • Combining M³ with adaptive sampling during training might further reduce the need for dense initial datasets.
  • Similar measure-reframing ideas could address biases in other machine learning applications with non-uniform data distributions.

Load-bearing premise

That partitioning space according to physical variation, and allocating supervision across multiple scales via Morton ordering, will reliably mitigate the measure-induced bias without introducing new spatial inconsistencies and without requiring domain-specific tuning.
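How this premise might cash out in code, as a hypothetical reconstruction from the abstract alone (the rank-based scoring and even-split allocation below are our stand-ins, not the paper's algorithm):

```python
import numpy as np

# Hypothetical sketch: (1) score each cell by local physical variation,
# (2) stratify cells into scales by score, (3) split a fixed sample
# budget evenly across scales, then across cells within each scale.
# The scoring rule and split policy are assumptions, not the paper's S_c.
rng = np.random.default_rng(0)

def allocate_budget(variation: np.ndarray, n_scales: int, budget: int) -> np.ndarray:
    """Return a per-cell sample count balancing supervision across scales."""
    # Rank-based quantile stratification (stand-in for the paper's grouping).
    ranks = variation.argsort().argsort()
    strata = np.minimum((ranks * n_scales) // len(variation), n_scales - 1)
    counts = np.zeros(len(variation), dtype=int)
    per_scale = budget // n_scales
    for s in range(n_scales):
        cells = np.flatnonzero(strata == s)
        base, extra = divmod(per_scale, len(cells))
        counts[cells] += base          # even split within the stratum
        counts[cells[:extra]] += 1     # distribute the remainder
    return counts

variation = rng.gamma(shape=0.5, size=1000)  # heavy-tailed, like mesh refinement
counts = allocate_budget(variation, n_scales=4, budget=8192)
print(counts.sum())  # full budget spent: 8192
```

Under this reading, high-variation strata no longer receive supervision in proportion to how densely the solver happened to resolve them, which is the rebalancing the premise asserts.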

What would settle it

If M³ fails to reduce the physics-weighted relative L₂ error on a new industrial-scale volumetric dataset compared to standard uniform sampling, or if the improvement disappears under subsampling, the central claim would not hold.
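The referee notes below that this metric is never defined in the reviewed material; a common form, assumed here only for illustration, weights each point's squared error by the physical measure (e.g., cell volume) it represents:

```python
import math

# Hedged sketch of a "physics-weighted relative L2 error"; the paper's
# exact weighting may differ. w_i is the physical measure carried by
# point i, e.g. its cell volume.
def weighted_rel_l2(pred, true, w):
    """sqrt( sum_i w_i (pred_i - true_i)^2 / sum_i w_i true_i^2 )"""
    num = sum(wi * (p - t) ** 2 for wi, p, t in zip(w, pred, true))
    den = sum(wi * t ** 2 for wi, t in zip(w, true))
    return math.sqrt(num / den)

true = [1.0, 2.0, 2.0]
pred = [1.0, 2.0, 1.0]
w = [0.5, 0.25, 0.25]  # cell volumes of an uneven discretization
print(weighted_rel_l2(pred, true, w))
```

The weighting matters precisely because of the paper's thesis: an unweighted L₂ error scores the model on the empirical measure, while the weighted form scores it on the continuous domain.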

Figures

Figures reproduced from arXiv: 2605.08843 by Naoya Takeishi, Xiaowen Song, Xingyu Song, Yuan Mei.

Figure 1. Overview of the M³ pipeline. Starting from a non-uniform discretization, M³ constructs a variation-adaptive Morton partition, groups cells into multi-scale strata using the intensity score Sc, and allocates the training budget across scales and cells to obtain a structured empirical training measure µS.
Figure 2. Sampling comparison (Random vs. M³, 8192 points). Random sampling inherits the density bias of the original simulation data, whereas M³ constructs a physically informed distribution that mitigates excessive concentration in over-resolved regions (e.g., boundary layers) and captures richer physical phenomena. Additional visualizations are provided in Appendix E.4.
Figure 3. Spatial error maps at full resolution. White denotes zero error, while deeper red/blue indicate larger signed deviations. In the panels shown, R-trained models display more localized extremes, whereas M³ often yields smoother error fields. See more corresponding results in Appendix E.5.
Figure 4. Full-resolution predictions for models trained under different data scales. Surface fields (wall shear stress, WSS): Random (×1/×0.1) vs. M³ (×0.1/×0.1). Volume fields (velocity): Random (×0.1/×0.1) vs. M³ (×0.1/×0.01).
Figure 5. Efficiency comparison across methods under increasing scale N. Left: wall-clock time under a 600 s limit. The neighborhood-based method (kNN) times out beyond N ≈ 10⁶, while M³ remains efficient across all scales. Grid- and proxy-based methods remain feasible but become significantly slower at large N. Right: peak RSS. Most methods remain within a similar memory range, while grid- and proxy-based methods grow m…
Figure 6. Mesh of SHIFT-Wing. The mesh exhibits strong anisotropy, originating from AMR, where high-gradient regions are densely resolved.
Figure 7. Mesh of AhmedML. The surface mesh is nearly uniform, and variation in volume cell size is relatively mild.
Figure 8. Mesh of DrivAerML. The contrast between coarse and fine cells is highly pronounced, resulting in a strongly non-uniform data distribution.
Figure 9. Surface cells for the three datasets using a fixed scale bin K = 64. The final bin (65) contains cells with only one point.
Figure 10. Volume cells for SHIFT-Wing, colored by geometric size.
Figure 11. Surface cell visualization for DrivAerML with scale bin = 8 for improved clarity. Cells are adaptively refined in regions with large physical variation, resulting in a finer partition where needed. The resulting cells jointly form a coarse-grained representation of the full-resolution data.
Figure 12. Volume cells for AhmedML, colored by geometric size.
Figure 13. Volume cells for DrivAerML, colored by geometric size.
Figure 14. Statistical summaries for SHIFT-Wing. Left axis: counts of cells and points; right axis: percentage of the total.
Figure 15. Statistical summaries for AhmedML. Left axis: counts of cells and points; right axis: percentage of the total.
Figure 16. Statistical summaries for DrivAerML. Left axis: counts of cells and points; right axis: percentage of the total.
Figure 17. Scale-stratified cell visualization for DrivAerML. Eight consecutive scales are selected from the Step 2 multi-scale grouping results. Each scale groups cells whose points share similar physical variations, enabling geometrically different cells to be assigned to the same scale.
Figure 18. Sampling comparison on surface pressure (1).
Figure 19. Sampling comparison on surface pressure (2).
Figure 20. Sampling comparison on surface shear stress (1).
Figure 21. Sampling comparison on surface shear stress (2). Point concentration is observed in the tire region under random sampling due to locally dense mesh discretization.
Figure 22. Ground truth with 160M volume points. Minor visual differences in the sampling results are attributable to rendering.
Figure 23. Uniform random sampling results in the volume field. The distribution follows the mesh density, leading to concentration near boundary layers (region highlighted in yellow).
Figure 24. M³ sampling results in the volume field. The distribution is more uniform across the domain while preserving key volumetric features, more closely matching the full-resolution ground-truth distribution.
Figure 25. Comparison of lift and drag predictions between randomly trained and M³-trained models across held-out cases (50 for SHIFT-Wing and AhmedML, 49 for DrivAerML). As key aerodynamic quantities, these results show that physics-weighted metrics are consistent with ground-truth coefficients.
Figure 26. Field prediction results for SHIFT-Wing.
Figure 27. Field prediction results for AhmedML.
Figure 28. Field prediction results for DrivAerML.
read the original abstract

Neural surrogate models for physical simulations are trained on discretized samples of continuous domains, where the induced empirical measure leads to uneven supervision, biasing optimization and causing spatial inconsistencies in physical fidelity. To mitigate this measure-induced bias, we propose M$^3$ (Multi-scale Morton Measure), a scalable framework that balances training measures by partitioning space according to physical variation and allocating supervision across multiple scales. Applied to three industrial-scale datasets with diverse discretizations, M$^3$ consistently improves predictions in the continuous physical domain, achieving up to 4.7$\times$ lower error in large-scale volumetric cases. These gains persist under aggressive subsampling (160M $\rightarrow$ 16M $\rightarrow$ 1.6M points), where M$^3$-trained models outperform those trained on higher-resolution data, reducing physics-weighted relative $L_2$ error by 3--4$\times$ and the corresponding MSE by up to 13$\times$. These results highlight data distribution as a key factor in operator learning and position M$^3$ as a scalable, data-efficient approach for physically consistent modeling.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that the empirical measure induced by discretized samples biases neural surrogate models for physical simulations, causing uneven supervision and spatial inconsistencies in physical fidelity. It proposes M³ (Multi-scale Morton Measure), a framework that mitigates this by partitioning space according to physical variation and allocating supervision across multiple scales via Morton ordering. Empirical results on three industrial-scale datasets with diverse discretizations show consistent improvements, including up to 4.7× lower error in large-scale volumetric cases; these gains persist under aggressive subsampling (160M → 16M → 1.6M points), where M³-trained models outperform those trained on higher-resolution data by reducing physics-weighted relative L₂ error 3–4× and MSE up to 13×.

Significance. If the results hold, the work is significant for demonstrating that training data distribution is a key, often overlooked factor in operator learning for physical simulations. It offers a scalable, data-efficient alternative to simply increasing resolution, with potential to improve physical consistency in surrogate models for industrial applications. The validation across multiple large datasets and subsampling regimes provides practical evidence of utility, though the absence of machine-checked proofs or parameter-free derivations means the assessment rests on empirical reproducibility.

major comments (3)
  1. [Abstract and §3 (Methods)] The central mechanism partitions space 'according to physical variation' before applying Morton ordering, but no precise, reproducible definition or algorithm for computing this variation is given. Without specifying whether it relies on simulation-specific quantities (e.g., gradients, curvatures, or field magnitudes) or a domain-agnostic metric, it is impossible to verify that the approach avoids implicit domain tuning, which directly undermines the claim of generality across diverse discretizations.
  2. [Results (subsampling experiments)] The headline claims of 3–4× reduction in physics-weighted relative L₂ error and up to 13× in MSE under 100× subsampling lack reported details on the exact partitioning algorithm, baseline implementations, error-bar reporting, or statistical significance tests. This makes it difficult to attribute the gains specifically to measure correction rather than experimental setup, which is load-bearing for the data-efficiency conclusion.
  3. [§4 (Discussion/Experiments)] No ablations are described on the contribution of the multi-scale Morton allocation versus the partitioning step alone, nor on sensitivity to the number of scales. Such controls would be necessary to confirm that the reported improvements stem from rebalancing the empirical measure without introducing new spatial inconsistencies.
minor comments (2)
  1. [Abstract] The term 'physics-weighted relative L₂ error' is used without a definition or cross-reference to the relevant equation or section; a brief inline clarification would aid readability.
  2. [Figures/Tables] Ensure all visualizations of error metrics include explicit definitions of the weighting and any confidence intervals; the current captions leave some notation ambiguous.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their thorough review and positive assessment of the significance of our work on M³ for addressing training measure bias in physical simulation surrogates. We address each of the major comments below by providing clarifications and committing to revisions that enhance the manuscript's clarity, reproducibility, and completeness. We believe these changes will satisfactorily address the concerns raised.

read point-by-point responses
  1. Referee: [Abstract and §3 (Methods)] The central mechanism partitions space 'according to physical variation' before applying Morton ordering, but no precise, reproducible definition or algorithm for computing this variation is given. Without specifying whether it relies on simulation-specific quantities (e.g., gradients, curvatures, or field magnitudes) or a domain-agnostic metric, it is impossible to verify that the approach avoids implicit domain tuning, which directly undermines the claim of generality across diverse discretizations.

    Authors: We agree that a more precise and reproducible definition of the physical variation metric is necessary for full verification of the method's generality. In the revised manuscript, we will expand the description in §3 to include a detailed, algorithmic specification of how physical variation is computed for partitioning, ensuring it is domain-agnostic and free of implicit tuning. This will directly support the generality claim demonstrated empirically across the diverse datasets. revision: yes

  2. Referee: [Results (subsampling experiments)] The headline claims of 3–4× reduction in physics-weighted relative L₂ error and up to 13× in MSE under 100× subsampling lack reported details on the exact partitioning algorithm, baseline implementations, error-bar reporting, or statistical significance tests. This makes it difficult to attribute the gains specifically to measure correction rather than experimental setup, which is load-bearing for the data-efficiency conclusion.

    Authors: We concur that providing these details is essential to substantiate the subsampling results. The revised manuscript will include the exact partitioning algorithm (cross-referenced with the expanded §3), fuller descriptions of the baseline implementations, error bars from repeated experiments, and statistical significance testing to confirm the reported improvements are attributable to M³. revision: yes

  3. Referee: [§4 (Discussion/Experiments)] No ablations are described on the contribution of the multi-scale Morton allocation versus the partitioning step alone, nor on sensitivity to the number of scales. Such controls would be necessary to confirm that the reported improvements stem from rebalancing the empirical measure without introducing new spatial inconsistencies.

    Authors: We recognize the importance of these ablations for isolating the contributions of each component. In the revised version, we will add ablation studies in §4 comparing the full M³ approach to variants using only partitioning or only multi-scale allocation, along with sensitivity analysis to the number of scales. These will help verify that the gains come from measure rebalancing. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical gains from defined method on external data

full rationale

The paper defines M³ as a partitioning-plus-Morton-ordering procedure to rebalance the empirical measure induced by discretization. All reported improvements (error reductions, subsampling robustness) are presented as outcomes of applying this procedure to three external industrial datasets and comparing against baselines. No equation, prediction, or first-principles claim is shown to be equivalent to a fitted quantity or self-citation by construction; the central mechanism is an explicit algorithmic choice whose effect is measured, not presupposed. The derivation chain is therefore self-contained and non-circular.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Abstract-only review limits visibility into exact parameters; the core premise relies on the domain assumption that discretization induces measurable bias correctable by scale-aware partitioning.

axioms (1)
  • domain assumption: Discretized samples of continuous domains induce an empirical measure that biases optimization and causes spatial inconsistencies in physical fidelity.
    Explicitly stated as the motivating problem in the abstract.

pith-pipeline@v0.9.0 · 5497 in / 1252 out tokens · 30149 ms · 2026-05-12T02:15:04.270755+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

40 extracted references · 40 canonical work pages

  1. [1] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In International Conference on Learning Representations, 2021.
  2. [2] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 2023.
  3. [3] Benedikt Alkin, Maurits Bleeker, Richard Kurle, Tobias Kronlachner, Reinhard Sonnleitner, Matthias Dorfer, and Johannes Brandstetter. AB-UPT: Scaling neural CFD surrogates for high-fidelity automotive aerodynamics simulations via anchored-branched universal physics transformers. Transactions on Machine Learning Research, 2025.
  4. [4] Benedikt Alkin, Richard Kurle, Louis Serrano, Dennis Just, and Johannes Brandstetter. AB-UPT for automotive and aerospace applications. arXiv preprint arXiv:2510.15808, 2025.
  5. [5] Neil Ashton, Danielle C. Maddix, Samuel Gundry, and Parisa Shabestari. AhmedML: High-fidelity computational fluid dynamics dataset for incompressible, low-speed bluff body aerodynamics. arXiv preprint arXiv:2407.20801, 2024.
  6. [6] Neil Ashton, Christopher Mockett, Matthias Fuchs, Lucas Fliessbach, Henry Hetmann, Thilo Knacke, Nils Schönwald, Vasilios Skaperdas, Georgios Fotiadis, André Walle, Bastian Hupertz, and Danielle C. Maddix. DrivAerML: High-fidelity computational fluid dynamics dataset for road-car external aerodynamics. arXiv preprint arXiv:2408.11969, 2024.
  7. [7] Luminary Cloud. SHIFT-Wing: High-fidelity computational fluid dynamics dataset for transonic wing external aerodynamics. https://huggingface.co/datasets/luminary-shift/WING/, 2025.
  8. [8] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhong, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 2021.
  9. [9] Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia. Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, 2021.
  10. [10] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter W. Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, 2020.
  11. [11] Johannes Brandstetter, Daniel E. Worrall, and Max Welling. Message passing neural PDE solvers. In International Conference on Learning Representations, 2022.
  12. [12] Zijie Li, Dule Shu, and Amir Barati Farimani. Scalable transformer for PDE surrogate modeling. In Advances in Neural Information Processing Systems, 2023.
  13. [13] Zhongkai Hao, Chengyang Ying, Zhengyi Wang, Su Hang, Yinpeng Dong, Songming Liu, Ze Cheng, Jun Zhu, and Jian Song. GNOT: A general neural operator transformer for operator learning. arXiv preprint arXiv:2302.14376, 2023.
  14. [14] Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, and Mingsheng Long. Transolver: A fast transformer solver for PDEs on general geometries. In International Conference on Machine Learning, 2024.
  15. [15] Shizheng Wen, Arsh Kumbhat, Levi Lingsch, Sepehr Mousavi, Yizhou Zhao, Praveen Chandrashekar, and Siddhartha Mishra. Geometry-aware operator transformer as an efficient and accurate neural surrogate for PDEs on arbitrary domains. arXiv preprint arXiv:2505.18781, 2025.
  16. [16] Hang Zhou, Haixu Wu, Haonan Shangguan, Yuezhou Ma, Huikun Weng, Jianmin Wang, and Mingsheng Long. Transolver-3: Scaling up transformer solvers to industrial-scale geometries. arXiv preprint arXiv:2602.04940, 2026.
  17. [17] Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, and Lu Lu. A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 2022.
  18. [18] Zhenao Song. RL-PINNs: Reinforcement learning-driven adaptive sampling for efficient training of PINNs. arXiv preprint arXiv:2504.12949, 2025.
  19. [19] Rémi Bardenet, Subhro Ghosh, and Meixia Lin. Determinantal point processes for sampling minibatches in SGD. In Advances in Neural Information Processing Systems, 2021.
  20. [20] Cheng Zhang, Hedvig Kjellström, and Stephan Mandt. Determinantal point processes for mini-batch diversification. arXiv preprint arXiv:1705.00607, 2017.
  21. [21] Weijie Liu, Hui Qian, Chao Zhang, Zebang Shen, Jiahao Xie, and Nenggan Zheng. Accelerating stratified sampling SGD by reconstructing strata. In International Joint Conference on Artificial Intelligence, 2020.
  22. [22] Siddharth Gopal. Adaptive sampling for SGD by exploiting side information. In International Conference on Machine Learning, 2016.
  23. [23] Marsha J. Berger and Phillip Colella. Local adaptive mesh refinement for shock hydrodynamics. Journal of Computational Physics, 1984.
  24. [24] Masanari Kimura and Hideitsu Hino. A short survey on importance weighting for machine learning. arXiv preprint arXiv:2403.10175, 2024.
  25. [25] Hisashi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 2000.
  26. [26] Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In International Conference on Machine Learning, 2020.
  27. [27] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018.
  28. [28] Dongyuan Li, Zhen Wang, Yankai Chen, Renhe Jiang, Weiping Ding, and Manabu Okumura. A survey on deep active learning: Recent advances and new frontiers. arXiv preprint arXiv:2405.00334, 2024.
  29. [29] Guy M. Morton. A computer-oriented geodetic data base and a new technique in file sequencing. Technical report, IBM Research, 1966.
  30. [30] Donald Meagher. Geometric modeling using octree encoding. Computer Graphics and Image Processing, 1982.
  31. [31] David Hilbert. Über die stetige Abbildung einer Linie auf ein Flächenstück. Mathematische Annalen, 1891.
  32. [32] Bongki Moon, H. V. Jagadish, Christos Faloutsos, and Joel H. Saltz. Analysis of the clustering properties of the Hilbert space-filling curve. IEEE Transactions on Knowledge and Data Engineering, 2001.
  33. [33] Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, and Quoc V. Le. Symbolic discovery of optimization algorithms. In Advances in Neural Information Processing Systems, 2023.
  34. [34] Extracted passage: "The model's latent representation computation must remain consistent between training and inference, ensuring that observed performance differences arise from the input data rather than changes in the model or its processing behavior."
  35. [35] Extracted passage: "The entire domain can be evaluated at full resolution under the same target distribution to ensure a fair comparison. We therefore use AB-UPT [3] as the main backbone. This choice is motivated by its decoupled structure of latent anchor tokens and query tokens: anchors provide stable latent references, while queries retrieve representations via attention…"
  36. [36] Raw data distributions of the simulation datasets.
  37. [37] Partitioning results from Step 1.
  38. [38] Grouping results from Step 2.
  39. [39] Sampling results from Step 3.
  40. [40] Inference results include predicted surface coefficients and full-resolution visualizations for each dataset.