pith. machine review for the scientific record.

arxiv: 2604.14083 · v2 · submitted 2026-04-15 · ⚛️ physics.comp-ph · cond-mat.mtrl-sci · stat.CO

Recognition: 2 theorem links · Lean Theorem

Distributional Inverse Homogenization

Authors on Pith no claims yet

Pith reviewed 2026-05-13 08:03 UTC · model grok-4.3

classification ⚛️ physics.comp-ph · cond-mat.mtrl-sci · stat.CO
keywords distributional inverse homogenization · microstructural statistics · bulk mechanical properties · stochastic homogenization · periodic homogenization · Voronoi microstructures · surrogate modeling · inverse problems

The pith

Large collections of bulk mechanical properties can be inverted to recover the statistical distribution of microstructure.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces distributional inverse homogenization, a noninvasive method that uses many measurements of bulk mechanical behavior to infer the probability distribution of a material's underlying microstructure. Standard homogenization maps microstructure to averaged properties, but the inverse problem is difficult because averaging erases fine-scale detail. By shifting focus from individual instances to distributions and leveraging large data collections, the approach recovers microstructural statistics for both periodic and stochastic cases in one and two dimensions. The method is illustrated on two-dimensional Voronoi microstructures, supported by one-dimensional theory, and paired with a learned surrogate that approximates the forward homogenization map for computational efficiency. This frames a new class of inverse problems that connects ideas from probability and homogenization theory.

Core claim

The homogenization map is invertible at the distributional level: given sufficiently many bulk measurements, the probability law on the microstructure can be recovered. This is demonstrated empirically for two-dimensional Voronoi constructions and underpinned theoretically in one dimension, while a surrogate model is learned concurrently to accelerate repeated evaluations of the map.
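In one dimension the forward map has a closed form that makes the claim concrete: the effective coefficient of a periodic laminate is the volume-fraction-weighted harmonic mean of the phase coefficients. A minimal sketch; the phase values and fractions below are illustrative, not taken from the paper.

```python
import numpy as np

def homogenize_1d(a_phases, volume_fractions):
    """Effective coefficient of a 1D periodic laminate: the
    volume-fraction-weighted harmonic mean of the phase coefficients."""
    a = np.asarray(a_phases, dtype=float)
    rho = np.asarray(volume_fractions, dtype=float)
    assert np.isclose(rho.sum(), 1.0), "volume fractions must sum to 1"
    return 1.0 / np.sum(rho / a)

# Two-phase laminate: compliant phase a=1, stiff phase a=4, mixed 50/50.
a_eff = homogenize_1d([1.0, 4.0], [0.5, 0.5])  # 1/(0.5/1 + 0.5/4) = 1.6
```

Because the harmonic mean is a strict average, many distinct laminates share one effective coefficient; it is only the distribution of such coefficients across many specimens that carries recoverable information.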

What carries the argument

Distributional inverse homogenization, the statistical inversion procedure that matches the observed distribution of homogenized bulk responses to the unknown distribution over microstructures.
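As a toy illustration of that matching step (not the paper's algorithm): assume volume fractions are drawn uniformly on [0, θ], push them through the 1D harmonic-mean homogenization map, and recover θ by minimizing the 1D Wasserstein distance between observed and simulated bulk-coefficient samples. All parameter values here are invented for the sketch.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
A1, A2 = 1.0, 4.0                      # assumed known phase coefficients

def bulk_samples(theta, n=5000):
    """Push volume fractions rho ~ Uniform(0, theta) through the 1D
    harmonic-mean map a_eff = 1/(rho/A1 + (1-rho)/A2)."""
    rho = rng.uniform(0.0, theta, size=n)
    return 1.0 / (rho / A1 + (1.0 - rho) / A2)

observed = bulk_samples(0.6)           # stand-in for measured bulk properties

# Distributional inversion by grid search: match the law of the bulk
# responses, not any individual specimen.
grid = np.linspace(0.2, 0.9, 15)
losses = [wasserstein_distance(observed, bulk_samples(t)) for t in grid]
theta_hat = grid[int(np.argmin(losses))]
```

The paper works with far richer microstructure families (Voronoi tessellations, Dirichlet-distributed volume fractions) and replaces grid search with gradient-based fitting through a learned surrogate, but the objective has this same distribution-matching shape.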

If this is right

  • Noninvasive recovery of microstructural statistics becomes feasible using only repeated bulk tests.
  • A surrogate model for the homogenization map is obtained as a byproduct and speeds up subsequent calculations.
  • Natural spatial variability within a sample supplies independent realizations that enable the distributional inversion.
  • The same framework applies to both periodic and stochastic homogenization settings.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could be extended to three-dimensional microstructures and more complex constitutive behavior.
  • It opens a route to uncertainty quantification on inferred microstructural distributions when combined with Bayesian methods.
  • Optimal experimental design could select which bulk tests to perform to maximize information about the microstructure distribution.
  • The learned surrogate might serve as a fast forward model inside larger-scale engineering simulations that propagate microstructural variability.

Load-bearing premise

That collections of bulk properties contain enough information for the homogenization map to be invertible at the level of probability distributions over microstructures.

What would settle it

A controlled experiment in which microstructures drawn from the inferred distribution produce bulk-property statistics that fail to match the measured collection.
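Such a check can be scripted: draw microstructures from the inferred law, push them through the forward map, and run a two-sample test against the measured collection. The distributions, sample sizes, and the use of a Kolmogorov-Smirnov test below are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def bulk(rho):
    """Toy forward map: 1D harmonic-mean homogenization, phases 1 and 4."""
    return 1.0 / (rho / 1.0 + (1.0 - rho) / 4.0)

measured = bulk(rng.beta(3.0, 3.0, 2000))   # stand-in for lab measurements

good_fit = bulk(rng.beta(3.0, 3.0, 2000))   # draws from a correct inference
bad_fit = bulk(rng.beta(1.0, 1.0, 2000))    # draws from a wrong inference

# Two-sample test on the bulk responses: a small p-value for samples
# generated from the inferred law would falsify the inversion.
p_good = ks_2samp(measured, good_fit).pvalue
p_bad = ks_2samp(measured, bad_fit).pvalue
```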

Figures

Figures reproduced from arXiv: 2604.14083 by Andrew M. Stuart, Arnaud Vadeboncoeur, Kaushik Bhattacharya, Mark Girolami.

Figure 1
Figure 1. The left half of this figure illustrates why inverting the homogenization map for microstructure [PITH_FULL_IMAGE:figures/full_fig_p005_1.png] view at source ↗
Figure 2
Figure 2. Distributional inverse homogenization schematic workflow in the locally stationary-ergodic mi [PITH_FULL_IMAGE:figures/full_fig_p011_2.png] view at source ↗
Figure 3
Figure 3. Distributional inversion, 1D periodic, Dirichlet distributed volume fractions. [PITH_FULL_IMAGE:figures/full_fig_p015_3.png] view at source ↗
Figure 4
Figure 4. Errors in learning β, γ, a, as compared to ground truth, from distributional inversion, 1D periodic. view at source ↗
Figure 5
Figure 5. Randomly generated periodic microstructure for [PITH_FULL_IMAGE:figures/full_fig_p018_5.png] view at source ↗
Figure 6
Figure 6. Histograms and scatter plots of the observed data-distribution. In (d)-(f) the color is representative [PITH_FULL_IMAGE:figures/full_fig_p020_6.png] view at source ↗
Figure 7
Figure 7. Convergence of loss and (w, a) for the i.i.d. periodic microstructure model. (a) 10 × 10 Voronoi, location 1. (b) 10 × 10 Voronoi, location 2. (c) 8000 × 8000 Voronoi [PITH_FULL_IMAGE:figures/full_fig_p021_7.png] view at source ↗
Figure 8
Figure 8. Locally periodic Voronoi construction (a-b) [PITH_FULL_IMAGE:figures/full_fig_p021_8.png] view at source ↗
Figure 9
Figure 9. Homogenized locally periodic Voronoi construction, [PITH_FULL_IMAGE:figures/full_fig_p022_9.png] view at source ↗
Figure 10
Figure 10. Histograms and scatter plots of the observed data-distribution (before adding noise for inference). [PITH_FULL_IMAGE:figures/full_fig_p023_10.png] view at source ↗
Figure 11
Figure 11. Convergence of loss and (w, a). view at source ↗
Figure 12
Figure 12. Randomly generated stationary-ergodic (S.E.) microstructure for [PITH_FULL_IMAGE:figures/full_fig_p025_12.png] view at source ↗
Figure 13
Figure 13. Histograms and scatter plots of the observed data-distribution for the i.i.d. stationary-ergodic [PITH_FULL_IMAGE:figures/full_fig_p026_13.png] view at source ↗
Figure 14
Figure 14. Convergence of losses and (α, a) for the i.i.d. stationary-ergodic material model. [PITH_FULL_IMAGE:figures/full_fig_p027_14.png] view at source ↗
Figure 15
Figure 15. Dirichlet marginal copula, Λ(α, ℓ, ν; S), components on a small 1/20 × 1/20 subset of S. [PITH_FULL_IMAGE:figures/full_fig_p029_15.png] view at source ↗
Figure 16
Figure 16. The independent components of the homogenized coefficient field on a subset of a [PITH_FULL_IMAGE:figures/full_fig_p029_16.png] view at source ↗
Figure 17
Figure 17. Convergence of losses and (α, a) for the locally stationary-ergodic microstructure model. view at source ↗
read the original abstract

For many materials, macroscopic mechanical behavior is determined by an intricate microstructure. Understanding the relation between these two scales helps scientists and engineers design better materials. The relation which maps microstructure to bulk mechanical properties can be understood via the well-established theory of homogenization. However, inverting the homogenization process, to recover microstructural information from measured macroscopic properties, is fraught with difficulties because of the averaging processes that underlie homogenization. Therefore, scientists and engineers usually need recourse to more invasive, often highly localized, investigations to learn about a microstructure. In this work, we develop a noninvasive methodology by which one can leverage large collections of measured bulk mechanical properties to learn information about the statistics of microstructure at a global level. We call this distributional inverse homogenization. We study this problem in one and two dimensions, considering both periodic and stochastic homogenization. We demonstrate the methodology in the context of 2D Voronoi constructions and underpin the observed empirical success with theory in 1D. We also show how the natural spatial variability of microstructure can be exploited to gather data that enables distributional inversion. And we concurrently learn a surrogate model, approximating the homogenization map, that accelerates the resulting computations in this setting. The work formulates a new class of inverse problems, bridging ideas from probability and homogenization to facilitate the learning of microstructural material variability from macroscopic measurements.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes 'distributional inverse homogenization' as a noninvasive method to infer microstructural statistics from distributions of bulk mechanical properties. It develops the approach for 1D periodic and stochastic homogenization with theoretical support, demonstrates it numerically for 2D Voronoi microstructures, exploits spatial variability for data collection, and learns a surrogate homogenization model.

Significance. If the central claims hold, the work bridges homogenization theory with probabilistic methods to enable learning of microstructural variability from macroscopic measurements, potentially reducing the need for invasive techniques. The 1D theoretical results under periodicity and ergodicity, together with the surrogate model for accelerating computations, represent clear strengths.

major comments (3)
  1. [Section 3] The 1D invertibility result is established under periodicity and ergodicity assumptions, but the manuscript provides no corresponding uniqueness or identifiability theorem for the stochastic 2D case, leaving the central claim that the distributional homogenization map is injective unsupported beyond the 1D setting.
  2. [Section 4] The numerical experiments on 2D Voronoi ensembles show empirical success in recovering statistics, yet no analysis rules out the possibility that distinct two-point or higher-order correlation functions could produce indistinguishable effective-modulus distributions, especially with finite sample sizes.
  3. [Section 4] The surrogate model is learned concurrently but without reported error bounds, convergence rates, or validation against the true homogenization operator in the distributional sense, which is load-bearing for the claimed computational acceleration.
minor comments (2)
  1. The abstract states the methodology is demonstrated in 2D Voronoi constructions but does not specify the range of volume fractions or contrast ratios tested, which would help assess generality.
  2. Notation for the microstructure distribution and the induced bulk-property distribution is introduced without an explicit table of symbols or consistent use across sections.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments and the recommendation for major revision. We provide point-by-point responses to the major comments below, indicating where revisions will be made to the manuscript.

read point-by-point responses
  1. Referee: [Section 3] The 1D invertibility result is established under periodicity and ergodicity assumptions, but the manuscript provides no corresponding uniqueness or identifiability theorem for the stochastic 2D case, leaving the central claim that the distributional homogenization map is injective unsupported beyond the 1D setting.

    Authors: We agree that a general uniqueness theorem for the stochastic 2D case is not provided. The 1D result serves as theoretical underpinning, while the 2D results are demonstrated numerically. We will revise Section 3 to clarify the assumptions and scope, and add a discussion in the conclusions about extending the identifiability analysis to higher dimensions as future work. revision: partial

  2. Referee: [Section 4] The numerical experiments on 2D Voronoi ensembles show empirical success in recovering statistics, yet no analysis rules out the possibility that distinct two-point or higher-order correlation functions could produce indistinguishable effective-modulus distributions, especially with finite sample sizes.

    Authors: This is a valid concern regarding potential non-uniqueness. Our experiments focus on Voronoi tessellations, which are common in materials science, and show successful recovery. We will add a paragraph in Section 4 discussing the limitations of finite samples and the possibility of non-identifiability for other microstructures, along with suggestions for future validation with diverse correlation structures. revision: partial

  3. Referee: [Section 4] The surrogate model is learned concurrently but without reported error bounds, convergence rates, or validation against the true homogenization operator in the distributional sense, which is load-bearing for the claimed computational acceleration.

    Authors: We acknowledge the need for rigorous validation of the surrogate model. In the revised version, we will include quantitative error analysis, such as L2 error bounds on the effective modulus predictions, convergence rates with respect to training data size, and distributional comparisons (e.g., via Wasserstein distance) between the surrogate and true homogenization outputs. This will substantiate the acceleration claims. revision: yes
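The distributional comparison promised here can be sketched in a few lines: fit a stand-in surrogate to a toy forward map (the 1D harmonic-mean homogenization, with an illustrative cubic-polynomial surrogate; neither is the paper's model) and report the Wasserstein distance between the two output laws.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)

def true_map(rho):
    """Toy forward map: 1D harmonic-mean homogenization, phases 1 and 4."""
    return 1.0 / (rho / 1.0 + (1.0 - rho) / 4.0)

# Illustrative surrogate: a cubic polynomial fit to the toy map.
rho_train = rng.uniform(0.0, 1.0, 200)
surrogate = np.poly1d(np.polyfit(rho_train, true_map(rho_train), deg=3))

# Distributional validation: push fresh microstructure samples through
# both maps and compare the laws of the outputs.
rho_test = rng.uniform(0.0, 1.0, 5000)
gap = wasserstein_distance(true_map(rho_test), surrogate(rho_test))
```

A small `gap` certifies the surrogate only in distribution, which is exactly the sense that matters for the distributional inversion; pointwise error bounds are a separate, stronger requirement.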

Circularity Check

0 steps flagged

No circularity; derivation is self-contained with independent 1D proof

full rationale

The paper establishes the core invertibility result via an explicit 1D proof under periodicity and ergodicity (Section 3) that does not rely on fitted parameters or self-citations for its validity. The 2D Voronoi results are presented as numerical demonstrations rather than derivations that reduce to inputs. No self-definitional steps, fitted inputs renamed as predictions, or load-bearing self-citations appear in the provided text. The methodology learns a surrogate for the homogenization map concurrently but treats this as an acceleration tool, not a circular justification of the distributional inversion itself.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The approach relies on standard homogenization theory and statistical inference techniques without introducing new free parameters or entities in the abstract.

axioms (1)
  • Domain assumption: Homogenization theory provides a map from microstructure to bulk properties.
    Well-established theory, mentioned in the abstract as the basis for inversion.

pith-pipeline@v0.9.0 · 5542 in / 1182 out tokens · 68988 ms · 2026-05-13T08:03:08.678187+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

53 extracted references · 53 canonical work pages · 2 internal anchors

  1. [1]

    A. D. Rollett, S.-B. Lee, R. Campman, G. Rohrer, Three-dimensional characterization of microstructure by electron back-scatter diffraction, Annu. Rev. Mater. Res. 37 (1) (2007) 627–658

  2. [2]

    G. Catalanotti, On the generation of RVE-based models of composites reinforced with long fibres or spherical particles, Composite Structures 138 (2016) 84–95

  3. [3]

    L. J. Gibson, Cellular solids, MRS Bulletin 28 (4) (2003) 270–274

  4. [4]

    P. Fratzl, R. Weinkamer, Nature's hierarchical materials, Progress in Materials Science 52 (8) (2007) 1263–1334

  5. [5]

    A. Gent, Hypothetical mechanism of crazing in glassy plastics, Journal of Materials Science 5 (11) (1970) 925–932

  6. [6]

    B. Martín-Pérez, H. Zibara, R. Hooton, M. Thomas, A study of the effect of chloride binding on service life predictions, Cement and Concrete Research 30 (8) (2000) 1215–1223

  7. [7]

    B. Martín-Pérez, M. Thomas, Numerical solution of mass transport equations in concrete structures, Computers & Structures 79 (13) (2001) 1251–1264

  8. [8]

    Martín-Pérez, Service life modelling of RC highway structures exposed to chlorides, University of Toronto (1999)

  9. [9]

    S. Bond-Taylor, A. Leach, Y. Long, C. G. Willcocks, Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (11) (2021) 7327–7347

  10. [10]

    Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, B. Poole, Score-based generative modeling through stochastic differential equations, in: International Conference on Learning Representations, 2020

  11. [11]

    I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, Advances in Neural Information Processing Systems 27 (2014)

  12. [12]

    V. M. Panaretos, Y. Zemel, Statistical aspects of Wasserstein distances, Annual Review of Statistics and Its Application 6 (1) (2019) 405–431

  13. [13]

    D. Sejdinovic, B. Sriperumbudur, A. Gretton, K. Fukumizu, Equivalence of distance-based and RKHS-based statistics in hypothesis testing, The Annals of Statistics (2013) 2263–2291

  14. [14]

    G. J. Székely, M. L. Rizzo, Energy statistics: A class of statistics based on distances, Journal of Statistical Planning and Inference 143 (8) (2013) 1249–1272

  15. [15]

    A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, A. Smola, A kernel two-sample test, The Journal of Machine Learning Research 13 (1) (2012) 723–773

  16. [16]

    N. Bonneel, J. Rabin, G. Peyré, H. Pfister, Sliced and Radon Wasserstein barycenters of measures, Journal of Mathematical Imaging and Vision 51 (1) (2015) 22–45

  17. [17]

    R. Flamary, C. Vincent-Cuaz, N. Courty, A. Gramfort, O. Kachaiev, H. Quang Tran, L. David, C. Bonet, N. Cassereau, T. Gnassounou, E. Tanguy, J. Delon, A. Collas, S. Mazelet, L. Chapel, T. Kerdoncuff, X. Yu, M. Feickert, P. Krzakala, T. Liu, E. Fernandes Montesuma, POT Python optimal transport (version 0.9.5) (2024)

  18. [18]

    R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. Gayraud, H. Janati, A. Rakotomamonjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, T. Vayer, POT: Python optimal transport, Journal of Machine Learning Research 22 (78) (2021) 1–8

  19. [19]

    S. Kolouri, K. Nadjahi, U. Simsekli, R. Badeau, G. Rohde, Generalized sliced Wasserstein distances, Advances in Neural Information Processing Systems 32 (2019)

  20. [20]

    X. Chen, Y. Yang, Y. Li, Augmented sliced Wasserstein distances, in: International Conference on Learning Representations, 2020

  21. [21]

    I. Deshpande, Y.-T. Hu, R. Sun, A. Pyrros, N. Siddiqui, S. Koyejo, Z. Zhao, D. Forsyth, A. G. Schwing, Max-sliced Wasserstein distance and its use for GANs, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 10648–10656

  22. [22]

    K. Nguyen, N. Ho, Energy-based sliced Wasserstein distance, Advances in Neural Information Processing Systems 36 (2023) 18046–18075

  23. [23]

    K. Nguyen, N. Ho, T. Pham, H. Bui, Distributional sliced-Wasserstein and applications to generative modeling, in: International Conference on Learning Representations, 2021

  24. [24]

    H. Gao, S. Kaltenbach, P. Koumoutsakos, Generative learning for forecasting the dynamics of high-dimensional complex systems, Nature Communications 15 (1) (2024) 8904

  25. [25]

    H. Goh, S. Sheriffdeen, J. Wittmer, T. Bui-Thanh, Solving Bayesian inverse problems via variational autoencoders, arXiv preprint arXiv:1912.04212 (2019)

  26. [26]

    A. Glyn-Davies, A. Vadeboncoeur, O. D. Akyildiz, I. Kazlauskaite, M. Girolami, A primer on variational inference for physics-informed deep generative modelling, Philosophical Transactions A 383 (2299) (2025) 20240324

  27. [27]

    O. D. Akyildiz, M. Girolami, A. M. Stuart, A. Vadeboncoeur, Efficient prior calibration from indirect data, SIAM Journal on Scientific Computing 47 (4) (2025) C932–C958

  28. [28]

    A. Vadeboncoeur, M. Girolami, A. M. Stuart, Efficient deconvolution in populational inverse problems, arXiv preprint arXiv:2505.19841 (2025)

  29. [29]

    A. Gelman, J. B. Carlin, H. S. Stern, D. B. Rubin, Bayesian Data Analysis, Chapman and Hall/CRC, 1995

  30. [30]

    H. E. Robbins, An empirical Bayes approach to statistics, in: Breakthroughs in Statistics: Foundations and Basic Theory, Springer, 1992, pp. 388–394

  31. [31]

    R. Z. Zhang, C. E. Miles, X. Xie, J. S. Lowengrub, BiLO: Bilevel local operator learning for PDE inverse problems, Journal of Computational Physics (2026) 114679

  32. [32]

    Q. Li, M. Oprea, L. Wang, Y. Yang, Inverse problems over probability measure space, arXiv preprint arXiv:2504.18999 (2025)

  33. [33]

    Q. Li, M. Oprea, L. Wang, Y. Yang, Stochastic inverse problem: stability, regularization and Wasserstein gradient flow, arXiv preprint arXiv:2410.00229 (2024)

  34. [34]

    Q. Li, L. Wang, Y. Yang, Least-squares problem over probability measure space, arXiv preprint arXiv:2501.09097 (2025)

  35. [35]

    E. Bernton, P. E. Jacob, M. Gerber, C. P. Robert, On parameter estimation with the Wasserstein distance, Information and Inference: A Journal of the IMA 8 (4) (2019) 657–676

  36. [36]

    A. Vadeboncoeur, G. Duthé, M. Girolami, E. Chatzi, Geometric autoencoder priors for Bayesian inversion: Learn first observe later, arXiv preprint arXiv:2509.19929 (2025)

  37. [37]

    R. Quey, L. Renversade, Optimal polyhedral description of 3D polycrystals: Method and application to statistical and synchrotron X-ray diffraction data, Computer Methods in Applied Mechanics and Engineering 330 (2018) 308–333

  38. [38]

    R. Quey, M. Kasemer, The Neper/FEPX project: free/open-source polycrystal generation, deformation simulation, and post-processing, in: IOP Conference Series: Materials Science and Engineering, Vol. 1249, IOP Publishing, 2022, p. 012021

  39. [39]

    R. Quey, P. R. Dawson, F. Barbe, Large-scale 3D random polycrystals for the finite element method: Generation, meshing and remeshing, Computer Methods in Applied Mechanics and Engineering 200 (17-20) (2011) 1729–1745

  40. [40]

    A. Al-Ostaz, A. Diwakar, K. I. Alzebdeh, Statistical model for characterizing random microstructure of inclusion–matrix composites, Journal of Materials Science 42 (16) (2007) 7016–7030

  41. [41]

    A. Bensoussan, J.-L. Lions, G. Papanicolaou, Asymptotic Analysis for Periodic Structures, Vol. 374, American Mathematical Soc., 2011

  42. [42]

    X. Blanc, C. Le Bris, F. Legoll, Some variance reduction methods for numerical stochastic homogenization, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374 (2066) (2016) 20150168

  43. [43]

    G. Pavliotis, A. Stuart, Multiscale Methods: Averaging and Homogenization, Springer Science & Business Media, 2008

  44. [44]

    A. Gloria, Numerical approximation of effective coefficients in stochastic homogenization of discrete elliptic equations, ESAIM: Mathematical Modelling and Numerical Analysis 46 (1) (2012) 1–38

  45. [45]

    A. Gloria, F. Otto, An optimal error estimate in stochastic homogenization of discrete elliptic equations, The Annals of Applied Probability 22 (1) (2012) 1–28

  46. [46]

    A. Gloria, F. Otto, An optimal variance estimate in stochastic homogenization of discrete elliptic equations, The Annals of Probability 39 (3) (2011) 779–856

  47. [47]

    P. Zhang, A. Vadeboncoeur, A. Glyn-Davies, M. Girolami, Hierarchical inference and closure learning via adaptive surrogates for ODEs and PDEs, arXiv preprint arXiv:2603.03922 (2026)

  48. [48]

    K. Bhattacharya, N. B. Kovachki, A. Rajan, A. M. Stuart, M. Trautner, Learning homogenization for elliptic operators, SIAM Journal on Numerical Analysis 62 (4) (2024) 1844–1873

  49. [49]

    S. M. Kozlov, Averaging of random operators, Sbornik: Mathematics 37 (2) (1980) 167–180

  50. [50]

    M. Dashti, A. M. Stuart, Uncertainty quantification and weak approximation of an elliptic inverse problem, SIAM Journal on Numerical Analysis 49 (6) (2011) 2524–2542

  51. [51]

    E. Jang, S. Gu, B. Poole, Categorical reparameterization with Gumbel-Softmax, International Conference on Learning Representations (2017)

  52. [52]

    M. Figurnov, S. Mohamed, A. Mnih, Implicit reparameterization gradients, Advances in Neural Information Processing Systems 31 (2018)

  53. [53]

    D. M. Bradley, R. C. Gupta, On the distribution of the sum of n non-identically distributed uniform random variables, Annals of the Institute of Statistical Mathematics 54 (3) (2002) 689–700