pith. machine review for the scientific record. sign in

arxiv: 2605.07565 · v1 · submitted 2026-05-08 · 💻 cs.LG · cs.AI· stat.ML

Recognition: 2 theorem links

· Lean Theorem

Ensemble Distributionally Robust Bayesian Optimisation

Authors on Pith no claims yet

Pith reviewed 2026-05-11 03:14 UTC · model grok-4.3

classification 💻 cs.LG cs.AIstat.ML
keywords Bayesian optimisationdistributionally robust optimisationensemble methodsregret boundszeroth-order optimisationcontinuous contextssurrogate models
0
0 comments X

The pith

A novel algorithm for ensemble distributionally robust Bayesian optimisation stays computationally tractable for continuous contexts and achieves sublinear regret bounds that improve on prior results.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper focuses on zeroth-order optimisation where the surrounding context carries distributional uncertainty, a setting usually handled through Bayesian optimisation. To guard against the weaknesses of any single model in noisy or complex data, the authors turn to an ensemble of surrogate models. They present an algorithm that keeps calculations feasible even when the context varies continuously. This produces theoretical guarantees of sublinear regret that exceed current bounds, and experiments confirm the practical behaviour matches the theory.

Core claim

We study zeroth-order optimisation under context distributional uncertainty, a setting commonly tackled using Bayesian optimisation (BO). A prevailing strategy to make BO more robust to the complex and noisy nature of data is to employ an ensemble as the surrogate model, thereby mitigating the weaknesses of any single model. In this study, we propose a novel algorithm for Ensemble Distributionally Robust Bayesian Optimisation that remains computationally tractable while managing continuous context. We obtain theoretical sublinear regret bounds, improving current state-of-the-art results. We show that our method's empirical behaviour aligns with its theoretical guarantees.

What carries the argument

The ensemble surrogate model within the distributionally robust Bayesian optimisation procedure, which handles continuous context uncertainty while preserving tractability.

If this is right

  • The algorithm allows practical use of robust Bayesian optimisation on problems with uncertain continuous contexts without prohibitive computation.
  • The improved regret bounds imply more efficient convergence toward optimal solutions than earlier methods in this setting.
  • Empirical alignment with theory supports reliable performance when data noise or distributional shifts are present.
  • The approach extends existing Bayesian optimisation techniques to a wider class of uncertain environments while retaining theoretical control.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar ensemble constructions could be explored for robustness in related sequential decision problems such as reinforcement learning with uncertain environments.
  • Testing the method on higher-dimensional or discrete contexts would clarify the range of settings where tractability and bounds continue to hold.

Load-bearing premise

The regret analysis and tractability rest on particular, unspecified choices for how the ensemble is built and how distributional uncertainty is encoded in the surrogate models.

What would settle it

An experiment or simulation in which regret grows linearly or faster under continuous context distributional uncertainty would contradict the claimed sublinear bounds.

Figures

Figures reproduced from arXiv: 2605.07565 by Denis Derkach, Tigran Ramazyan.

Figure 1
Figure 1. Figure 1: Mean and standard deviation of cumulative expected regret. [PITH_FULL_IMAGE:figures/full_fig_p007_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Mean and standard deviation of instantaneous expected regret. The lower, the better. [PITH_FULL_IMAGE:figures/full_fig_p023_2.png] view at source ↗
read the original abstract

We study zeroth-order optimisation under context distributional uncertainty, a setting commonly tackled using Bayesian optimisation (BO). A prevailing strategy to make BO more robust to the complex and noisy nature of data is to employ an ensemble as the surrogate model, thereby mitigating the weaknesses of any single model. In this study, we propose a novel algorithm for Ensemble Distributionally Robust Bayesian Optimisation that remains computationally tractable while managing continuous context. We obtain theoretical sublinear regret bounds, improving current state-of-the-art results. We show that our method's empirical behaviour aligns with its theoretical guarantees.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper proposes a novel Ensemble Distributionally Robust Bayesian Optimisation (EDRBO) algorithm for zeroth-order optimization under context distributional uncertainty. It uses an ensemble surrogate model to achieve computational tractability with continuous contexts, defines distributional uncertainty via Wasserstein balls, derives sublinear regret bounds that improve on state-of-the-art results, and shows empirical alignment with the theoretical guarantees.

Significance. If the results hold, this advances robust Bayesian optimization by delivering a tractable ensemble-based approach for handling distributional uncertainty in continuous contexts, with explicit constructions of the surrogate and uncertainty sets plus consistent use of assumptions (finite ensemble size, bounded RKHS norm, Lipschitz continuity of the context map) in the regret proofs. The provision of machine-checkable-style derivations and reproducible empirical validation strengthens the contribution for applications in noisy optimization settings.

minor comments (3)
  1. [Abstract] Abstract: the statement that the regret bounds 'improve current state-of-the-art results' would benefit from a brief parenthetical reference to the specific prior bounds (e.g., the dependence on T or ensemble size) being improved.
  2. [Theoretical Section] Theoretical section: while the regret analysis is supported, the dependence of the bound on the Wasserstein radius and ensemble size could be stated more explicitly in the main theorem statement for immediate readability.
  3. [Experiments] Experiments: the description of the continuous context distributions and ensemble construction (e.g., which base models are used) is adequate but could include a short table summarizing hyper-parameters to aid reproducibility.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive assessment of our manuscript and for recommending minor revision. We appreciate the recognition of the tractable ensemble approach for distributionally robust Bayesian optimization under continuous context uncertainty, the sublinear regret bounds, and the alignment with empirical results.

Circularity Check

0 steps flagged

No significant circularity

full rationale

The derivation proceeds via explicit construction of the ensemble surrogate model, definition of the Wasserstein-ball distributional uncertainty set, and standard regret analysis under explicitly stated assumptions (finite ensemble size, bounded RKHS norm, Lipschitz continuity of the context map). These steps rely on external mathematical facts and do not reduce by construction to fitted parameters renamed as predictions, self-citations that bear the central load, or ansatzes smuggled from prior author work. The sublinear regret bounds are derived from the stated premises rather than being tautological with the inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review; no explicit free parameters, axioms, or invented entities can be extracted. The central claim implicitly relies on standard Bayesian optimization assumptions plus ensemble robustness properties that are not specified.

pith-pipeline@v0.9.0 · 5384 in / 999 out tokens · 28426 ms · 2026-05-11T03:14:59.080494+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

178 extracted references · 178 canonical work pages · 6 internal anchors

  1. [1]

    Pacific Journal of Mathematics , volume =

    Equivalence and perpendicularity of Gaussian processes , author =. Pacific Journal of Mathematics , volume =

  2. [2]

    Czechoslovak Mathematical Journal , volume =

    On a property of normal distributions of any stochastic process , author =. Czechoslovak Mathematical Journal , volume =

  3. [3]

    2024 , eprint=

    Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning , author=. 2024 , eprint=

  4. [4]

    Bayesian approach to global optimization: Theory and applications

    Mockus, Jonas , year=. Bayesian approach to global optimization: Theory and applications

  5. [5]

    and Wolpert, D.H

    Macready, W.G. and Wolpert, D.H. , journal=. Bandit problems and the exploration/exploitation tradeoff , year=

  6. [6]

    Global Optimisation of Black-Box Functions with Generative Models in the Wasserstein Space , ISBN=

    Ramazyan, Tigran and Hushchyn, Mikhail and Derkach, Denis , year=. Global Optimisation of Black-Box Functions with Generative Models in the Wasserstein Space , ISBN=. doi:10.3233/faia240765 , booktitle=

  7. [7]

    Black-Box Optimization with Local Generative Surrogates

    Shirobokov, Sergey and Belavin, Vladislav and Kagan, Michael and Ustyuzhanin, Andrei and Baydin, Atilim Gunes , booktitle =. Black-Box Optimization with Local Generative Surrogates

  8. [8]

    Practical Bayesian Optimization of Machine Learning Algorithms

    Snoek, Jasper and Larochelle, Hugo and Adams, Ryan P , booktitle =. Practical Bayesian Optimization of Machine Learning Algorithms

  9. [9]

    and de Freitas, Nando , journal=

    Shahriari, Bobak and Swersky, Kevin and Wang, Ziyu and Adams, Ryan P. and de Freitas, Nando , journal=. Taking the Human Out of the Loop: A Review of Bayesian Optimization. 2016 , volume=

  10. [10]

    Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence , year =

    de Freitas, Nando and Wang, Ziyu , title = ". Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence , year =

  11. [11]

    High-dimensional gaussian process bandits

    Djolonga, Josip and Krause, Andreas and Cevher, Volkan , journal=. High-dimensional gaussian process bandits

  12. [12]

    Active learning of linear embeddings for Gaussian processes

    Garnett, Roman and Osborne, Michael A and Hennig, Philipp , journal=. Active learning of linear embeddings for Gaussian processes

  13. [13]

    High dimensional Bayesian optimization via supervised dimension reduction

    Zhang, Miao and Li, Huiqi and Su, Steven , journal=. High dimensional Bayesian optimization via supervised dimension reduction

  14. [14]

    Model selection for gaussian process regression

    Gorbach, Nico S and Bian, Andrew An and Fischer, Benjamin and Bauer, Stefan and Buhmann, Joachim M , booktitle=. Model selection for gaussian process regression. 2017 , organization=

  15. [15]

    BOCK: Bayesian optimization with cylindrical kernels

    Oh, ChangYong and Gavves, Efstratios and Welling, Max , booktitle=. BOCK: Bayesian optimization with cylindrical kernels. 2018 , organization=

  16. [16]

    Scalable global optimization via local Bayesian optimization

    Eriksson, David and Pearce, Michael and Gardner, Jacob and Turner, Ryan D and Poloczek, Matthias , journal=. Scalable global optimization via local Bayesian optimization

  17. [17]

    Feedback GAN for DNA optimizes protein functions

    Gupta, Anvita and Zou, James , journal=. Feedback GAN for DNA optimizes protein functions. 2019 , publisher=

  18. [18]

    Conditioning by adaptive sampling for robust design

    Brookes, David and Park, Hahnbeom and Listgarten, Jennifer , booktitle=. Conditioning by adaptive sampling for robust design. 2019 , organization=

  19. [19]

    Automatic chemical design using a data-driven continuous representation of molecules

    G. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science , volume=. 2018 , publisher=

  20. [20]

    Metricgan: Generative adversarial networks based black-box metric scores optimization for speech enhancement

    Fu, Szu-Wei and Liao, Chien-Feng and Tsao, Yu and Lin, Shou-De , booktitle=. Metricgan: Generative adversarial networks based black-box metric scores optimization for speech enhancement. 2019 , organization=

  21. [21]

    A brief introduction to PYTHIA 8.1

    Sj. A brief introduction to PYTHIA 8.1. Computer Physics Communications , volume=. 2008 , publisher=

  22. [22]

    Stochastic optimization of GeantV code by use of genetic algorithms

    Amadio, G and Apostolakis, J and Bandieramonte, M and Behera, SP and Brun, R and Canal, P and Carminati, F and Cosmo, G and Duhem, L and Elvira, D and others , booktitle=. Stochastic optimization of GeantV code by use of genetic algorithms. 2017 , organization=

  23. [23]

    Optimization of parameters for molecular dynamics simulation using smooth particle-mesh Ewald in GROMACS 4.5

    Abraham, Mark J and Gready, Jill E , journal=. Optimization of parameters for molecular dynamics simulation using smooth particle-mesh Ewald in GROMACS 4.5. 2011 , publisher=

  24. [24]

    Principles of optimal design: modeling and computation

    Papalambros, Panos Y and Wilde, Douglass J , year=. Principles of optimal design: modeling and computation

  25. [25]

    The frontier of simulation-based inference

    Cranmer, Kyle and Brehmer, Johann and Louppe, Gilles , journal=. The frontier of simulation-based inference. 2020 , publisher=

  26. [26]

    Genetic programming: an introduction: on the automatic evolution of computer programs and its applications

    Banzhaf, Wolfgang and Nordin, Peter and Keller, Robert E and Francone, Frank D , year=. Genetic programming: an introduction: on the automatic evolution of computer programs and its applications

  27. [27]

    Monte carlo gradient estimation in machine learning

    Mohamed, Shakir and Rosca, Mihaela and Figurnov, Michael and Mnih, Andriy , journal=. Monte carlo gradient estimation in machine learning. 2020 , publisher=

  28. [28]

    Simple statistical gradient-following algorithms for connectionist reinforcement learning

    Williams, Ronald J , journal=. Simple statistical gradient-following algorithms for connectionist reinforcement learning. 1992 , publisher=

  29. [29]

    Policy improvement methods: Between black-box optimization and episodic reinforcement learning

    Stulp, Freek and Sigaud, Olivier , year=. Policy improvement methods: Between black-box optimization and episodic reinforcement learning

  30. [30]

    Learning to Learn for Global Optimization of Black Box Functions

    Sergio, Yutian Chen Matthew W Hoffman and Colmenarejo, G. Learning to Learn for Global Optimization of Black Box Functions. stat , volume=

  31. [31]

    Diffusion Models for Black-Box Optimization

    Siddarth Krishnamoorthy and Satvik Mehul Mashkaria and Aditya Grover , year=. Diffusion Models for Black-Box Optimization. 2306.07180 , archivePrefix=

  32. [32]

    Efficient global optimization of expensive black-box functions

    Jones, Donald R and Schonlau, Matthias and Welch, William J , journal=. Efficient global optimization of expensive black-box functions. 1998 , publisher=

  33. [33]

    Parallel surrogate-assisted global optimization with expensive functions--a survey

    Haftka, Raphael T and Villanueva, Diane and Chaudhuri, Anirban , journal=. Parallel surrogate-assisted global optimization with expensive functions--a survey. 2016 , publisher=

  34. [34]

    Entropy Search for Information-Efficient Global Optimization

    Hennig, Philipp and Schuler, Christian J , journal=. Entropy Search for Information-Efficient Global Optimization

  35. [35]

    Global optimization for Lipschitz continuous expensive black box functions

  36. [36]

    Two decades of blackbox optimization applications

    Alarie, St. Two decades of blackbox optimization applications. EURO Journal on Computational Optimization , volume=. 2021 , publisher=

  37. [37]

    Simple and scalable predictive uncertainty estimation using deep ensembles

    Lakshminarayanan, Balaji and Pritzel, Alexander and Blundell, Charles , journal=. Simple and scalable predictive uncertainty estimation using deep ensembles

  38. [38]

    Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation

    Melrose Roderick and Felix Berkenkamp and Fatemeh Sheikholeslami and J Zico Kolter , booktitle=. Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation. 2022 , url=

  39. [39]

    A Two-Step Computation of the Exact GAN W asserstein Distance

    Liu, Huidong and GU, Xianfeng and Samaras, Dimitris , booktitle =. A Two-Step Computation of the Exact GAN W asserstein Distance. 2018 , editor =

  40. [40]

    Wasserstein GANs work because they fail (to approximate the Wasserstein distance)

    Stanczuk, Jan and Etmann, Christian and Kreusser, Lisa Maria and Sch. Wasserstein GANs work because they fail (to approximate the Wasserstein distance). arXiv preprint arXiv:2103.01678 , year=

  41. [41]

    Generative modeling by estimating gradients of the data distribution

    Song, Yang and Ermon, Stefano , journal=. Generative modeling by estimating gradients of the data distribution

  42. [42]

    Wasserstein Generative Regression

    Song, Shanshan and Wang, Tong and Shen, Guohao and Lin, Yuanyuan and Huang, Jian , journal=. Wasserstein Generative Regression

  43. [43]

    , booktitle =

    Damianou, Andreas and Lawrence, Neil D. , booktitle =. Deep G aussian Processes. 2013 , editor =

  44. [44]

    Bayesian Uncertainty Quantification

    Koumoutsakos, Petros , year=. Bayesian Uncertainty Quantification

  45. [45]

    Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods

    H. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning , volume=. 2021 , publisher=

  46. [46]

    Uncertainty quantification using Bayesian neural networks in classification: Application to biomedical image segmentation

    Kwon, Yongchan and Won, Joong-Ho and Kim, Beom Joon and Paik, Myunghee Cho , journal=. Uncertainty quantification using Bayesian neural networks in classification: Application to biomedical image segmentation. 2020 , publisher=

  47. [47]

    Wasserstein barycenters: statistics and optimization

    Stromme, Austin James , year=. Wasserstein barycenters: statistics and optimization

  48. [48]

    Robust Wasserstein profile inference and applications to machine learning

    Blanchet, Jose and Kang, Yang and Murthy, Karthyek , journal=. Robust Wasserstein profile inference and applications to machine learning. 2019 , publisher=

  49. [49]

    Wasserstein distributionally robust optimization: Theory and applications in machine learning

    Kuhn, Daniel and Esfahani, Peyman Mohajerin and Nguyen, Viet Anh and Shafieezadeh-Abadeh, Soroosh , booktitle=. Wasserstein distributionally robust optimization: Theory and applications in machine learning. 2019 , publisher=

  50. [50]

    Variance minimization in the wasserstein space for invariant causal prediction

    Martinet, Guillaume G and Strzalkowski, Alexander and Engelhardt, Barbara , booktitle=. Variance minimization in the wasserstein space for invariant causal prediction. 2022 , organization=

  51. [51]

    Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes

    Mallasto, Anton and Feragen, Aasa , journal=. Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes

  52. [52]

    Elements of statistical inference in 2-wasserstein space

    Ebert, Johannes and Spokoiny, Vladimir and Suvorikova, Alexandra , booktitle=. Elements of statistical inference in 2-wasserstein space. 2019 , organization=

  53. [53]

    The Wasserstein distances between pushed-forward measures with applications to uncertainty quantification

    Sagiv, Amir , journal=. The Wasserstein distances between pushed-forward measures with applications to uncertainty quantification

  54. [54]

    Barycenters in the Wasserstein space

    Agueh, Martial and Carlier, Guillaume , journal=. Barycenters in the Wasserstein space. 2011 , publisher=

  55. [55]

    2020 , note =

    Hebbal, Ali and Brevault, Loïc and Balesdent, Mathieu and El-Ghazali, Talbi and Melab, Nouredine , title = ". 2020 , note =

  56. [56]

    Deep Gaussian Process for Enhanced Bayesian Optimization and its Application in Additive Manufacturing

    Raghav Gnanasambandam and Bo Shen and Andrew Chung Chee Law and Chaoran Dou and Zhenyu Kong. Deep Gaussian Process for Enhanced Bayesian Optimization and its Application in Additive Manufacturing. 2023. doi:10.36227/techrxiv.23548143.v1

  57. [57]

    Mostofa Ali Patwary and Prabhat and Ryan P

    Jasper Snoek and Oren Rippel and Kevin Swersky and Ryan Kiros and Nadathur Satish and Narayanan Sundaram and Md. Mostofa Ali Patwary and Prabhat and Ryan P. Adams , year=. Scalable Bayesian Optimization Using Deep Neural Networks. 1502.05700 , archivePrefix=

  58. [58]

    Bayesian optimization with robust Bayesian neural networks

    Springenberg, Jost Tobias and Klein, Aaron and Falkner, Stefan and Hutter, Frank , journal=. Bayesian optimization with robust Bayesian neural networks

  59. [59]

    Bayesian Optimization with Robust Bayesian Neural Networks

    Springenberg, Jost Tobias and Klein, Aaron and Falkner, Stefan and Hutter, Frank , booktitle =. Bayesian Optimization with Robust Bayesian Neural Networks

  60. [60]

    Ensemble learning models with a Bayesian optimization algorithm for mineral prospectivity mapping

    Jiangning Yin and Nan Li , keywords =. Ensemble learning models with a Bayesian optimization algorithm for mineral prospectivity mapping. Ore Geology Reviews , volume =. 2022 , issn =. doi:https://doi.org/10.1016/j.oregeorev.2022.104916 , url =

  61. [61]

    Automatic model construction with Gaussian processes

    Duvenaud, David , year=. Automatic model construction with Gaussian processes

  62. [62]

    Asymptotics for L2 functionals of the empirical quantile process, with applications to tests of fit based on weighted Wasserstein distances

    Del Barrio, Eustasio and Gin. Asymptotics for L2 functionals of the empirical quantile process, with applications to tests of fit based on weighted Wasserstein distances. Bernoulli , volume=. 2005 , publisher=

  63. [63]

    Improved training of Wasserstein GANs

    Gulrajani, Ishaan and Ahmed, Faruk and Arjovsky, Martin and Dumoulin, Vincent and Courville, Aaron C , journal=. Improved training of Wasserstein GANs

  64. [64]

    Bayesian neural networks

    Kononenko, Igor , journal=. Bayesian neural networks. 1989 , publisher=

  65. [65]

    International Journal for Numerical Methods in Engineering , volume =

    Svanberg, Krister , title = ". International Journal for Numerical Methods in Engineering , volume =. doi:https://doi.org/10.1002/nme.1620240207 , url =. https://onlinelibrary.wiley.com/doi/pdf/10.1002/nme.1620240207 , abstract =

  66. [66]

    A Tutorial on Bayesian Optimization

    Peter I. Frazier , year=. A Tutorial on Bayesian Optimization. 1807.02811 , archivePrefix=

  67. [67]

    Sequential stochastic blackbox optimization with zeroth-order gradient estimators

    Charles Audet and Jean Bigeon and Romain Couderc and Michael Kokkolaras , year=. Sequential stochastic blackbox optimization with zeroth-order gradient estimators. 2305.19450 , archivePrefix=

  68. [68]

    Optimization of the LHCb calorimeter , booktitle =

    Alexey Boldyrev and Denis Derkach and Fedor Ratnikov and Andrey Shevelev. Optimization of the LHCb calorimeter , booktitle =

  69. [69]

    Šimko et al., Reana: A system for reusable research data analyses, EPJ Web Conf

    Viktoria Chekalina and Elena Orlova and Fedor Ratnikov and Dmitry Ulyanov and Andrey Ustyuzhanin and Egor Zakharov. Generative Models for Fast Calorimeter Simulation: the. 2019 , publisher =. doi:10.1051/epjconf/201921402034 , url =

  70. [70]

    Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters

    Paganini, Michela and de Oliveira, Luke and Nachman, Benjamin , journal =. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters. 2018 , month =. doi:10.1103/PhysRevLett.120.042003 , url =

  71. [71]

    and Miller, David C

    Cozad, Alison and Sahinidis, Nikolaos V. and Miller, David C. , title = ". AIChE Journal , volume =. doi:https://doi.org/10.1002/aic.14418 , url =. https://aiche.onlinelibrary.wiley.com/doi/pdf/10.1002/aic.14418 , abstract =

  72. [72]

    , title = "

    Aehle et al., M. , title = ". Physics in Medicine & Biology , abstract =. 2023 , month =. doi:10.1088/1361-6560/ad0bdd , url =

  73. [73]

    Gutmann and Jukka Cor and er , title = "

    Michael U. Gutmann and Jukka Cor and er , title = ". Journal of Machine Learning Research , year =

  74. [74]

    Computational Astrophysics and Cosmology , author=

    CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks. Computational Astrophysics and Cosmology , author=. 2019 , month=may, pages=. doi:10.1186/s40668-019-0029-9 , abstractNote=

  75. [75]

    , keywords =

    Dorigo et al., T. , keywords =. Toward the end-to-end optimization of particle physics instruments with differentiable programming. Reviews in Physics , volume =. 2023 , issn =. doi:https://doi.org/10.1016/j.revip.2023.100085 , url =

  76. [76]

    A compositional object-based approach to learning physical dynamics.arXiv preprint arXiv:1612.00341, 2016

    Michael B. Chang and Tomer D. Ullman and Antonio Torralba and Joshua B. Tenenbaum , title = ". CoRR , volume =. 2016 , url =. 1612.00341 , timestamp =

  77. [77]

    Zico , booktitle =

    de Avila Belbute-Peres, Filipe and Smith, Kevin and Allen, Kelsey and Tenenbaum, Josh and Kolter, J. Zico , booktitle =. End-to-End Differentiable Physics for Learning and Control

  78. [78]

    CoRR , volume =

    Jonas Degrave and Michiel Hermans and Joni Dambre and Francis Wyffels , title = ". CoRR , volume =. 2016 , url =. 1611.01652 , timestamp =

  79. [79]

    Advances in Applied Mathematics , volume =

    Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics , volume =. 1985 , issn =. doi:https://doi.org/10.1016/0196-8858(85)90002-8 , url =

  80. [80]

    First-order

    Nicolas Lanzetti and Saverio Bolognani and Florian Dörfler , year=. First-order Conditions for Optimization in the Wasserstein Space. 2209.12197 , archivePrefix=

Showing first 80 references.