pith. machine review for the scientific record.

arxiv: 2509.19929 · v4 · submitted 2025-09-24 · 📊 stat.ML · cs.LG · physics.comp-ph · physics.data-an

Recognition: unknown

Geometric Autoencoder Priors for Bayesian Inversion: Learn First, Observe Later

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.LG · physics.comp-ph · physics.data-an
keywords bayesian · systems · geometries · inference · information · inversion · learning · physical
read the original abstract

Uncertainty Quantification (UQ) is paramount for inference in engineering. A common inference task is to recover full-field information of physical systems from a small number of noisy observations, a problem that is usually highly ill-posed. Sharing information across multiple distinct yet related physical systems can alleviate this ill-posedness. Critically, engineering systems often have complicated, variable geometries that prohibit the use of standard multi-system Bayesian UQ. In this work, we introduce Geometric Autoencoders for Bayesian Inversion (GABI), a framework for learning geometry-aware generative models of physical responses that serve as highly informative, geometry-conditioned priors for Bayesian inversion. Following a "learn first, observe later" paradigm, GABI distills information from large datasets of systems with varying geometries into a rich latent prior, without requiring knowledge of governing PDEs, boundary conditions, or observation processes. At inference time, this prior is seamlessly combined with the likelihood of a specific observation process, yielding a geometry-adapted posterior distribution. Our proposed framework is architecture-agnostic. A creative use of Approximate Bayesian Computation (ABC) sampling yields an efficient implementation that exploits modern GPU hardware. We test our method on: steady-state heat over rectangular domains; Reynolds-Averaged Navier-Stokes (RANS) flow around airfoils; Helmholtz resonance and source localization on 3D car bodies; and RANS airflow over terrain. We find that predictive accuracy is comparable to deterministic supervised learning approaches in the restricted setting where supervised learning is applicable, and that the UQ is well calibrated and robust on challenging problems with complex geometries.
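The "learn first, observe later" pipeline the abstract describes — a pretrained generative prior over physical responses combined at inference time with an observation likelihood via ABC — can be sketched numerically. The following is a minimal, hypothetical illustration: the toy linear `decoder`, the 1D domain, the sensor indices, and all parameter values are assumptions for demonstration, standing in for GABI's learned, geometry-conditioned neural decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, 50)          # toy 1D "domain"

def decoder(z):
    """Toy stand-in generative model: latents z (..., 2) -> field over x (..., 50)."""
    return z[..., :1] * np.sin(2.0 * np.pi * x) + z[..., 1:] * x

sensor_idx = np.array([5, 20, 40])     # sparse observation locations (assumed)
noise_std = 0.02                       # sensor noise level (assumed)

# Synthetic ground-truth field and its few noisy observations.
z_true = np.array([1.0, -0.5])
y_obs = decoder(z_true)[sensor_idx] + noise_std * rng.normal(size=3)

# ABC rejection sampling: draw latents from the standard-normal prior,
# decode them all in one batch (this batching is what maps well to GPUs),
# and keep draws whose predicted observations land near the data.
n_draws, tol = 100_000, 0.1
Z = rng.normal(size=(n_draws, 2))                  # prior draws
y_pred = decoder(Z)[:, sensor_idx]                 # batched forward pass
dist = np.linalg.norm(y_pred - y_obs, axis=1)      # ABC discrepancy
posterior_z = Z[dist < tol]                        # approximate posterior draws

posterior_fields = decoder(posterior_z)            # posterior over full fields
print(posterior_z.shape, posterior_z.mean(axis=0))
```

The key property mirrored here is that the prior is learned once, offline, with no reference to the observation process: the sensor locations and noise model enter only through the acceptance step, so the same prior can be reused for any new observation setup.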

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Distributional Inverse Homogenization

    physics.comp-ph 2026-04 unverdicted novelty 8.0

    Distributional inverse homogenization recovers microstructural statistics from macroscopic mechanical measurements by leveraging collections of bulk data in periodic and stochastic settings.

  2. Distributional Inverse Homogenization

    physics.comp-ph 2026-04 unverdicted novelty 7.0

    Distributional inverse homogenization learns microstructural statistics from bulk mechanical measurements by inverting the homogenization process statistically.