pith. machine review for the scientific record.

arxiv: 2505.13007 · v2 · submitted 2025-05-19 · 💻 cs.LG · cs.CE

Recognition: unknown

Latent Generative Modeling of Random Fields from Limited Training Data

Authors on Pith: no claims yet
classification 💻 cs.LG cs.CE
keywords: generative, latent, data, fields, modeling, random, training, sampling
Original abstract

The ability to accurately model random fields plays a critical role in science and engineering for problems involving uncertain, spatially-varying quantities such as heterogeneous material properties and turbulent flows. Deep generative models offer a powerful tool for sampling high- or infinite-dimensional uncertainties like random fields, but their reliance on large, dense training datasets limits their applicability in contexts where sufficient data is difficult or expensive to obtain. In this work, we propose a latent-space approach to generative modeling of random fields that incorporates domain knowledge to supplement limited training data. A constraint-aware variational autoencoder (VAE) with a function decoder is first used to learn compact latent representations of continuous functions that adhere to known physical or statistical constraints, even when training data is sparse or indirect. Generative modeling is then performed in the learned latent space, decoupling constraint enforcement from the sampling process. This decoupling enables expressive multi-step generative methods to be deployed in data-limited settings where existing constrained multi-step approaches are not directly applicable. The richer latent distributions captured by the generative model also overcome limitations of standard VAEs, which rely on simple parametric priors and struggle to represent complex, multimodal, or heavy-tailed distributions over functions. Efficacy is demonstrated on two challenging applications: wind velocity field reconstruction from sparse sensors and material property inference from indirect measurements. Results show the effectiveness of incorporating domain knowledge constraints for data-limited problems and the improved sample quality and robustness of the latent generative modeling approach versus directly sampling a constrained VAE.
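The two-stage pipeline the abstract describes can be sketched with stand-ins: below, a linear PCA encoder/decoder plays the role of the constraint-aware VAE, a full-covariance Gaussian plays the role of the richer latent generative model, and the "physical constraint" is simply that each field has zero mean. All names and choices here are illustrative assumptions, not the paper's actual architecture; the point is only how constraint enforcement (at decode time) decouples from sampling (in latent space).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": rows are 1D random fields obeying a known
# constraint (zero spatial mean, a stand-in for a physical constraint).
n, d, k = 200, 32, 4
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]       # orthonormal modes
z_true = rng.standard_normal((n, k)) * np.array([3.0, 2.0, 1.0, 0.5])
fields = z_true @ basis.T
fields -= fields.mean(axis=1, keepdims=True)               # enforce constraint

# Stage 1 (stand-in for the constraint-aware VAE with function decoder):
# a linear encoder/decoder from PCA of the training fields.
mu = fields.mean(axis=0)
_, _, Vt = np.linalg.svd(fields - mu, full_matrices=False)
decoder = Vt[:k]                                            # (k, d)
encode = lambda x: (x - mu) @ decoder.T
decode = lambda z: z @ decoder + mu

# Stage 2: fit a density in the learned latent space (here a Gaussian;
# the paper's point is that any expressive generative model can go here)
# and draw new latent samples.
z = encode(fields)
z_mean, z_cov = z.mean(axis=0), np.cov(z, rowvar=False)
z_new = rng.multivariate_normal(z_mean, z_cov, size=5)

# Decode, then project onto the constraint set (zero-mean fields):
# sampling never has to know about the constraint.
samples = decode(z_new)
samples -= samples.mean(axis=1, keepdims=True)

print(samples.shape)                                        # (5, 32)
```

Swapping the latent Gaussian for a mixture, flow, or diffusion model changes only stage 2; the decoder and the constraint projection are untouched, which is the decoupling the abstract argues for.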

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Constraint-Aware Flow Matching: Decision Aligned End-to-End Training for Constrained Sampling

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Constraint-Aware Flow Matching integrates constraint projections into the flow matching training objective to align model dynamics with constrained sampling and reduce distributional shift.