pith. machine review for the scientific record.

arxiv: 2605.05586 · v1 · submitted 2026-05-07 · 💻 cs.LG

Recognition: unknown

AeroJEPA: Learning Semantic Latent Representations for Scalable 3D Aerodynamic Field Modeling

Abhijeet Vishwasrao, Adrian Lozano-Duran, Andrea Arroyo Ramo, Federica Tonti, Francisco Giral, Hector Gomez, Mahmoud Golestanian, Ricardo Vinuesa, Sergio Hoyas, Soledad Le Clainche, Steven L. Brunton

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 15:49 UTC · model grok-4.3

classification 💻 cs.LG
keywords aerodynamic surrogate modeling · joint embedding predictive architecture · latent representations · 3D flow fields · machine learning for CFD · design optimization · continuous implicit decoding

The pith

AeroJEPA predicts target latents of aerodynamic flows from geometry and condition latents rather than the full field.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that a joint-embedding predictive architecture can produce scalable surrogates for 3D aerodynamic fields by learning to predict a latent representation of the flow from a latent representation of the input geometry and operating conditions. This predictive step decouples the cost of modeling from the spatial resolution of the output field, which matters because realistic 3D CFD data sets are extremely large. The resulting latent space is encouraged to organize around semantic properties of the flow, so that downstream tasks such as interpolation, linear probing of physical quantities, and latent-space optimization become possible without repeated full-field reconstructions. Evaluation on a high-fidelity high-lift configuration and a broad family of transonic wings shows competitive accuracy together with these additional capabilities.

Core claim

AeroJEPA is a Joint-Embedding Predictive Architecture that encodes geometry and operating conditions into a context latent, predicts the corresponding target latent of the flow field, and optionally decodes the field at arbitrary resolution through a continuous implicit decoder. The prediction objective forces the latent space to capture semantic structure in the aerodynamics, while the separation of latent prediction from field reconstruction removes the direct dependence of model cost on output resolution.
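As a way to fix ideas, here is a minimal structural sketch of the three components named in that claim: a context encoder over geometry and conditions, a latent-to-latent predictor, and a continuous implicit decoder queried at arbitrary coordinates, so output resolution never enters the model size. The PyTorch modules, layer widths, and mean-pooling over the point cloud are illustrative assumptions, not the paper's architecture.

    # Sketch of the JEPA-style surrogate pattern described above (PyTorch).
    # Module names, dimensions, and the crude point-cloud pooling are assumptions.
    import torch
    import torch.nn as nn

    LATENT_DIM = 256  # assumed latent width

    class ContextEncoder(nn.Module):
        """Maps a geometry point cloud plus operating conditions to a context latent."""
        def __init__(self, cond_dim=2, latent_dim=LATENT_DIM):
            super().__init__()
            self.point_mlp = nn.Sequential(nn.Linear(3, 128), nn.GELU(), nn.Linear(128, latent_dim))
            self.cond_mlp = nn.Sequential(nn.Linear(cond_dim, latent_dim), nn.GELU())
            self.head = nn.Linear(2 * latent_dim, latent_dim)

        def forward(self, geom_xyz, cond):            # geom_xyz: (B, N, 3), cond: (B, cond_dim)
            g = self.point_mlp(geom_xyz).mean(dim=1)  # permutation-invariant pooling (illustrative)
            c = self.cond_mlp(cond)
            return self.head(torch.cat([g, c], dim=-1))   # z_ctx: (B, latent_dim)

    class Predictor(nn.Module):
        """Predicts the target (flow) latent from the context latent."""
        def __init__(self, latent_dim=LATENT_DIM):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, latent_dim))
        def forward(self, z_ctx):
            return self.net(z_ctx)                    # predicted target latent: (B, latent_dim)

    class ImplicitDecoder(nn.Module):
        """Continuous decoder: evaluated at arbitrary query coordinates, so the
        number of output points is independent of the model's parameter count."""
        def __init__(self, latent_dim=LATENT_DIM, out_dim=1):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.GELU(),
                                     nn.Linear(256, out_dim))
        def forward(self, z, query_xyz):              # query_xyz: (B, M, 3), any M
            z_tiled = z.unsqueeze(1).expand(-1, query_xyz.shape[1], -1)
            return self.net(torch.cat([z_tiled, query_xyz], dim=-1))  # (B, M, out_dim)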

What carries the argument

The joint-embedding predictive architecture (JEPA) applied to aerodynamics, in which a context encoder on geometry and conditions predicts the latent embedding that a target encoder would produce from the flow field itself.
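That sentence is the training-time picture: the context encoder never sees the flow field, yet its latent, after the predictor, must land where the target encoder's latent sits. A generic form of that objective, with an optional reconstruction term through the implicit decoder, is sketched below. The squared-error losses, the weight lam, the target-encoder signature, and the absence of an explicit anti-collapse regularizer are all assumptions; the paper's exact formulation is not given in the text (a gap the referee flags below).

    # Generic combined objective: latent prediction plus optional field reconstruction.
    # All specifics (loss type, weighting, which latent feeds the decoder) are assumptions.
    import torch
    import torch.nn.functional as F

    def combined_loss(ctx_enc, tgt_enc, predictor, decoder,
                      geom_xyz, cond, flow_points, flow_values, lam=1.0):
        z_ctx = ctx_enc(geom_xyz, cond)            # latent of geometry + operating conditions
        z_tgt = tgt_enc(flow_points, flow_values)  # latent of the flow field itself (hypothetical signature)
        z_hat = predictor(z_ctx)                   # predicted flow latent

        latent_term = F.mse_loss(z_hat, z_tgt)     # prediction is scored in latent space,
                                                   # independent of the field's resolution
        recon_term = F.mse_loss(decoder(z_hat, flow_points), flow_values)
        return latent_term + lam * recon_term      # paper may decode from z_tgt instead of z_hat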

If this is right

  • Field reconstruction cost remains independent of output resolution, allowing high-fidelity 3D grids without proportional increase in model size.
  • Latent representations encode aerodynamic quantities even when those quantities are not supplied as direct supervision during training.
  • Linear probes on the latents recover quantities such as lift or drag coefficients for new designs (a minimal probe sketch follows this list).
  • Arithmetic on latent vectors produces controlled changes in geometry or flow features that can be decoded back to fields.
  • Constrained optimization can be performed entirely in latent space, reducing the number of full CFD evaluations required for design.
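The probing item above is the cheapest of these capabilities to check in principle: freeze the latents, fit a linear model, and measure held-out accuracy. A minimal sketch, assuming scikit-learn ridge regression and arbitrary array shapes; the paper's actual probe setup (regularization strength, which latent is probed) may differ.

    # Ridge linear probe from predicted latents to an integrated coefficient such as CL or CD.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    def probe_coefficient(z_train, y_train, z_test, y_test, alpha=1.0):
        """z_*: (n_samples, latent_dim) predicted latents; y_*: (n_samples,) coefficient values."""
        probe = Ridge(alpha=alpha).fit(z_train, y_train)
        y_pred = probe.predict(z_test)
        return probe, r2_score(y_test, y_pred)

    # Shapes-only example with synthetic arrays (no real data):
    # probe, r2 = probe_coefficient(np.random.randn(500, 256), np.random.randn(500),
    #                               np.random.randn(100, 256), np.random.randn(100))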

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same predictive-latent pattern could be transferred to other high-dimensional physics fields such as structural mechanics or combustion.
  • If the semantic organization generalizes, latent-space search might replace many-query CFD loops in early-stage aircraft design.
  • Combining AeroJEPA latents with gradient-based optimizers could enable real-time design iteration once the initial training is complete.

Load-bearing premise

The learned latent space will organize semantically and stay useful for interpolation, probing, and optimization when applied to geometries and flow conditions outside the two training data sets.

What would settle it

Performance of latent-space linear probes or design optimization drops sharply when the model is tested on a new family of wing shapes or Reynolds-number regime not represented in HiLiftAeroML or SuperWing.

Figures

Figures reproduced from arXiv: 2605.05586 by Abhijeet Vishwasrao, Adrian Lozano-Duran, Andrea Arroyo Ramo, Federica Tonti, Francisco Giral, Hector Gomez, Mahmoud Golestanian, Ricardo Vinuesa, Sergio Hoyas, Soledad Le Clainche, Steven L. Brunton.

Figure 1: Overview of the AeroJEPA framework. The context encoder maps the geometry point cloud …
Figure 2: HiLiftAeroML reconstruction for test geometry LHC013 …
Figure 3: Main HiLift latent-space results. Top: PCA projection of the context latents comparing …
Figure 4: Proof-of-concept latent-space optimization on SuperWing.
Figure 5: Additional qualitative pressure views for test geometry LHC013 …
Figure 6: Ridge linear probing from predicted latents to aerodynamic coefficients on HiLiftAeroML.
Figure 7: Latent interpolation on HiLiftAeroML between different operating conditions and geometry …
Figure 8: Pressure-coefficient profile extracted from the decoded field on a wing section for case …
Figure 9: Parity plots for CL and CD computed from the decoded predicted fields. Unlike the latent probing experiment, these coefficients are obtained by estimating aerodynamic forces directly from the reconstructed surface solution, showing that AeroJEPA yields accurate integrated force predictions from the decoded field itself. …
Figure 10: Recovery of HiLift design variables from the context latents using ridge regression.
Figure 11: Latent-arithmetic analysis on HiLiftAeroML. Each trajectory traverses the context latent …
Figure 12: Representative decoded-field comparisons on SuperWing for …
Figure 13: Parity plots for aerodynamic forces estimated from the decoded SuperWing fields. …
Figure 14: Latent-space optimization trajectory. Two-dimensional PCA projection of the training context latents z_ctx, coloured by the corresponding dataset CL/CD. The dashed contour is the projection of the 95% Mahalanobis trust region used to constrain the search. Light grey curves are the SLSQP iterates of the eight random restarts; the orange curve highlights the restart selected as the global optimum. …
Figure 15: Concept-vector arithmetic in the HighLift context latent. 4 × 4 response matrix obtained by walking the train-mean latent µ_ctx along the unit-norm linear-probe direction of one design parameter at a time. Rows: latent direction walked (IB Flap, OB Flap, IB Slat, OB Slat). Columns: design parameter read out by the corresponding linear probe. Each panel reports the sensitivity in train standard deviations …
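The caption of Figure 14 is specific enough to sketch the search loop it describes: SLSQP from several random restarts, constrained to a 95% Mahalanobis trust region around the training context latents, maximizing a CL/CD read-out. In the sketch below, predict_cl_over_cd is a hypothetical stand-in for whatever probe or decoded-field estimate the paper optimizes, and the chi-square choice of trust-region radius is an assumption.

    # Constrained latent-space optimization in the pattern of Figure 14.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    def optimize_in_latent_space(z_train, predict_cl_over_cd, n_restarts=8, seed=0):
        d = z_train.shape[1]
        mu = z_train.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(z_train, rowvar=False) + 1e-6 * np.eye(d))
        radius2 = chi2.ppf(0.95, df=d)                      # assumed 95% Mahalanobis radius

        def maha2(z):                                       # squared Mahalanobis distance to the train mean
            diff = z - mu
            return float(diff @ cov_inv @ diff)

        trust = {"type": "ineq", "fun": lambda z: radius2 - maha2(z)}  # stay inside the trust region
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_restarts):                          # multiple random restarts, as in the caption
            z0 = z_train[rng.integers(len(z_train))]
            res = minimize(lambda z: -predict_cl_over_cd(z), z0,
                           method="SLSQP", constraints=[trust])
            if best is None or res.fun < best.fun:
                best = res
        return best.x, -best.fun                             # candidate latent and its CL/CD estimate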
Original abstract

Aerodynamic surrogate models are increasingly used to replace repeated high-fidelity CFD evaluations in many-query design settings, but current approaches still face two important limitations: they often scale poorly to the very large fields arising in realistic 3D aerodynamics, and they rarely produce latent representations that are directly useful for analysis and design. We introduce AeroJEPA, a Joint-Embedding Predictive Architecture for aerodynamic field modeling that addresses both issues. Rather than predicting the full flow field directly from geometry, AeroJEPA predicts a target latent representation of the flow from a context latent representation of the geometry and operating conditions, and optionally reconstructs the field through a continuous implicit decoder. This formulation decouples latent prediction from field resolution while encouraging the latent space to organize semantically. We evaluate AeroJEPA on two complementary datasets: HiLiftAeroML, which stresses the method in a high-fidelity regime with extremely large boundary-layer fields, and SuperWing, which tests large-scale generalization and latent-space optimization over a broad family of transonic wings. Across these benchmarks, AeroJEPA is competitive as a continuous surrogate for aerodynamic fields, scales naturally to high-resolution outputs, and learns context and predicted latents that encode geometry and aerodynamic quantities not used directly as supervision. We further show that the resulting latent space supports controlled interpolation, linear probing, concept-vector arithmetic, and a constrained design latent-optimization experiment. These results suggest that predictive latent learning is a promising direction for scalable and design-meaningful aerodynamic surrogate modeling.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces AeroJEPA, a Joint-Embedding Predictive Architecture for 3D aerodynamic field modeling. Rather than predicting full flow fields directly, the model predicts a target latent representation of the flow from a context latent derived from geometry and operating conditions, followed by an optional continuous implicit decoder for field reconstruction. This is evaluated on the HiLiftAeroML dataset (high-fidelity boundary-layer fields) and SuperWing dataset (transonic wings), with claims of competitive surrogate accuracy, natural scaling to high-resolution outputs, and semantically organized latents that encode geometry and aerodynamic quantities without direct supervision, enabling interpolation, linear probing, concept arithmetic, and constrained design optimization.

Significance. If the central claims hold, AeroJEPA would offer a useful advance for many-query aerodynamic design by providing both scalable continuous field surrogates and latent representations that support downstream analysis and optimization tasks. The predictive latent formulation is a reasonable way to encourage semantic organization independent of output resolution, and the dual-dataset evaluation (high-fidelity and broad-family) is a strength. Reproducible code or parameter-free derivations are not mentioned, but the empirical demonstrations on two distinct benchmarks add value if the latent utility generalizes.

major comments (2)
  1. [§4] §4 (Experiments): All demonstrations of latent-space semantic organization, interpolation, probing, concept arithmetic, and design optimization are performed exclusively on splits of HiLiftAeroML and SuperWing. No tests on out-of-family geometries, unseen Reynolds-number regimes, or 3D configurations absent from training are reported, which directly undermines the claim that the latents yield 'design-meaningful' surrogates.
  2. [§3.2] §3.2 (Architecture): The JEPA-style predictor is described at a high level, but the precise loss formulation, weighting between context/target encoders and the optional decoder, and any regularization that enforces semantic organization are not given as explicit equations. This makes it impossible to verify the claimed decoupling of latent prediction from field resolution or to reproduce the training dynamics.
minor comments (2)
  1. [Tables 1-2] Table 1 and Table 2: quantitative metrics are reported without error bars or statistical significance tests across multiple random seeds, weakening the 'competitive' claim.
  2. [Figure 5] Figure 5 (latent visualizations): axis labels and color scales are not fully described in the caption, making it difficult to interpret the encoded aerodynamic quantities.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed review of our manuscript. We address each major comment below and indicate where revisions will be incorporated to improve clarity and strengthen the presentation.

read point-by-point responses
  1. Referee: [§4] §4 (Experiments): All demonstrations of latent-space semantic organization, interpolation, probing, concept arithmetic, and design optimization are performed exclusively on splits of HiLiftAeroML and SuperWing. No tests on out-of-family geometries, unseen Reynolds-number regimes, or 3D configurations absent from training are reported, which directly undermines the claim that the latents yield 'design-meaningful' surrogates.

    Authors: We acknowledge that all reported evaluations of latent-space properties use standard train/test splits drawn from the HiLiftAeroML and SuperWing datasets rather than out-of-family geometries or entirely unseen Reynolds-number regimes. These datasets were selected precisely because they contain substantial intra-family geometric and aerodynamic diversity (high-fidelity boundary-layer variations in HiLiftAeroML; broad transonic wing families in SuperWing), which enabled the demonstrations of interpolation, linear probing, concept arithmetic, and constrained optimization. We agree, however, that the absence of cross-family or out-of-distribution tests limits the strength of the claim that the latents are broadly 'design-meaningful.' In the revised manuscript we will add an explicit limitations paragraph in Section 4 that qualifies the current scope of generalization and outlines the need for future out-of-distribution benchmarks. revision: partial

  2. Referee: [§3.2] §3.2 (Architecture): The JEPA-style predictor is described at a high level, but the precise loss formulation, weighting between context/target encoders and the optional decoder, and any regularization that enforces semantic organization are not given as explicit equations. This makes it impossible to verify the claimed decoupling of latent prediction from field resolution or to reproduce the training dynamics.

    Authors: We thank the referee for highlighting this omission. The current description of the JEPA-style predictor is indeed high-level and lacks the explicit loss equations, weighting coefficients, and regularization terms. In the revised manuscript we will insert the full mathematical formulation of the predictor loss, the combined objective with the optional decoder, and any regularization used to promote semantic organization directly into Section 3.2. These additions will make the claimed decoupling of latent prediction from output resolution verifiable and will enable exact reproduction of the reported training dynamics. revision: yes

Circularity Check

0 steps flagged

No circularity in derivation chain; method and claims are self-contained.

full rationale

The paper presents AeroJEPA as an architectural extension of JEPA-style latent prediction applied to 3D aerodynamic fields, with the core formulation (context latent to target latent prediction, optional implicit decoder) motivated independently to decouple resolution and encourage semantic organization. No equations or derivations are shown that reduce claimed performance, latent encoding properties, or downstream utility (interpolation, probing, optimization) to quantities defined by the model's own fitted parameters or by self-citation chains. All evaluations are empirical on the two specified datasets using standard training and analysis techniques that remain falsifiable outside the fitted values. This is the normal case of an independent architectural proposal with empirical support.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

The central claim rests on the assumption that aerodynamic flow fields admit compact latent representations whose predictive relationships encode semantic aerodynamic quantities; this is a domain assumption rather than a derived result.

free parameters (1)
  • latent dimension
    Dimensionality of context and target latents chosen to balance expressivity and scalability; value not stated in abstract.
axioms (1)
  • domain assumption: Aerodynamic fields possess semantically meaningful low-dimensional structure that can be learned via joint-embedding prediction without direct supervision on those semantics.
    Invoked to justify why the predicted latents encode geometry and aerodynamic quantities not used as supervision.
invented entities (1)
  • AeroJEPA architecture (no independent evidence)
    purpose: Joint-embedding predictive model that decouples latent prediction from field resolution.
    New method introduced by the paper; no independent evidence outside the reported experiments.

pith-pipeline@v0.9.0 · 5617 in / 1399 out tokens · 27884 ms · 2026-05-09T15:49:12.172531+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

52 extracted references · 27 canonical work pages · 3 internal anchors

  1. [1] Blending machine learning and sequential data assimilation over latent spaces for surrogate modeling of Boussinesq systems. Physica D: Nonlinear Phenomena, 2023.
  2. [2] Self-supervised learning from images with a joint-embedding predictive architecture. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  3. [3] Surrogate model development for optimized blended-wing-body aerodynamics. Journal of Aircraft, 2023.
  4. [4] Neural operators for accelerating scientific simulations and design. Nature Reviews Physics, 2024.
  5. [5] LeJEPA: Provable and scalable self-supervised learning without the heuristics. arXiv preprint arXiv:2511.08544, 2025.
  6. [6] LeWorldModel: Stable end-to-end joint-embedding predictive architecture from pixels. arXiv preprint arXiv:2603.19312, 2026.
  7. [7] HEP-JEPA: A foundation model for collider physics using joint embedding predictive architecture. arXiv preprint arXiv:2502.03933.
  8. [8] Dimensionality-reduction-based surrogate models for real-time design space exploration of a jet engine compressor blade. Aerospace Science and Technology, 2021.
  9. [9] Neural fields for rapid aircraft aerodynamics simulations. Scientific Reports, 2024.
  10. [10] Generalised latent assimilation in heterogeneous reduced spaces with machine learning surrogate models. Journal of Scientific Computing, 2023.
  11. [11] WirelessJEPA: A Multi-Antenna Foundation Model using Spatio-temporal Wireless Latent Predictions. arXiv preprint arXiv:2601.20190.
  12. [12] Value-guided action planning with JEPA world models. arXiv preprint arXiv:2601.00844.
  13. [13] Video representation learning with joint-embedding predictive architectures. arXiv preprint arXiv:2412.10925.
  14. [14] Rapid airfoil design optimization via neural networks-based parameterization and surrogate modeling. Aerospace Science and Technology, 2021.
  15. [15] Discretization-independent surrogate modeling of physical fields around variable geometries using coordinate-based networks. Data-Centric Engineering, 2025.
  16. [16] Cell-JEPA: Latent Representation Learning for Single-Cell Transcriptomics. arXiv preprint arXiv:2602.02093.
  17. [17] A-JEPA: Joint-embedding predictive architecture can listen. arXiv preprint arXiv:2311.15830.
  18. [18] Engineering design via surrogate modelling: a practical guide. 2008.
  19. [19] Optimization using surrogate models and partially converged computational fluid dynamics simulations. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2006.
  20. [20] Learning and leveraging world models in visual representation learning. arXiv preprint arXiv:2403.00504.
  21. [21] Training a neural-network-based surrogate model for aerodynamic optimisation using a Gaussian process. International Journal of Computational Fluid Dynamics, 2022.
  22. [22] Accelerating phase-field predictions via recurrent neural networks learning the microstructure evolution in latent space. Computer Methods in Applied Mechanics and Engineering, 2022.
  23. [23] Learning symmetry-independent jet representations via jet-based joint embedding predictive architecture. arXiv preprint arXiv:2412.05333.
  24. [24] Learning nonlinear operators in latent spaces for real-time predictions of complex dynamics in physical systems. Nature Communications, 2024.
  25. [25] Connecting joint-embedding predictive architecture with contrastive self-supervised learning. Advances in Neural Information Processing Systems.
  26. [26] DMT-JEPA: Discriminative masked targets for joint-embedding predictive architecture. arXiv preprint arXiv:2405.17995.
  27. [27] Graph-level representation learning with joint-embedding predictive architectures. arXiv preprint arXiv:2309.16014.
  28. [28] Joint embedding predictive architectures focus on slow features. arXiv preprint arXiv:2211.10831.
  29. [29] Toward aerodynamic surrogate modeling based on β-variational autoencoders. Physics of Fluids, 2024.
  30. [30] β-variational autoencoders and transformers for reduced-order modelling of fluid flows. Nature Communications, 2024.
  31. [31] CrossJEPA: Cross-Modal Joint-Embedding Predictive Architecture for Efficient 3D Representation Learning from 2D Images. arXiv preprint arXiv:2511.18424.
  32. [32] VL-JEPA: Joint embedding predictive architecture for vision-language. arXiv preprint arXiv:2512.10942, 2025.
  33. [33] 3D-JEPA: A joint embedding predictive architecture for 3D self-supervised representation learning. arXiv preprint arXiv:2409.15803.
  34. [34] PI-JEPA: Label-Free Surrogate Pretraining for Coupled Multiphysics Simulation via Operator-Split Latent Prediction. arXiv preprint arXiv:2604.01349.
  35. [35] Representation Learning for Spatiotemporal Physical Systems. arXiv preprint arXiv:2603.13227.
  36. [36] GeoTransolver: Learning Physics on Irregular Domains Using Multi-scale Geometry Aware Physics Attention Transformer. arXiv preprint arXiv:2512.20399.
  37. [37] Transolver: A fast transformer solver for PDEs on general geometries. arXiv preprint arXiv:2402.02366.
  38. [38] AdaField: Generalizable Surface Pressure Modeling with Physics-Informed Pre-training and Flow-Conditioned Adaptation. arXiv preprint arXiv:2601.07139.
  39. [39] Factorized implicit global convolution for automotive computational fluid dynamics prediction. arXiv preprint arXiv:2502.04317.
  40. [40] T-JEPA: Augmentation-free self-supervised learning for tabular data. arXiv preprint arXiv:2410.05016.
  41. [41] Learning state-space models of dynamic systems from arbitrary data using joint embedding predictive architectures. IFAC-PapersOnLine, 2025.
  42. [42] LaT-PFN: A joint embedding predictive architecture for in-context time-series forecasting. arXiv preprint arXiv:2405.10093.
  43. [43] A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses. Progress in Aerospace Sciences, 2018.
  44. [44] Heterogeneous data-driven aerodynamic modeling based on physical feature embedding. Chinese Journal of Aeronautics, 2024.
  45. [45] High-Fidelity CFD Data Generation for HiLiftAeroML using Solution-Adapted WMLES. AIAA SCITECH 2026 Forum.
  46. [46] SuperWing: a comprehensive transonic wing dataset for data-driven aerodynamic design. arXiv preprint arXiv:2512.14397.
  47. [47] Point Transformer V3: Simpler, Faster, Stronger. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  48. [48] Neural message passing for quantum chemistry. International Conference on Machine Learning, 2017.
  49. [49] Scalable diffusion models with transformers. Proceedings of the IEEE/CVF International Conference on Computer Vision.
  50. [50] Agentic Exploration of PDE Spaces using Latent Foundation Models for Parameterized Simulations. arXiv preprint arXiv:2604.09584.
  51. [51] Explainable AI: Learning from the Learners. arXiv preprint arXiv:2601.05525.
  52. [52] Towards extraction of orthogonal and parsimonious non-linear modes from turbulent flows. Expert Systems with Applications, 2022.